92 Flink Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

5 - 15 Lacs

Chennai, Bengaluru

Work from Office

Role: Digital Twin Developer
Experience: 3 to 8 years
Work Mode: Hybrid
Work Location: Bangalore, Chennai

Job Description:
1. Simulation & Digital Twin: Omniverse, Unity
2. Programming: Python, C++
3. 3D Modeling & USD Formats: Maya/Blender, USD, FBX, glTF
4. Sensor Simulation: LiDAR, RADAR, Cameras
5. Cloud & Streaming: Kafka, Flink, Cloud Deployment
6. ML Understanding: Ground truth data, synthetic variations
7. Spatial Mapping: HD maps, localization, coordinate systems
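The streaming items above (Kafka, Flink) usually amount to publishing simulated sensor frames onto a message bus for downstream processing. A minimal sketch in Python, assuming the kafka-python client and a local broker; the topic name and payload shape are illustrative, not taken from the posting:

```python
# Sketch: publish simulated LiDAR frame summaries to Kafka with kafka-python.
# Broker address, topic name, and payload shape are illustrative assumptions.
import json
import random
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for seq in range(100):
    # A fake per-frame summary standing in for real sensor output
    frame = {
        "sensor": "lidar-front",
        "seq": seq,
        "ts": time.time(),
        "point_count": random.randint(90_000, 120_000),
    }
    producer.send("sensor.telemetry", value=frame)
    time.sleep(0.1)  # ~10 Hz, a typical LiDAR frame rate

producer.flush()
```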

Posted 2 months ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Kolkata, West Bengal, India

On-site

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning (see the orchestration sketch below).
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.
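The batch side of a role like this is typically wired together with an orchestrator; Airflow is named in the required skills. A minimal sketch of a daily ETL DAG, assuming Airflow 2.x; the DAG id and task bodies are placeholders:

```python
# Sketch: a daily batch-pipeline skeleton in Apache Airflow 2.x.
# DAG id, schedule, and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from source systems")


def transform():
    print("clean, validate, and model the records")


def load():
    print("write curated tables to the warehouse")


with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t  # linear extract -> transform -> load
```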

Posted 2 months ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Your Impact:
- Data ingestion, integration, and transformation
- Data storage and computation frameworks; performance optimization
- Analytics and visualization
- Infrastructure and cloud computing
- Data management platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch and real time
- Build functionality for data analytics, search, and aggregation

Preferred candidate profile:
- Minimum 2 years of experience in Big Data technologies.
- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines.
- Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.
- Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of Java, Scala, or Python; Java preferred.
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed, working knowledge of data platform-related services on Azure.

Set Yourself Apart With:
- Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and hands-on experience with database technologies (Oracle, MySQL, SQL Server, Postgres).
- Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
- Knowledge of distributed messaging frameworks (ActiveMQ / RabbitMQ / Solace), search and indexing, and microservices architectures.
- Performance tuning and optimization of data pipelines.
- Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make a direct impact on our clients and their businesses through your work.

Posted 2 months ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Pune, Maharashtra, India

On-site

REQUIRED SKILLS & QUALIFICATIONS

TECHNICAL SKILLS:
- Cloud & Data Lake: Azure Data Lake (ADLS Gen2), Databricks, Delta Lake, Iceberg
- Reporting Tools: Power BI, Tableau, or a similar toolset
- Streaming & Messaging: Confluent Kafka, Apache Flink, Azure Event Hubs
- Big Data Processing: Apache Spark, Databricks, Flink SQL, Delta Live Tables
- Programming: Python (PySpark, Pandas), SQL
- Storage & Formats: Parquet, Avro, ORC, JSON
- Data Modeling: Dimensional modeling, Data Vault, Lakehouse architecture

MINIMUM QUALIFICATIONS
- 8+ years of end-to-end design and architecture of enterprise-level data platforms and reporting/analytical solutions.
- 5+ years of expertise in real-time and batch reporting and analytical solution architecture.
- 4+ years of experience with Power BI, Tableau, or similar technology solutions.
- 3+ years of experience with design and architecture of big data solutions.
- 3+ years of hands-on experience with enterprise-level streaming data solutions using Python, Kafka/Flink, and Iceberg.

ADDITIONAL QUALIFICATIONS
- 8+ years of experience with dimensional modeling and data lake design methodologies.
- 8+ years of experience with relational and non-relational databases (e.g., SQL Server, Cosmos DB).
- 3+ years of experience with readiness, provisioning, security, and best practices on the Azure data platform, and orchestration with Data Factory.
- Experience working with business stakeholders on requirements and use-case analysis.
- Strong communication and collaboration skills with creative problem-solving ability.

PREFERRED QUALIFICATIONS
- Bachelor's degree in computer science or equivalent work experience.
- Experience with Agile/Scrum methodology.
- Experience in the tax and accounting domain a plus.
- Azure Data Engineer certification a plus.

Posted 2 months ago

Apply

6.0 - 7.0 years

11 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Location: Remote / Pan India (Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune)
Notice Period: Immediate

iSource Services is hiring for one of its clients for the position of Java Kafka Developer.

We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Responsibilities:
- Develop Kafka producers, consumers, and stream processing applications (a minimal consumer sketch follows this listing).
- Implement Kafka Connect connectors and configure Kafka clusters.
- Optimize Kafka performance and troubleshoot related issues.
- Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB.
- Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- Confluent Certified Developer for Apache Kafka certification.
- Strong programming skills in Java/Python.
- In-depth Kafka architecture and Confluent platform experience.
- Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus.
- Experience with data warehousing and data lake technologies.
- Experience with CI/CD pipelines and DevOps practices.
- Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
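As a rough illustration of the consumer side of this work, here is a minimal sketch using the confluent-kafka Python client (the posting accepts Python alongside Java); the broker address, group id, and topic are assumptions for the example:

```python
# Sketch: a Kafka consumer poll loop with the confluent-kafka client.
# Broker address, group id, and topic name are illustrative assumptions.
from confluent_kafka import Consumer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # block up to 1s for the next message
        if msg is None:
            continue
        if msg.error():
            # Real code would branch on specific error codes
            print(f"consumer error: {msg.error()}")
            continue
        print(f"key={msg.key()} value={msg.value().decode('utf-8')}")
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```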

Posted 2 months ago

Apply

3.0 - 7.0 years

5 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Your Job:
- Understand the business case and translate it into a holistic solution involving AWS cloud services, PySpark, EMR, Python, data ingestion, and cloud DB (Redshift / Postgres) PL/SQL development for high-volume data sets.
- Prepare data warehouse design artifacts based on given requirements (ETL framework design, data modeling, source-to-target mapping); monitor DB queries for tuning and optimization opportunities.
- Apply proven experience with large, complex database projects in environments producing high-volume data.
- Demonstrate problem-solving skills; use various root-cause-analysis methods; document identified problems and their resolutions; make recommendations regarding enhancements and/or improvements.
- Provide appropriate consulting, interfacing, and standards relating to database management; monitor transaction activity and utilization.
- Analyze and tune performance issues.
- Design and develop the data warehouse, including logical and physical schema design.

Other Responsibilities:
- Perform all activities in a safe and responsible manner and support all Environmental, Health, Safety & Security requirements and programs.
- Maintain customer/stakeholder focus; build strong relationships with application teams, cross-functional IT, and global/local IT teams.

Required Qualifications:
- Bachelor's or master's degree in information technology, electrical engineering, or a similar relevant field.
- Proven experience (3 years minimum) with ETL development, design, performance tuning, and optimization.
- Very good knowledge of data warehouse architecture approaches and trends, and high interest in applying and further developing that knowledge, including understanding of dimensional modelling and ERD design approaches.
- Working experience in Kubernetes and Docker administration is an added advantage.
- Good experience with AWS services, Big Data, PySpark, EMR, Python, and cloud DB Redshift.
- Proven experience with large, complex database projects in environments producing high-volume data; proficiency in SQL and PL/SQL.
- Experience in developing streaming applications, e.g. SAP Data Intelligence, Spark Streaming, Flink, Storm, etc.
- Excellent conceptual abilities paired with very good technical documentation skills, e.g. the ability to understand and document complex data flows as part of business/production processes and infrastructure.
- Familiarity with SDLC concepts and processes.

Posted 2 months ago

Apply

5.0 - 8.0 years

20 - 22 Lacs

Bengaluru

Work from Office

P1: HTML, ReactJS, JavaScript, TypeScript, modular CSS, unit testing, Redux, UI debugging skills, writing efficient, quality code, integrating with backend APIs, Figma or a similar design tool for UI mocks.
P2: Logging and monitoring tools such as Quantum Metric, Splunk, Grafana; UI performance engineering and security.

Posted 3 months ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Senior Data Engineer (4-6 Years Experience)
Location: Kotak Life HO
Department: Data Science & Analytics
Employment Type: Full-Time

About the Role:
We are seeking a highly skilled Data Engineer with 4-6 years of hands-on experience in designing and developing scalable, reliable, and efficient data solutions. The ideal candidate will have a strong background in cloud platforms (AWS or Azure), experience in building both batch and streaming data pipelines, and familiarity with modern data architectures, including event-driven and medallion architectures.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines (batch and streaming) to process structured and unstructured data from various sources.
- Develop and implement solutions based on event-driven architectures using technologies like Kafka, Event Hubs, or Kinesis.
- Architect and manage data workflows based on the medallion architecture (Bronze, Silver, Gold layers; a minimal sketch follows this listing).
- Work with cloud platforms (AWS or Azure) to manage data infrastructure, storage, compute, and orchestration services.
- Leverage cloud-native or open-source tools for data transformation, orchestration, monitoring, and quality checks.
- Collaborate with data scientists, analysts, and product managers to deliver high-quality data solutions.
- Ensure best practices in data governance, security, lineage, and observability.

Required Skills & Qualifications:
- 4-6 years of professional experience in data engineering or related roles.
- Strong experience with cloud platforms: AWS (e.g., S3, Glue, Lambda, Redshift) or Azure (e.g., Data Lake, Synapse, Data Factory, Functions).
- Proven expertise in building batch and streaming pipelines using tools like Spark, Flink, Kafka, Kinesis, or similar.
- Practical knowledge of event-driven architectures and experience with message/event brokers.
- Hands-on experience implementing the medallion architecture or similar layered data architectures.
- Familiarity with data orchestration tools (e.g., Airflow, Azure Data Factory, AWS Step Functions).
- Proficiency in SQL and Python or Scala for data processing and pipeline development.
- Exposure to open-source tools in the modern data stack (e.g., dbt, Delta Lake, Apache Hudi, Great Expectations).

Preferred Qualifications:
- Experience with containerization and CI/CD for data workflows (Docker, GitHub Actions, etc.).
- Knowledge of data quality frameworks and observability tooling.
- Experience with Delta Lake or Lakehouse implementations.
- Strong problem-solving skills and ability to work in fast-paced environments.

What We Offer:
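For orientation, a minimal sketch of the Bronze/Silver/Gold layering in PySpark; paths and columns are illustrative, and a production setup would typically write Delta tables rather than plain Parquet:

```python
# Sketch: Medallion-style layering in PySpark. Raw events land in bronze,
# are cleaned into silver, and aggregated into gold. Paths and columns are
# hypothetical; production pipelines usually use Delta Lake tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: ingest raw source data as-is
bronze = spark.read.json("s3://lake/raw/events/")  # hypothetical source path
bronze.write.mode("append").parquet("s3://lake/bronze/events/")

# Silver: de-duplicated, validated, typed records
silver = (
    bronze.dropDuplicates(["event_id"])
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.mode("overwrite").parquet("s3://lake/silver/events/")

# Gold: business-level aggregates ready for analytics
gold = silver.groupBy("event_date", "event_type").agg(F.count("*").alias("events"))
gold.write.mode("overwrite").parquet("s3://lake/gold/daily_event_counts/")
```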

Posted 3 months ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Experience with:
- Cloud-based big data projects (preferably in AWS or Azure)
- Cloud-based data warehouses (Greenplum, Redshift, Azure SQL Data Warehouse, etc.)
- Data analysis products, BI systems, or data mining products
- AWS-based DevOps, test, and solution design
- Cloud-native product testing
- Data Lake technologies: data storage formats (Parquet, ORC, Avro), query engines (Athena, Presto, Dremio), data pipelines (Flink, Spark)

Added advantages:
- Bash, Python, GoLang
- Smart home deployments and use cases
- Building pipelines to ingest terabytes of data spanning billions of rows
- Spirent TestCenter / Ixia network test tools
- Pre-sales experience
- Machine learning experience is a strong plus

Posted 3 months ago

Apply

9.0 - 15.0 years

9 - 15 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Compliance Engineering is a global team comprising over 300 engineers and scientists dedicated to solving the most complex, mission-critical problems. We:
- Build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm.
- Have access to the latest technology and to massive amounts of structured and unstructured data.
- Leverage modern frameworks to build responsive and intuitive front-end and Big Data applications.

The firm is making a significant investment to uplift and rebuild the Compliance application portfolio in 2023. To achieve this, Compliance Engineering is looking to fill several full-stack engineer roles across different teams.

How You Will Fulfill Your Potential
As a member of our team, you will:
- Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps.
- Manage and lead a team of 8+ junior and senior software developers across 1-3 global locations.
- Be instrumental in implementing processes and procedures to maximize the quality and efficiency of the team.
- Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products.
- Work closely with engineers to review DB designs, queries, and other ETL processes.
- Leverage various technologies, including Java, Flink, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) / NoSQL databases.
- Innovate and incubate new ideas.

Qualifications
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- 9+ years of experience in software development, including management experience.
- Experience in developing and designing end-to-end solutions to enterprise standards, including automated testing and SDLC.
- Sound knowledge of DBMS concepts and database architecture; experienced in ETL/data pipeline development.
- Experience in query tuning/optimization.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
- Knowledge of the financial industry is desirable but not essential.

Desired Experience (Can Set You Apart From Other Candidates)
- UI/UX development.
- API design, such as to create interconnected services.
- Message buses or real-time processing.
- Relational databases.
- Knowledge of the financial industry and compliance or risk functions.
- Influencing stakeholders.

Posted 3 months ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Compliance Engineering is a global team comprising over 300 engineers and scientists dedicated to solving the most complex, mission-critical problems. We:
- Build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm.
- Have access to the latest technology and to massive amounts of structured and unstructured data.
- Leverage modern frameworks to build responsive and intuitive front-end and Big Data applications.

The firm is making a significant investment to uplift and rebuild the Compliance application portfolio in 2023. To achieve this, Compliance Engineering is looking to fill several full-stack engineer roles across different teams.

How You Will Fulfill Your Potential
As a member of our team, you will:
- Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps.
- Manage and lead a team of 8+ junior and senior software developers across 1-3 global locations.
- Be instrumental in implementing processes and procedures to maximize the quality and efficiency of the team.
- Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products.
- Work closely with engineers to review DB designs, queries, and other ETL processes.
- Leverage various technologies, including Java, Flink, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) / NoSQL databases.
- Innovate and incubate new ideas.

Qualifications
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- 9+ years of experience in software development, including management experience.
- Experience in developing and designing end-to-end solutions to enterprise standards, including automated testing and SDLC.
- Sound knowledge of DBMS concepts and database architecture; experienced in ETL/data pipeline development.
- Experience in query tuning/optimization.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
- Knowledge of the financial industry is desirable but not essential.

Desired Experience (Can Set You Apart From Other Candidates)
- UI/UX development.
- API design, such as to create interconnected services.
- Message buses or real-time processing.
- Relational databases.
- Knowledge of the financial industry and compliance or risk functions.
- Influencing stakeholders.

Posted 3 months ago

Apply

5.0 - 10.0 years

10 - 12 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello candidates, we are hiring!

Job Position: Data Streaming Engineer
Experience: 5+ years
Location: Mumbai, Pune, Chennai, Bangalore
Work mode: Hybrid (3 days WFO)

JOB DESCRIPTION
Data Streaming @ offshore:
• Flink, Python
• Data lake systems (OLAP systems)
• SQL (should be able to write complex SQL queries)
• Orchestration (Apache Airflow is preferred)
• Hadoop (Spark and Hive: optimization of Spark and Hive apps)
• Snowflake (good to have)
• Data quality (good to have)
• File storage (S3 is good to have)

NOTE - Candidates can share their resume at shrutia.talentsketchers@gmail.com

Posted 3 months ago

Apply

13.0 - 20.0 years

40 - 45 Lacs

Bengaluru

Work from Office

Title: Principal Architect - Platform & Application Architecture
Location: Onsite, Bangalore
Experience: 15+ years in software and data platform architecture and technology strategy, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components).
- Define and document data blueprints, data domain models, and architectural standards.
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte.
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster).
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi).
- Develop efficient data strategies for both OLAP and OLTP workloads.
- Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage-tracking frameworks.
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units.
5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines.
- Ensure observability, reliability, and cost efficiency of the platform.
- Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals.
- Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles.
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js.
- Strong hands-on experience building scalable data platforms in on-premise/hybrid/cloud environments.
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg.
- Familiarity with data mesh, data fabric, and lakehouse paradigms.
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles.
- Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms.
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels.
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.

Posted 3 months ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India.

Data Engineer Lead
- Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory).
- Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink.
- Experience with software support for applications written in Python and SQL.
- Administration, configuration, and maintenance of Snowflake and dbt.
- Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub.
- Debugging issues, root cause analysis, and applying fixes.
- Management and maintenance of ETL processes (bug fixing and batch job monitoring).

Training & Certification
- Apache Kafka Administration
- Snowflake Fundamentals/Advanced Training

Experience
- 8 years of experience in a technical role working with AWS.
- At least 2 years in a leadership or management role.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its application process accessible to all users; accommodation requests can be made through the contact listed on the original posting. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 3 months ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a software engineer to join the OCI Security & Compliance Platform team. The platform and its algorithms monitor and detect threats, data breaches, and other malicious activity using machine learning and data science technologies. These services help organizations maintain their security and compliance posture. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Spark, Kafka, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Career Level: IC2

Responsibilities:
- Develop a highly available and scalable platform that aggregates and analyzes streams of events within a small window of durability (see the windowed-aggregation sketch below).
- Design, deploy, and manage large-scale data systems and services built on OCI.
- Develop, maintain, and tune threat detection algorithms.
- Develop test beds and tools to help reduce noise and improve time-to-detect for threats.

Desired Skills and Experience:
- 1+ years of hands-on, large-scale cloud application software development.
- 1+ years of experience in cloud infrastructure security and risk assessment.
- 1+ years of hands-on experience with three of the following technologies: Kafka, Redis, AWS, Kubernetes, REST APIs, Linux.
- 1+ years of experience using and building highly available streaming data solutions like Flink or Spark Streaming.
- 1+ years of experience building applications on Oracle Cloud Infrastructure.
- Critical thinking: the ability to track down complex data and engineering issues and analyze data to solve problems.
- Experience with development methodologies with short release cycles.
- Excellent problem-solving and communication skills with both technical and non-technical audiences.
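A minimal sketch of the kind of small-window stream aggregation described above, using PyFlink; the inline collection stands in for a real Kafka or OCI streaming source, and the event names are made up:

```python
# Sketch: count security events per type over 10-second tumbling windows
# with PyFlink. The inline collection is a stand-in for a real stream source.
from pyflink.common import Time
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.window import TumblingProcessingTimeWindows

env = StreamExecutionEnvironment.get_execution_environment()

events = env.from_collection(
    [("login_failure", 1), ("port_scan", 1), ("login_failure", 1)]
)

(
    events.key_by(lambda e: e[0])                                 # key by event type
    .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))  # short window
    .reduce(lambda a, b: (a[0], a[1] + b[1]))                    # per-window count
    .print()
)

env.execute("threat-event-counts")
```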

Posted 3 months ago

Apply

6.0 - 10.0 years

10 - 17 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Description:
We are looking for a skilled Data / Analytics Engineer with hands-on experience in vector databases and search optimization techniques. You will help build scalable, high-performance infrastructure to support AI-powered applications like semantic search, recommendation systems, and RAG pipelines.

Key Responsibilities:
- Optimize vector search algorithms for performance and scalability.
- Build pipelines to process high-dimensional embeddings (e.g., BERT, CLIP, OpenAI).
- Implement ANN indexing techniques like HNSW, IVF, PQ (a minimal HNSW sketch follows this listing).
- Integrate vector search with data platforms and APIs.
- Collaborate with cross-functional teams (data scientists, engineers, product).
- Monitor and resolve latency, throughput, and scaling issues.

Must-Have Skills:
- Python
- AWS
- Vector databases (e.g., Elasticsearch, FAISS, Pinecone)
- Vector search / similarity search
- ANN search algorithms: HNSW, IVF, PQ
- Snowflake / Databricks
- Embedding models: BERT, CLIP, OpenAI
- Kafka / Flink for real-time data pipelines
- REST APIs, GraphQL, or gRPC for integration

Good to Have:
- Knowledge of semantic caching and hybrid retrieval
- Experience with distributed systems and high-performance computing
- Familiarity with RAG (Retrieval-Augmented Generation) workflows

Apply now if you:
- Enjoy solving performance bottlenecks in AI infrastructure
- Love working with cutting-edge ML models and search technologies
- Thrive in collaborative, fast-paced environments
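As an illustration of the ANN indexing item above, a minimal FAISS HNSW sketch over synthetic vectors; in practice the embeddings would come from a model such as BERT or CLIP:

```python
# Sketch: approximate nearest-neighbor search with a FAISS HNSW index.
# Dimension, corpus, and query are synthetic stand-ins for real embeddings.
import faiss  # pip install faiss-cpu
import numpy as np

dim, n_vectors, m_links = 384, 10_000, 32

rng = np.random.default_rng(0)
embeddings = rng.random((n_vectors, dim), dtype=np.float32)

index = faiss.IndexHNSWFlat(dim, m_links)  # M controls graph connectivity
index.hnsw.efSearch = 64                   # recall vs. latency trade-off
index.add(embeddings)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)    # top-5 nearest neighbors
print(ids[0], distances[0])
```

Raising efSearch (and efConstruction at build time) improves recall at the cost of latency, which is exactly the kind of tuning such a role involves.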

Posted 3 months ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Strong experience with Java backend development and with large data processing applications using Flink/Beam. Experience with GCP is a plus; experience with BigQuery or Oracle is needed.

Location: Virtual
Experience: 6-9 years
Skills: Java, Apache Flink/Storm/Beam, and GCP
Note: Looking for immediate to 30-day joiners at most.

Posted 3 months ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

About the Position
This is an opportunity for Engineering Managers to join our Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As the manager of the Data Foundations team in the Data Platform Group, your team will be responsible for designing, building, and deploying the foundational systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake, and we are now looking to adopt GCP. We are seeking an Engineering Manager with a strong technical background and excellent communication skills to join us and partner with senior leadership as a thought leader in our strategic Data & ML projects. Our platform projects have a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the data solutions to these problems.

What you will be doing:
- Recruit and mentor a globally distributed and talented group of diverse employees.
- Collaborate with Product, Design, QA, Documentation, Customer Support, Program Management, TechOps, and other scrum teams.
- Engage in technical design and discussions and help drive technical architecture.
- Ensure the happiness and productivity of the team's software engineers.
- Communicate the vision of our product to external entities.
- Help mitigate risk (technical, product, personnel).
- Utilize professional acumen to improve Okta's technology, product, and engineering.
- Participate in relevant engineering workgroups and on-call rotations.
- Foster, enable, and promote innovation.
- Define team metrics and meet productivity goals of the organization.
- Track and manage cloud infrastructure costs in partnership with Okta's FinOps team.

What you will bring to the role:
- A track record of leading or managing high-performing platform teams (2 years minimum).
- Experience with end-to-end project delivery, from building roadmaps through operational sustainability.
- Strong facilitation skills (design, requirements gathering, progress and status sessions).
- Production experience with distributed systems running in AWS; GCP a bonus.
- Passion for automation and leveraging agile software development methodologies.
- Prior experience with data platforms.
- Prior hands-on IC experience in software development using cloud-based distributed computing technologies, including: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos.
- Experience developing and tuning highly scalable distributed systems.
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management.

And extra credit if you have experience in any of the following:
- Deep Data & ML experience.
- Multi-cloud experience.
- Federal cloud environments / FedRAMP.
- Contributions to the development of distributed systems, or use of one or more at high volume or criticality, such as Kafka or Hadoop.

Posted 3 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Hybrid

About the Team
The Data Platform team is responsible for the foundational data services, systems, and data products at Okta that benefit our users. Today, the Data Platform team solves challenges and enables:
- Streaming analytics
- Interactive end-user reporting
- A data and ML platform for Okta to scale
- Telemetry of our products and data

Our elite team is fast, creative, and flexible. We encourage ownership. We expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company. Okta is about to change the cloud computing landscape forever.

About the Position
This is an opportunity for experienced Software Engineers to join our fast-growing Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As a member of the Data Platform team, you will be responsible for designing, building, and deploying the systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake. We are looking for experienced Software Engineers who can help design and own the building, deployment, and optimization of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
- Design, implement, and own data-intensive, high-performance, scalable platform components.
- Work with engineering teams, architects, and cross-functional partners on the development of projects, design, and implementation.
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning.
- Coach and mentor engineers to help scale up the engineering organization.
- Debug production issues across services and multiple levels of the stack.

Required Knowledge, Skills, and Abilities:
- 5+ years of experience in an object-oriented language, preferably Java.
- Hands-on experience using cloud-based distributed computing technologies, including: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos.
- Experience in developing and tuning highly scalable distributed systems.
- Excellent grasp of software engineering principles.
- Solid understanding of multithreading, garbage collection, and memory management.
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management.

Nice to have:
- Maintained security, encryption, identity management, or authentication infrastructure.
- Leveraged major public cloud providers to build mission-critical, high-volume services.
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, with experience in both batch and online systems.
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop.
- Experience developing Kubernetes-based services on the AWS stack.

Posted 3 months ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Bengaluru

Work from Office

We are looking for experienced Software Engineers who can help design and own the building, deployment, and optimization of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
- Design, implement, and own data-intensive, high-performance, scalable platform components.
- Work with engineering teams, architects, and cross-functional partners on the development of projects, design, and implementation.
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning.
- Coach and mentor engineers to help scale up the engineering organization.
- Debug production issues across services and multiple levels of the stack.

Required Knowledge, Skills, and Abilities:
- 2+ years of software development experience.
- Proficient in at least one backend language and comfortable in more than one, preferably Java or TypeScript, Ruby, GoLang, or Python.
- Experience working with at least one of the database technologies: MySQL, Redis, or PostgreSQL.
- Demonstrable knowledge of computer science fundamentals with strong API design skills.
- Comfortable working on a geographically distributed extended team.
- Brings the right attitude to the team: ownership, accountability, attention to detail, and customer focus.
- Track record of delivering work incrementally to get feedback and iterating over solutions.
- Comfortable in React or similar front-end UI stacks; if not comfortable yet, you are willing to learn.

Nice to have:
- Experience using cloud-based distributed computing technologies such as: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos.
- Maintained security, encryption, identity management, or authentication infrastructure.
- Leveraged major public cloud providers to build mission-critical, high-volume services.
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, with experience in both batch and online systems.
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop.

Posted 3 months ago

Apply

9.0 - 12.0 years

0 - 3 Lacs

Hyderabad

Work from Office

About the Role:
Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Solve problems; analyze and isolate issues.
- Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- As a team member, continuously improve the architecture.

Basic Qualifications:
- 9-12 years of experience designing/building data-intensive solutions using distributed computing.
- Proven experience implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages: Java, Scala, Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years of experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years of experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions.
- Good to have: contributions to GitHub and open-source initiatives, research projects, and/or participation in Kaggle competitions.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
- Search technologies: query and indexing content for Apache Solr, Elasticsearch, etc. (a minimal indexing/query sketch follows this listing).
- Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred languages: Python, Java.
- Database technologies: MS SQL Server platform; stored procedure programming experience using Transact-SQL.
- Ability to lead, train, and mentor.
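For the search-technologies items, a minimal indexing-and-query sketch with the Elasticsearch Python client (8.x-style API); the host, index name, and documents are illustrative assumptions:

```python
# Sketch: index a document and run a match query with the Elasticsearch
# Python client. Host, index name, and document contents are made up.
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(
    index="research-notes",
    id="1",
    document={"title": "Flink windowing", "body": "Tumbling vs sliding windows"},
)
es.indices.refresh(index="research-notes")  # make the doc visible to search

resp = es.search(index="research-notes", query={"match": {"body": "windows"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```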

Posted 3 months ago

Apply

8.0 - 13.0 years

40 - 65 Lacs

Bengaluru

Work from Office

About the team
When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong 'Founder's Mindset' across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member, and we do this with regular 1-1s and open communication. As Engineering Manager, you will be part of a group of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games, or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us.

About the role
We are looking for a seasoned Engineering Manager well-versed in emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks and carry on collaborations effectively. You will also transform newbies into experts and build reports on the progress of all projects.

What you will do
- Design tasks for other engineers, keeping Meesho's guidelines and standards in mind.
- Keep a close watch on various projects and monitor progress.
- Drive excellence in quality across the organisation and solutioning of product problems.
- Collaborate with the sales and design teams to create new products.
- Manage engineers and take ownership of projects while ensuring product scalability.
- Conduct regular meetings to plan and develop reports on the progress of projects.

What you will need
- Bachelor's / Master's in computer science.
- At least 8+ years of professional experience.
- At least 4+ years of experience managing software development teams.
- Experience building large-scale distributed systems.
- Experience with scalable platforms.
- Expertise in Java/Python/Go-Lang and multithreading.
- Good understanding of Spark and its internals.
- Deep understanding of transactional and NoSQL DBs.
- Deep understanding of messaging systems (Kafka).
- Good experience with cloud infrastructure, AWS preferably.
- Ability to drive sprints and OKRs with good stakeholder management experience.
- Exceptional team management skills; experience managing a team of 4-5 junior engineers.
- Good understanding of streaming and real-time pipelines.
- Good understanding of data modelling concepts and data quality tools.
- Good knowledge of Business Intelligence tools (Metabase, Superset, Tableau, etc.).
- Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot, etc.
- Good to have: knowledge of data pipeline building.

Posted 3 months ago

Apply

10.0 - 20.0 years

10 - 20 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Job description
- Extensive experience (about 10-12 years) with technical architecture, configuration options, and customization capabilities development.
- Proven experience with successful end-to-end implementation of IoT-based solutions in industries like manufacturing, retail, and pharma.
- Experience implementing an end-to-end IoT-based smart city solution using the Garnet Framework.
- Proficiency in cloud platforms (AWS, Azure, Google Cloud).
- Sound knowledge of microservices architecture and containerization (Docker, Kubernetes).
- Strong development skills in languages (Python, Java, C#, C++, Node.js), data formats (JSON, XML, and binary formats), data processing (Kafka, Spark, Flink), and network configuration.
- Good to have: proficiency in Big Data with hands-on knowledge of Scala and R for data analysis and manipulation, plus knowledge of NoSQL databases (e.g., MongoDB, Cassandra) and relational databases (e.g., MySQL, PostgreSQL) for data storage and retrieval.

Posted 3 months ago

Apply

5.0 - 10.0 years

0 - 4 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will:
- Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps.
- Manage and lead a team of 8+ junior and senior software developers across 1-3 global locations.
- Be instrumental in implementing processes and procedures to maximize the quality and efficiency of the team.
- Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products.
- Work closely with engineers to review DB designs, queries, and other ETL processes.
- Leverage various technologies, including Java, Flink, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) / NoSQL databases.
- Innovate and incubate new ideas.

QUALIFICATIONS
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- 9+ years of experience in software development, including management experience.
- Experience in developing and designing end-to-end solutions to enterprise standards, including automated testing and SDLC.
- Sound knowledge of DBMS concepts and database architecture; experienced in ETL/data pipeline development.
- Experience in query tuning/optimization.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
- Knowledge of the financial industry is desirable but not essential.

Experience in some of the following is desired and can set you apart from other candidates: UI/UX development; API design, such as to create interconnected services; message buses or real-time processing; relational databases; knowledge of the financial industry and compliance or risk functions; influencing stakeholders.

Posted 3 months ago

Apply

4.0 - 8.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines from 2-12 months, or freelancing.
- Be part of an elite community of professionals who can solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning.
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.

What are the next steps?
Register on the Soul AI website.

Posted 3 months ago

Apply