Jobs
Interviews

49 Flink Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team: Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+ million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake, for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the role: Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered yes to these questions, this role is for you!

What you will be doing: You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and at minimal cost is a top priority. You will be making changes to the underlying systems, and if an opportunity arises, you can contribute your work back into open source. You will also be responsible for supporting internal customers and being on call for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.

We are excited if you have: 7+ years of production experience building big data platforms based upon Spark, Trino, or equivalent. Strong programming expertise in Java, Scala, Kotlin, or another JVM language. A robust grasp of distributed systems concepts, algorithms, and data structures. Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc. Experience working with at least 3 of the following technologies/tools: Big Data/Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc. Extensive hands-on experience with a public cloud (AWS or GCP). BS/MS degree in CS or equivalent. AI literacy and an AI growth mindset.

Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented people can do more, at lower cost, than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 3 days ago

Apply

4.0 - 6.0 years

20 - 30 Lacs

Gurugram

Work from Office

Key Skills: Spark, Scala, Flink, Big Data, Structured Streaming, Data Architecture, Data Modeling, NoSQL, AWS, Azure, GCP, JVM tuning, Performance Optimization.

Roles & Responsibilities: Design and build robust data architectures for large-scale data processing. Develop and maintain data models and database designs. Work on stream processing engines like Spark Structured Streaming and Flink (see the sketch below). Perform analytical processing on Big Data using Spark. Administer, configure, monitor, and tune the performance of Spark workloads and distributed JVM-based systems. Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform. Manage and deploy Big Data technologies such as business data lakes and NoSQL databases.

Experience Requirements: Extensive experience working with large data sets and Big Data technologies. 4-6 years of hands-on experience in the Spark/Big Data tech stack. At least 4 years of experience in Scala. At least 2 years of experience in cloud deployment (AWS, Azure, or GCP). At least 2 successfully completed product deployments involving Big Data technologies.

Education: B.Tech/M.Tech (Dual) or B.Tech.
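For illustration, a minimal Spark Structured Streaming job of the kind this role describes might look like the sketch below (shown in Java for consistency with the other examples on this page, though the role emphasizes Scala; the broker address, topic name, and schema are assumptions, not part of the posting):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class ClickCountStream {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("ClickCountStream")
                .getOrCreate();

        // Read a stream of events from a Kafka topic (hypothetical topic/broker)
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "clicks")
                .load();

        // Kafka values arrive as bytes; cast the key to string and count events per key
        Dataset<Row> counts = events
                .selectExpr("CAST(key AS STRING) AS userId")
                .groupBy("userId")
                .count();

        // Write the running aggregation to the console for demonstration
        StreamingQuery query = counts.writeStream()
                .outputMode("complete")
                .format("console")
                .start();

        query.awaitTermination();
    }
}
```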

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Hiring Data Engineer in Bangalore with 6+ years of experience in the skills below.

Must Have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes

Required Candidate Profile:
- Strong communication skills
- Experience with relational SQL/NoSQL databases: Postgres & Cassandra
- Experience with the ELK stack
- Ability to join immediately is a plus
- Must be ready to work from office

Posted 1 week ago

Apply

8.0 - 13.0 years

35 - 45 Lacs

Chennai

Work from Office

Sr Data Engineer - Mandatory Skills:
1. Java
2. Kafka
3. Apache Flink
Looking for immediate joiners - Chennai - Work from Office
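As a sketch of the Java/Kafka/Flink combination this role calls for, a minimal Flink job that consumes a Kafka topic might look like the following (the broker, topic, and group id are hypothetical, and it assumes the flink-connector-kafka dependency):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Declarative Kafka source (Flink 1.14+ connector API)
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // hypothetical broker
                .setTopics("orders")                     // hypothetical topic
                .setGroupId("flink-orders")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> orders =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "orders-source");

        // Trivial transformation: uppercase each record and print it
        orders.map(String::toUpperCase).print();

        env.execute("flink-kafka-job");
    }
}
```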

Posted 1 week ago

Apply

7.0 - 12.0 years

25 - 32 Lacs

Bengaluru

Remote

Role & responsibilities

Job Title: Senior Data Engineer
Company: V2Soft India
Location: [Remote/BLR]
Work Mode: [Remote]
Experience: 7+ Years
Employment Type: Full-Time

About the Role: V2Soft India is looking for a highly skilled and motivated Senior Data Engineer to join our growing team. You will play a critical role in designing, building, and maintaining scalable, secure, and high-performance data platforms to support cutting-edge data products and real-time streaming systems. This is a great opportunity for someone who thrives on solving complex data challenges and wants to contribute to high-impact initiatives.

Key Responsibilities: Design and develop scalable, low-latency data pipelines to ingest, process, and stream massive amounts of structured and unstructured data. Collaborate cross-functionally to clean, curate, and transform data to meet business needs. Integrate privacy and security controls into CI/CD pipelines for all data flows. Embed operational excellence practices, including error handling, monitoring, logging, and alerting. Continuously improve the reliability, scalability, and performance of data systems while ensuring high data quality. Own KPIs related to platform performance, data delivery, and operational efficiency.

Required Skills & Experience: 5+ years of hands-on experience in cloud-native, real-time data systems with a strong emphasis on streaming, scalability, and reliability. Proficiency in real-time data technologies such as Apache Spark, Apache Flink, AWS Kinesis, Kafka, AWS Lambda, EMR/EKS, and Lakehouse platforms like Delta.io/Databricks. Strong expertise in AWS architecture, including infrastructure automation, CI/CD, and security best practices. Solid understanding of SQL, NoSQL, and relational databases, along with SQL tuning. Proficient in Spark-Scala, PySpark, Python, and/or Java. Experience in containerized deployments using Docker, Kubernetes, and Helm. Familiarity with monitoring systems for data loss detection and data quality assurance. Deep knowledge of data structures, algorithms, and data engineering design patterns. Passionate about continuous learning and delivering reliable, high-quality solutions.

Nice to Have: Certifications in AWS or Big Data technologies. Experience with data governance and compliance frameworks. Exposure to ML pipelines or AI data workflows.

Why Join V2Soft? Work with cutting-edge technologies in a fast-paced and collaborative environment. Opportunity to contribute to innovative, high-impact data initiatives. Supportive team culture and career growth opportunities.

How to Apply: Submit your updated resume to [mbalaram@v2soft.com].

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 11 Lacs

Pune

Work from Office

Job Title: Java Developer
Experience Required: 3+ Years
Location: Pune
Employment Type: Full Time, Permanent

Job Summary: We are looking for a passionate and skilled Java Developer with 3 to 5 years of hands-on experience to join our high-performance engineering team. The ideal candidate should have strong fundamentals in Core Java, especially in multithreading, Collections, and performance optimization. This role involves working with modern build tools and writing Java-based applications over Big Data technologies like Apache Spark and Apache Flink.

Key Responsibilities: Design, develop, and maintain high-performance, scalable Java applications. Write clean, maintainable, and efficient Java code. Implement multithreading and concurrent programming in back-end systems (see the sketch after this posting). Use Java APIs and the Collections Framework effectively. Manage builds and dependencies using Maven and/or Gradle. Debug and resolve complex issues using tools like jstack, jcmd, etc. Perform performance tuning and JVM optimization. Collaborate with cross-functional teams for feature delivery. Write unit tests and follow TDD practices. Use Git for version control and code integration.

Desired Skills: Strong proficiency in Core Java and Advanced Java. Solid understanding of multithreading, concurrency, and the Java Memory Model. Expertise in Spring Boot, REST APIs, and microservices architecture. Hands-on with databases (SQL, NoSQL) and caching tools like Redis and Aerospike. Experience with Maven and Gradle for build management. Skilled in debugging with JVM tools (heap/thread dumps, jcmd, jstack). Exposure to JVM internals, garbage collection, and performance profiling. Bonus: experience with Go, Python, or React. Familiarity with cloud platforms like AWS, GCP, or Azure. Working knowledge of Docker, Kubernetes, and CI/CD pipelines. Good understanding of OOP principles, design patterns, and clean-code practices. Comfortable working in Agile/Scrum teams.

Preferred Skills (Value Add): Experience with logging frameworks: Log4j, SLF4J. Knowledge of monitoring tools: Prometheus, Grafana, New Relic. Exposure to Linux environments and basic shell scripting. Hands-on experience with microservices and RESTful APIs.

How to Apply: Interested candidates can share their updated resume to anurag.yadav@softenger.com or via WhatsApp at +91 73855 56898. Please include the following in your message/email: Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period, Current Location, Willingness to Relocate to [e.g., Pune].
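As an illustration of the multithreading fundamentals this posting emphasizes, here is a small sketch using ExecutorService and CompletableFuture to fan out blocking work across a thread pool (all names are hypothetical):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelFetch {
    public static void main(String[] args) {
        // Fixed pool sized for illustration; real sizing depends on the workload
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<String> ids = List.of("a-1", "b-2", "c-3");

        // Fan out one async task per id, then join the results
        List<CompletableFuture<String>> futures = ids.stream()
                .map(id -> CompletableFuture.supplyAsync(() -> lookup(id), pool))
                .toList();

        futures.forEach(f -> System.out.println(f.join()));

        pool.shutdown();
    }

    // Stand-in for a blocking call (e.g., a database or HTTP lookup)
    private static String lookup(String id) {
        return "record for " + id;
    }
}
```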

Posted 2 weeks ago

Apply

2.0 - 7.0 years

3 - 8 Lacs

Pune

Work from Office

Job Title: Java Developer
Experience: 2 to 7 Years
Location: Pune
Employment Type: Full Time

Job Summary: We are looking for a passionate and skilled Java Developer with 2-7 years of experience to join our high-performance engineering team. The ideal candidate should have a strong command of Core Java, particularly multithreading, Collections, performance optimization, and JVM tuning. You will also develop applications that work with Big Data frameworks like Spark and Flink.

Key Responsibilities: Design, develop, and maintain high-performance, scalable Java applications. Write clean, maintainable, and efficient Java code. Build scalable backends using multithreading and concurrency. Use the Collections Framework and Java APIs efficiently. Manage project builds using Maven or Gradle. Debug production issues using jstack, jcmd, and JVM analysis tools. Identify and fix performance bottlenecks. Work closely with cross-functional teams to deliver features. Follow best practices including unit testing and TDD. Use Git for version control, commit, and merge tracking.

Desired Skills: Proficient in Core and Advanced Java. Strong in multithreading, concurrency, and the Java Memory Model. Experience with Spring Boot, REST APIs, and microservices. Hands-on with SQL/NoSQL databases and caching (Redis/Aerospike). Solid grasp of Collections and Java internals. Proficient with Maven/Gradle build tools. Debugging experience using JVM tools (heap/thread dumps, GC tuning). Experience with Big Data tech like Spark/Flink. Familiarity with cloud platforms (AWS/GCP/Azure). Exposure to Docker, Kubernetes, and CI/CD pipelines. Bonus: experience with Go, Python, or React. Strong understanding of OOP, design patterns, and clean-code principles. Agile/Scrum team experience.

Preferred / Value-Add Skills: Logging frameworks: Log4j, SLF4J. Monitoring/observability tools: Prometheus, Grafana, New Relic. REST API design and microservices architecture. Linux environment and shell scripting.

Application Instructions: Interested candidates are requested to share their updated resume, along with the details below, to anurag.yadav@softenger.com or via WhatsApp at +91 73855 56898. Please include: Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period, Current Location, Willingness to Relocate to Pune.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Bengaluru

Hybrid

What We're Looking For: We are seeking a Software Engineer (Backend) with strong Kotlin expertise to join our team and contribute to the design, development, and delivery of scalable, secure, and high-quality software solutions. Based in Bangalore (hybrid, 3 days in office), you will play a key role in developing back-end systems and APIs. A strong focus on agile development, modern technologies, and best practices will be essential to succeed in this role.

Key Responsibilities: Develop and maintain robust, testable, and high-performing software using Test-Driven Development (TDD) in Kotlin. Design, implement, and optimize microservices or serverless architectures. Build and maintain professional APIs adhering to best practices and industry standards. Collaborate with cross-functional teams to drive the detailed design of technical solutions based on business requirements and technology roadmaps. Perform code reviews to ensure maintainability, scalability, and adherence to best practices. Prepare technical documentation, including design proposals, technical specifications, and user guides. Create automated unit tests and integration tests for software components. Design, develop, and optimize event-driven architectures and pub/sub systems (e.g., Kafka, Pub/Sub). Lead rapid prototyping and proof-of-concept development to validate innovative ideas. Take ownership of projects, manage escalations, and drive continuous improvement. Implement DevOps processes, automating development, testing, and production workflows.

Required Skills & Qualifications:
General experience: 5+ years of proven experience as a Software Engineer in backend development, with a strong focus on Kotlin. Proven ability to thrive in an agile, fast-paced, product-focused environment. A can-do mentality, with a passion for continuous learning and process improvement.
Development skills: Strong experience in Kotlin for backend development. Solid understanding of Java (preferred) and reactive programming. Expertise in OOP concepts, clean-code practices, and software engineering principles. Strong understanding of and experience in API design, microservices, and system integration. Experience in developing scalable, secure, and serverless applications. Experience in pub/sub and event-driven development (e.g., Kafka, Azure Event Hubs, MQ). Experience with Flink and real-time data processing frameworks.
Tools & technologies: Hands-on experience with Docker and Kubernetes for containerization and orchestration. Experience with CI/CD tools (e.g., Jenkins, GitHub Actions, Bitbucket Pipelines).
Soft skills: Excellent analytical and problem-solving skills, with an ability to simplify complex solutions. Proficiency in business reporting and technical documentation. Strong teamwork and communication skills, with fluency in English to communicate professionally.

Why Join Aviato? At Aviato, we're redefining how technology is delivered. We work on cutting-edge solutions and are looking for passionate individuals to join our team. If you thrive in a collaborative, fast-paced environment and want to contribute to the success of world-class engineering projects, we encourage you to apply.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Chennai, Bengaluru

Work from Office

Role: Digital Twin Developer
Experience: 3 to 8 years
Work Mode: Hybrid
Work Location: Bangalore, Chennai

Job Description:
1. Simulation & Digital Twin: Omniverse, Unity
2. Programming: Python, C++
3. 3D Modeling & USD Formats: Maya/Blender, USD, FBX, glTF
4. Sensor Simulation: LiDAR, RADAR, Cameras
5. Cloud & Streaming: Kafka, Flink, Cloud Deployment
6. ML Understanding: Ground-truth data, synthetic variations
7. Spatial Mapping: HD maps, localization, coordinate systems

Posted 3 weeks ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Kolkata, West Bengal, India

On-site

Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions. Contributions to open-source data engineering communities.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities: As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Your Impact: Data ingestion, integration, and transformation. Data storage and computation frameworks, performance optimizations. Analytics and visualizations. Infrastructure and cloud computing. Data management platforms. Build functionality for data ingestion from multiple heterogeneous sources in batch and real time. Build functionality for data analytics, search, and aggregation.

Preferred candidate profile: Minimum 2 years of experience in Big Data technologies. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position. Working knowledge of real-time data pipelines is an added advantage. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc. Well-versed, working knowledge of data platform-related services on Azure.

Set Yourself Apart With: Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc. Knowledge of distributed messaging frameworks like ActiveMQ/RabbitMQ/Solace, search and indexing, and microservices architectures. Performance tuning and optimization of data pipelines. Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Posted 1 month ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Pune, Maharashtra, India

On-site

REQUIRED SKILLS & QUALIFICATIONS

TECHNICAL SKILLS:
Cloud & Data Lake: Azure Data Lake (ADLS Gen2), Databricks, Delta Lake, Iceberg
Reporting tools: Power BI, Tableau, or a similar toolset
Streaming & Messaging: Confluent Kafka, Apache Flink, Azure Event Hubs
Big Data Processing: Apache Spark, Databricks, Flink SQL, Delta Live Tables
Programming: Python (PySpark, Pandas), SQL
Storage & Formats: Parquet, Avro, ORC, JSON
Data Modeling: Dimensional modeling, Data Vault, Lakehouse architecture

MINIMUM QUALIFICATIONS: 8+ years of end-to-end design and architecture of enterprise-level data platforms and reporting/analytical solutions. 5+ years of expertise in real-time and batch reporting and analytical solution architecture. 4+ years of experience with Power BI, Tableau, or similar technology solutions. 3+ years of experience with design and architecture of big data solutions. 3+ years of hands-on experience in enterprise-level streaming data solutions with Python, Kafka/Flink, and Iceberg.

ADDITIONAL QUALIFICATIONS: 8+ years of experience with dimensional modeling and data lake design methodologies. 8+ years of experience with relational and non-relational databases (e.g., SQL Server, Cosmos DB). 3+ years of experience with readiness, provisioning, security, and best practices with the Azure data platform and orchestration with Data Factory. Experience working with business stakeholders, requirements, and use-case analysis. Strong communication and collaboration skills with creative problem-solving skills.

PREFERRED QUALIFICATIONS: Bachelor's degree in computer science or equivalent work experience. Experience with Agile/Scrum methodology. Experience in the tax and accounting domain a plus. Azure Data Engineer certification a plus.

Posted 1 month ago

Apply

6.0 - 7.0 years

11 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Location: Remote / Pan India (Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Responsibilities: Develop Kafka producers, consumers, and stream processing applications (see the producer sketch below). Implement Kafka Connect connectors and configure Kafka clusters. Optimize Kafka performance and troubleshoot related issues. Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB. Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications: Bachelor's degree in Computer Science or a related field. Confluent Certified Developer for Apache Kafka certification. Strong programming skills in Java/Python. In-depth Kafka architecture and Confluent platform experience. Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus. Experience with data warehousing and data lake technologies. Experience with CI/CD pipelines and DevOps practices. Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
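As a sketch of the first responsibility above, a minimal Kafka producer in Java might look like this (the broker address, topic name, and payload are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                              // wait for full ISR acknowledgment

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one record; the callback reports the partition/offset or an error
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"qty\":1}"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("written to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        } // close() flushes any pending records
    }
}
```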

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Your Job: Understand the business case and translate it into a holistic solution involving AWS cloud services, PySpark, EMR, Python, data ingestion, and cloud DB (Redshift/Postgres) PL/SQL development for high-volume data sets. Prepare data warehouse design artifacts based on given requirements (ETL framework design, data modeling, source-target mapping) and monitor DB queries for tuning and optimization opportunities. Apply proven experience with large, complex database projects in environments producing high-volume data. Demonstrate problem-solving skills, familiarity with various root-cause-analysis methods, and experience in documenting identified problems and determined resolutions. Make recommendations regarding enhancements and/or improvements. Provide appropriate consulting, interfacing, and standards relating to database management, and monitor transaction activity and utilization. Analyze and tune performance issues. Design and develop data warehouses, including logical and physical schema design.

Other Responsibilities: Perform all activities in a safe and responsible manner and support all Environmental, Health, Safety & Security requirements and programs. Customer/stakeholder focus. Ability to build strong relationships with application teams, cross-functional IT, and global/local IT teams.

Required Qualifications: Bachelor's or master's degree in information technology, electrical engineering, or similar relevant fields. Proven experience (3 years minimum) with ETL development, design, performance tuning, and optimization. Very good knowledge of data warehouse architecture approaches and trends, and high interest in applying and further developing that knowledge, including understanding of dimensional modeling and ERD design approaches. Working experience in Kubernetes and Docker administration is an added advantage. Good experience in AWS services, Big Data, PySpark, EMR, Python, and cloud DB Redshift. Proven experience with large, complex database projects in environments producing high-volume data. Proficiency in SQL and PL/SQL. Experience in preparing data warehouse design artifacts based on given requirements (ETL framework design, data modeling, source-target mapping). Experience in developing streaming applications, e.g., SAP Data Intelligence, Spark Streaming, Flink, Storm, etc. Excellent conceptual abilities paired with very good technical documentation skills, e.g., the ability to understand and document complex data flows as part of business/production processes and infrastructure. Familiarity with SDLC concepts and processes.

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 22 Lacs

Bengaluru

Work from Office

P1: HTML, ReactJS, JavaScript, TypeScript, modular CSS, unit testing, Redux, UI debugging skills, writing efficient, quality code, integrating with BE APIs, Figma or a similar design tool for UI mocks.
P2: Logging and monitoring tools like Quantum Metrics, Splunk, Grafana, etc.; UI performance engineering and security.

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Senior Data Engineer (4-6 Years Experience)
Location: Kotak Life HO
Department: Data Science & Analytics
Employment Type: Full-Time

About the Role: We are seeking a highly skilled Data Engineer with 4-6 years of hands-on experience in designing and developing scalable, reliable, and efficient data solutions. The ideal candidate will have a strong background in cloud platforms (AWS or Azure), experience in building both batch and streaming data pipelines, and familiarity with modern data architectures, including event-driven and medallion architectures.

Key Responsibilities: Design, build, and maintain scalable data pipelines (batch and streaming) to process structured and unstructured data from various sources. Develop and implement solutions based on event-driven architectures using technologies like Kafka, Event Hubs, or Kinesis (see the consumer sketch after this posting). Architect and manage data workflows based on the medallion architecture (Bronze, Silver, Gold layers). Work with cloud platforms (AWS or Azure) to manage data infrastructure and storage, compute, and orchestration services. Leverage cloud-native or open-source tools for data transformation, orchestration, monitoring, and quality checks. Collaborate with data scientists, analysts, and product managers to deliver high-quality data solutions. Ensure best practices in data governance, security, lineage, and observability.

Required Skills & Qualifications: 4-6 years of professional experience in data engineering or related roles. Strong experience in cloud platforms: AWS (e.g., S3, Glue, Lambda, Redshift) or Azure (e.g., Data Lake, Synapse, Data Factory, Functions). Proven expertise in building batch and streaming pipelines using tools like Spark, Flink, Kafka, Kinesis, or similar. Practical knowledge of event-driven architectures and experience with message/event brokers. Hands-on experience implementing medallion architecture or similar layered data architectures. Familiarity with data orchestration tools (e.g., Airflow, Azure Data Factory, AWS Step Functions). Proficiency in SQL, Python, or Scala for data processing and pipeline development. Exposure to open-source tools in the modern data stack (e.g., dbt, Delta Lake, Apache Hudi, Great Expectations).

Preferred Qualifications: Experience with containerization and CI/CD for data workflows (Docker, GitHub Actions, etc.). Knowledge of data quality frameworks and observability tooling. Experience with Delta Lake or Lakehouse implementations. Strong problem-solving skills and ability to work in fast-paced environments.
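For the event-driven side of the role, a minimal Kafka consumer loop in Java might look like the sketch below (the group id, topic, and broker are hypothetical; in a medallion pipeline, records like these would typically land in the Bronze layer first):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("group.id", "bronze-ingest");             // hypothetical consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));          // hypothetical topic
            while (true) {
                // Poll a batch of records; a real pipeline would persist them
                // (e.g., to the Bronze layer) instead of printing
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```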

Posted 1 month ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Cloud-based big data projects (preferably in AWS or Azure); cloud-based data warehouses (Greenplum, Redshift, Azure SQL Data Warehouse, etc.); data analysis products, BI systems, or data mining products; AWS-based DevOps, test, and solution design; cloud-native product testing; Data Lake technologies; data storage formats (Parquet, ORC, Avro); query engines (Athena, Presto, Dremio); data pipelines (Flink, Spark).

Added advantages: Bash, Python, GoLang; smart home deployment and use cases; building pipelines to ingest terabytes of data spanning billions of rows; Spirent TestCenter/Ixia network testing; pre-sales experience; Machine Learning experience is a strong plus.

Posted 1 month ago

Apply

9.0 - 15.0 years

9 - 15 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Compliance Engineering is a global team comprising over 300 engineers and scientists dedicated to solving the most complex, mission-critical problems. We: Build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. Have access to the latest technology and to massive amounts of structured and unstructured data. Leverage modern frameworks to build responsive and intuitive front-end and Big Data applications. The firm is making a significant investment to uplift and rebuild the Compliance application portfolio in 2023. To achieve this, Compliance Engineering is looking to fill several full-stack engineer roles across different teams.

How You Will Fulfill Your Potential: As a member of our team, you will: Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps. Be responsible for managing and leading a team of 8+ junior and senior software developers across 1-3 global locations. Be instrumental in implementing processes and procedures in order to maximize the quality and efficiency of the team. Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products. Work closely with engineers to review the DB design, queries, and other ETL processes. Leverage various technologies, including Java, Flink, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) and NoSQL databases. Be able to innovate and incubate new ideas.

Qualifications: A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. 9+ years of experience in software development, including management experience. Experience in developing and designing end-to-end solutions to enterprise standards, including automated testing and SDLC. Sound knowledge of DBMS concepts and database architecture, and experience in ETL/data pipeline development. Experience in query tuning/optimization. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Knowledge of the financial industry is desirable but not essential.

Desired Experience (Can Set You Apart From Other Candidates): UI/UX development. API design, such as creating interconnected services. Message buses or real-time processing. Relational databases. Knowledge of the financial industry and compliance or risk functions. Influencing stakeholders.

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Compliance Engineering is a global team comprising over 300 engineers and scientists dedicated to solving the most complex, mission-critical problems. We: Build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. Have access to the latest technology and to massive amounts of structured and unstructured data. Leverage modern frameworks to build responsive and intuitive front-end and Big Data applications. The firm is making a significant investment to uplift and rebuild the Compliance application portfolio in 2023. To achieve this, Compliance Engineering is looking to fill several full-stack engineer roles across different teams.

How You Will Fulfill Your Potential: As a member of our team, you will: Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps. Be responsible for managing and leading a team of 8+ junior and senior software developers across 1-3 global locations. Be instrumental in implementing processes and procedures in order to maximize the quality and efficiency of the team. Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products. Work closely with engineers to review the DB design, queries, and other ETL processes. Leverage various technologies, including Java, Flink, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) and NoSQL databases. Be able to innovate and incubate new ideas.

Qualifications: A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. 9+ years of experience in software development, including management experience. Experience in developing and designing end-to-end solutions to enterprise standards, including automated testing and SDLC. Sound knowledge of DBMS concepts and database architecture, and experience in ETL/data pipeline development. Experience in query tuning/optimization. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Knowledge of the financial industry is desirable but not essential.

Desired Experience (Can Set You Apart From Other Candidates): UI/UX development. API design, such as creating interconnected services. Message buses or real-time processing. Relational databases. Knowledge of the financial industry and compliance or risk functions. Influencing stakeholders.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 12 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello candidates, we are hiring!

Job Position: Data Streaming Engineer
Experience: 5+ years
Location: Mumbai, Pune, Chennai, Bangalore
Work mode: Hybrid (3 days WFO)

JOB DESCRIPTION
Data Streaming @ offshore:
• Flink, Python
• Data lake systems (OLAP systems)
• SQL (should be able to write complex SQL queries)
• Orchestration (Apache Airflow is preferred)
• Hadoop (Spark and Hive: optimization of Spark and Hive apps)
• Snowflake (good to have)
• Data quality (good to have)
• File storage (S3 is good to have)

NOTE: Candidates can share their resume at shrutia.talentsketchers@gmail.com

Posted 1 month ago

Apply

13.0 - 20.0 years

40 - 45 Lacs

Bengaluru

Work from Office

Principal Architect - Platform & Application Architect

Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software and data platform architecture and technology strategy, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview: We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities:
1. Architecture & Strategy: Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components). Define and document data blueprints, data domain models, and architectural standards. Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing: Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte. Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster). Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling: Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi). Develop efficient data strategies for both OLAP and OLTP workloads. Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance: Establish data governance, cataloging, and lineage-tracking frameworks. Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc. Promote standardization and best practices across business units.
5. Platform Engineering & DevOps: Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines. Ensure observability, reliability, and cost efficiency of the platform. Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship: Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals. Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 12+ years of experience in software engineering, including 5+ years in architectural leadership roles. Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js. Strong hands-on experience building scalable data platforms in on-premise/hybrid/cloud environments. Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg. Familiarity with data mesh, data fabric, and lakehouse paradigms.
Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles. Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms. Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels. Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Engineer Lead: Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory). Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink. Experience with software support for applications written in Python and SQL. Administration, configuration, and maintenance of Snowflake and dbt. Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub. Debugging issues, root cause analysis, and applying fixes. Management and maintenance of ETL processes (bug fixing and batch job monitoring).

Training & Certification: Apache Kafka Administration; Snowflake Fundamentals/Advanced Training.

Experience: 8 years of experience in a technical role working with AWS. At least 2 years in a leadership or management role.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a software engineer to join the OCI security & compliance platform team. The platform and its algorithms monitor and detect threats, data breaches, and other malicious activities using machine learning and data science technologies. These services help organizations maintain their security and compliance posture. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Spark, Kafka, and Machine Learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Career Level: IC2

Responsibilities: Develop a highly available and scalable platform that aggregates and analyzes streams of events with a small window of durability. Design, deploy, and manage large-scale data systems and services built on OCI. Develop, maintain, and tune threat detection algorithms. Develop test beds and tools to help reduce noise and improve time to detect threats.

Desired Skills and Experience: 1+ years of hands-on large-scale cloud application software development. 1+ years of experience in cloud infrastructure security and risk assessment. 1+ years of hands-on experience with three of the following technologies: Kafka, Redis, AWS, Kubernetes, REST APIs, Linux. 1+ years of experience using and building highly available streaming data solutions like Flink or Spark Streaming. 1+ years of experience building applications on Oracle Cloud Infrastructure. Critical thinking: the ability to track down complex data and engineering issues and analyze data to solve problems. Experience with a development methodology with short release cycles. Excellent problem-solving and communication skills with both technical and non-technical audiences.

Posted 1 month ago

Apply

6.0 - 10.0 years

10 - 17 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Description: We are looking for a skilled Data/Analytics Engineer with hands-on experience in vector databases and search optimization techniques. You will help build scalable, high-performance infrastructure to support AI-powered applications like semantic search, recommendation systems, and RAG pipelines.

Key Responsibilities: Optimize vector search algorithms for performance and scalability. Build pipelines to process high-dimensional embeddings (e.g., BERT, CLIP, OpenAI). Implement ANN indexing techniques like HNSW, IVF, PQ (see the baseline sketch below). Integrate vector search with data platforms and APIs. Collaborate with cross-functional teams (data scientists, engineers, product). Monitor and resolve latency, throughput, and scaling issues.

Must-Have Skills: Python. AWS. Vector databases (e.g., Elasticsearch, FAISS, Pinecone). Vector search/similarity search. ANN search algorithms: HNSW, IVF, PQ. Snowflake/Databricks. Embedding models: BERT, CLIP, OpenAI. Kafka/Flink for real-time data pipelines. REST APIs, GraphQL, or gRPC for integration.

Good to Have: Knowledge of semantic caching and hybrid retrieval. Experience with distributed systems and high-performance computing. Familiarity with RAG (Retrieval-Augmented Generation) workflows.

Apply Now if You: Enjoy solving performance bottlenecks in AI infrastructure. Love working with cutting-edge ML models and search technologies. Thrive in collaborative, fast-paced environments.
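To ground the similarity-search terminology: before reaching for an ANN index like HNSW, the baseline is exact (brute-force) cosine similarity over embeddings, which ANN structures exist to approximate at scale. A toy sketch in Java (the vectors, ids, and query are made up for illustration):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class BruteForceSearch {
    // Cosine similarity between two embedding vectors of equal length
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Tiny made-up "index" of 3-dimensional embeddings
        Map<String, double[]> index = Map.of(
                "doc-a", new double[]{0.9, 0.1, 0.0},
                "doc-b", new double[]{0.1, 0.8, 0.1},
                "doc-c", new double[]{0.7, 0.3, 0.0});

        double[] query = {0.8, 0.2, 0.0};

        // Exact k-NN: score every vector and sort by similarity. This is
        // O(n) per query, which is the cost HNSW/IVF/PQ indexes avoid.
        List<String> ranked = index.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(query, e.getValue())))
                .map(Map.Entry::getKey)
                .toList();

        System.out.println("nearest first: " + ranked);
    }
}
```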

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Strong experience with Java backend development and with large data processing applications using Flink/Beam. Experience with GCP is a plus. Experience with BigQuery or Oracle is needed.

Location: Virtual
Experience: 6-9 Yrs
Skills: Java, Apache Flink/Storm/Beam, and GCP
Note: Looking for candidates who can join immediately or within 30 days at most.

Posted 1 month ago

Apply
Page 1 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
