
205 Apache Flink Jobs - Page 9

JobPe aggregates results for easy access, but you apply directly on the job portal itself.

6.0 - 8.0 years

11 - 16 Lacs

Noida, Uttar Pradesh

Work from Office

About the Role:
This position involves working on complex technical projects, closely with peers, in an innovative and fast-paced environment. We are looking for someone with a strong product design sense who specializes in Hadoop and Spark technologies.

Requirements:
Minimum 6-8 years of experience in Big Data technologies.

Responsibilities:
• Grow our analytics capabilities with faster, more reliable tools that handle petabytes of data every day.
• Brainstorm and create new platforms that help make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
• Diagnose and resolve problems across the entire technical stack.
• Design and develop a real-time events pipeline for data ingestion feeding real-time dashboards (see the sketch after this listing).
• Develop efficient functions to transform raw data sources into powerful, reliable components of our data lake.
• Design and implement new components using emerging technologies in the Hadoop ecosystem, and ensure successful execution of projects.
• Be a brand ambassador for Paytm: Stay Hungry, Stay Humble, Stay Relevant!

Preferred Qualification:
Bachelor's/Master's degree in Computer Science or equivalent.

Skills that will help you succeed in this role:
• Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
• Excellent programming/debugging skills in Python, Java, or Scala.
• Experience with a scripting language such as Python or Bash.
• Good to have: experience with NoSQL databases like HBase and Cassandra.
• Hands-on programming experience with multithreaded applications.
• Good to have: experience with databases, SQL, and messaging queues like Kafka.
• Good to have: experience developing streaming applications, e.g. Spark Streaming, Flink, Storm.
• Good to have: experience with AWS and cloud technologies such as S3, and with caching architectures like Redis.

Why join us:
Because you get an opportunity to make a difference and have a great time doing it. You are challenged and encouraged to do work that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be. To know more about the exciting work we do:
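As context for the data-lake transformation work this listing describes, here is a minimal PySpark sketch that turns raw JSON events into a date-partitioned Parquet dataset. The bucket paths and field names (event_type, event_ts) are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-events-to-datalake").getOrCreate()

# Read raw JSON events; the bucket path is a hypothetical placeholder.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Drop malformed rows and derive a partition column from the event timestamp
# (event_type and event_ts are assumed field names, for illustration only).
cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date(F.col("event_ts")))
)

# Append the cleaned events to the data lake as date-partitioned Parquet.
(cleaned.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/lake/events/"))
```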

Posted Date not available


3.0 - 8.0 years

11 - 16 Lacs

Noida, Bengaluru, Uttar Pradesh

Work from Office

Job Summary:
• Build systems for the collection and transformation of complex data sets for use in production systems.
• Collaborate with engineers on building and maintaining back-end services.
• Implement data schema and data management improvements for scale and performance.
• Provide insights into key performance indicators for the product and customer usage.
• Serve as the team's authority on data infrastructure, privacy controls, and data security.
• Collaborate with stakeholders to understand user requirements.
• Support efforts for continuous improvement, metrics, and test automation.
• Maintain operations of the live service as issues arise, on a rotational on-call basis.
• Verify that the data architecture meets security and compliance requirements and expectations.
• Learn fast and adapt quickly.

Core skills: Java/Scala, SQL.

Minimum Qualifications:
• Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience.
• 3+ years of progressive experience demonstrating strong architecture, programming, and engineering skills.
• Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Python, and Scala.
• Strong SQL skills; able to write complex queries.
• Strong experience with orchestration tools like Airflow.
• Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations.
• Experience with streaming technologies such as Apache Spark, Kafka, and Flink (a minimal sketch follows this listing).
• Back-end experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL.
• Solid hands-on experience with AWS/GCP (4+ years).
• Strong communication and soft skills.
• Knowledge of and/or experience with containerized environments, Kubernetes, and Docker.
• Experience implementing and maintaining highly scalable microservices with REST, Spring Boot, and gRPC.
• Appetite for trying new things and building rapid POCs.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage.
• Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake.
• Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications.
• Ensure data quality and integrity by implementing robust data validation and cleansing processes.
• Optimize data pipelines for performance, scalability, and reliability.
• Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies.
• Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime.
• Implement best practices for data management, security, and compliance.
• Document data engineering processes, workflows, and technical specifications.
• Stay up to date with industry trends and emerging technologies in data engineering and big data.
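To illustrate the kind of streaming ingestion this listing names (Spark and Kafka), here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and prints the decoded payloads. The broker address and topic name are assumptions, and the job needs the spark-sql-kafka connector package on its classpath.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes the session is launched with the Kafka connector, e.g.
# --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>.
spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Read a stream of events from a Kafka topic (broker address and topic are assumptions).
events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "user-events")
               .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
decoded = events.select(F.col("value").cast("string").alias("json_payload"))

# Write to a console sink for demonstration; a real pipeline would target a
# data lake or warehouse table instead.
query = (decoded.writeStream
                .format("console")
                .outputMode("append")
                .start())
query.awaitTermination()
```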

Posted Date not available


4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Lake Implementation Specialist
Experience: 10-12+ Years
Location: Bangalore
Type: Full-time / Contract
Notice Period: Immediate

Job Summary:
We are looking for a highly experienced Data Lake Implementation Specialist to lead and execute scalable data lake projects using technologies such as Apache Hudi, Hive, Python, Spark, Flink, and cloud-native tools on AWS or Azure. The ideal candidate must have deep expertise in designing and optimizing modern data lake architectures, with strong programming skills and data engineering capabilities.

Key Responsibilities:
• Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure).
• Implement streaming and batch data pipelines using Apache Hudi, Apache Hive, and cloud-native services like AWS Glue and Azure Data Lake.
• Architect and optimize ingestion, compaction, partitioning, and indexing strategies in Apache Hudi.
• Develop scalable data transformation and ETL frameworks using Python, Spark, and Flink.
• Work closely with DataOps/DevOps to build CI/CD pipelines and monitoring tools for data lake platforms.
• Ensure data governance, schema evolution handling, lineage tracking, and compliance.
• Collaborate with analytics and BI teams to deliver clean, reliable, and timely datasets.
• Troubleshoot performance bottlenecks in big data processing workloads and pipelines.

Must-Have Skills:
• 4+ years of hands-on experience in Data Lake and Data Warehousing solutions.
• 3+ years of experience with Apache Hudi, including insert/upsert/delete workflows, clustering, and compaction strategies (see the upsert sketch after this listing).
• Strong hands-on experience with AWS Glue, AWS Lake Formation, or Azure Data Lake / Synapse.
• 6+ years of coding experience in Python, especially in data processing.
• 2+ years of working experience with Apache Flink and/or Apache Spark.
• Sound knowledge of Hive, Parquet/ORC formats, and Delta Lake vs. Hudi vs. Iceberg.
• Strong understanding of schema evolution, data versioning, and ACID guarantees in data lakes.

Nice to Have:
• Experience with Apache Iceberg and Delta Lake.
• Familiarity with Kinesis, Kafka, or any streaming platform.
• Exposure to dbt, Airflow, or Dagster.
• Experience with data cataloging, data governance tools, and column-level lineage tracking.

Education & Certifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• Relevant certifications in AWS Big Data, Azure Data Engineering, or Databricks.
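As an illustration of the Hudi upsert workflows this listing calls for, here is a minimal PySpark sketch that upserts a small batch of records into a Hudi table. The table name, column names, and storage path are hypothetical; only the hoodie.* option keys come from Hudi's standard write configuration.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session launched with the Hudi bundle on the classpath,
# e.g. --packages org.apache.hudi:hudi-spark3-bundle_2.12:<version>.
spark = SparkSession.builder.appName("hudi-upsert-sketch").getOrCreate()

# Hypothetical incremental batch of records keyed by record_id.
updates = spark.createDataFrame(
    [("r1", "2024-01-01 10:00:00", "IN", 42.0)],
    ["record_id", "updated_at", "country", "amount"],
)

hudi_options = {
    "hoodie.table.name": "orders",                             # assumed table name
    "hoodie.datasource.write.recordkey.field": "record_id",    # primary key
    "hoodie.datasource.write.precombine.field": "updated_at",  # latest record wins
    "hoodie.datasource.write.partitionpath.field": "country",  # partition column
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert into the Hudi table; the base path is an illustrative placeholder.
(updates.write.format("hudi")
        .options(**hudi_options)
        .mode("append")
        .save("s3://example-bucket/lake/orders/"))
```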

Posted Date not available


2.0 - 5.0 years

27 - 40 Lacs

Gurugram

Work from Office

Xtelify Ltd || Senior Data Engineer

Xtelify Ltd is looking for a Data Engineer to join the Data Platform team and help develop and deploy data pipelines at a scale of ~5B daily events with 500K concurrent users. The platform is built on a cloud-native modern data stack (AWS/GCP), enabling real-time reporting and deep data exploration from first principles.

Experience: 2-5 Years
Job Location: Gurgaon

Responsibilities:
• Create and maintain a robust, scalable, and optimized data pipeline.
• Handle TBs of data daily across Xtelify's music and video streaming platforms.
• Extract and consume data from live systems to analyze and operate in a 99.999% SLA Big Data environment.
• Build and execute data mining and modeling activities using agile development techniques.
• Solve problems creatively and communicate effectively across teams.
• Build infrastructure for optimal data extraction, transformation, and loading from diverse sources.
• Drive internal process improvements: automation, data delivery optimization, infrastructure redesign, etc.
• Manage both structured and unstructured datasets using SQL and NoSQL databases.
• Understand and leverage cloud computation models to enhance delivery and deployment of solutions.
• Drive innovation in areas like analytics pipelines, streaming systems, and machine learning data applications.

Desired Profile:
• B.E / B.Tech / M.E / M.Tech / M.S in Computer Science or Software Engineering from a premier institute.
• 2+ years of hands-on experience with Spark and Scala.
• Strong grasp of data structures and algorithms.
• Proficient with Hadoop, Pig, Hive, Storm, and SQL.
• Programming experience in at least one language: Java, Scala, or Python.
• Solid experience in big data infrastructure, distributed systems, dimensional modeling, query processing and optimization, and relational databases.
• Own end-to-end delivery of data modules/features.

Good to Have:
• Exposure to NoSQL solutions like Cassandra, MongoDB, CouchDB, Postgres, Redis, Elasticsearch.
• Experience with Kafka and Spark Streaming/Flink.
• Familiarity with OLAP engines like Apache Druid or Apache Pinot.
• Enthusiasm to explore and learn modern data architectures and streaming systems.

Posted Date not available


7.0 - 12.0 years

15 - 16 Lacs

Mumbai, Bengaluru

Work from Office

• Java 8 (minimum 6 years)
• Apache Flink in a production environment (mandatory, 3+ years); a minimal sketch follows this listing
• Spring Boot, Microservices, and REST API
• AWS cloud services
• MySQL, PostgreSQL, or Oracle
• Agile/Scrum methodologies
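This posting asks for production Flink experience in Java; for consistency with the other sketches on this page, here is a minimal PyFlink equivalent of a keyed word count, the usual starting point for a Flink DataStream job. The bounded in-memory source is an illustrative stand-in for a real connector such as Kafka.

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Bounded demo source; a production job would read from Kafka or another connector.
words = env.from_collection(["flink", "spring", "aws", "flink"],
                            type_info=Types.STRING())

# Classic keyed word count: pair each word with 1, key by the word, sum the counts.
counts = (
    words.map(lambda w: (w, 1),
              output_type=Types.TUPLE([Types.STRING(), Types.INT()]))
         .key_by(lambda pair: pair[0])
         .reduce(lambda a, b: (a[0], a[1] + b[1]))
)

counts.print()
env.execute("flink-word-count-sketch")
```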

Posted Date not available


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
