
205 Apache Flink Jobs - Page 8

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 5.0 years

10 - 15 Lacs

Bengaluru

Work from Office

The Oracle Cloud Infrastructure (OCI) team offers the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best cloud products to meet the needs of customers tackling some of the world's biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact, designing and building innovative new systems to power our customers' business-critical applications.

The OCI Security Platform & Compliance products team helps customers protect their business-critical cloud infrastructure and data. We build cloud-native security and compliance solutions that give customers visibility into the security posture of their cloud assets and help automate remediation where possible. This role is a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Kafka, Spark, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Desired Skills and Experience:
- 4+ years of hands-on, large-scale cloud application software development
- 1+ years of experience in cloud infrastructure security and risk assessment
- 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, REST APIs, Linux
- 1+ years of experience using and building highly available streaming data solutions such as Flink or Spark Streaming
- 1+ years of experience building applications on OCI, AWS, Azure, or GCP
- Experience with development methodologies with short release cycles
- Excellent problem-solving and communication skills with both technical and non-technical audiences

Optional Skills:
- Working knowledge of SSL, authentication, encryption, audit logging, and access policies
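For context, the kind of highly available streaming solution this role calls for often starts as a small Flink job. The hedged Java sketch below reads a hypothetical security-findings topic from Kafka and filters high-severity events; the topic name, group id, and JSON shape are illustrative assumptions, not part of the posting.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HighSeverityFindings {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical topic carrying JSON-encoded security findings
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("security-findings")
                .setGroupId("compliance-analytics")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "findings")
           // Naive severity check for illustration; a real job would parse the JSON properly
           .filter(json -> json.contains("\"severity\":\"HIGH\""))
           .print();

        env.execute("high-severity-findings");
    }
}
```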

Posted Date not available

Apply

6.0 - 11.0 years

16 - 20 Lacs

Hyderabad

Work from Office

As part of the market-leading ERP Cloud, the Oracle ERP Cloud Integration & Functional Architecture team offers a broad suite of modules and capabilities designed to empower modern finance and deliver customer success with streamlined processes, increased productivity, and improved business decisions. The team is looking for passionate, innovative, high-caliber, team-oriented stars who want to be a major part of a transformative revolution in the development of modern cloud-based business applications. We are seeking highly capable developers, architects, and technical leaders at the top of the industry in terms of skills, capabilities, and proven delivery; people who seek out and implement imaginative, strategic, yet practical solutions, and who calmly take measured and necessary risks while putting customers first.

What You'll Do:
- Work as a Principal Applications Engineer on Oracle next-gen solutions developed and running on Oracle Cloud
- Design and build distributed, scalable, fault-tolerant software systems
- Build cloud services on top of modern Oracle Cloud Infrastructure
- Collaborate with product managers and other stakeholders to understand requirements and deliver user stories/backlog items with the highest levels of quality and consistency across the product
- Work with a geographically dispersed team of engineers, taking complete ownership and accountability to see projects through to completion

Skills and Qualifications:
- 7+ years building and architecting enterprise- and consumer-grade applications
- Prior experience working on distributed systems in the cloud, with full-stack experience
- Build and deliver high-quality cloud services with the capabilities, scalability, and performance needed by enterprise teams
- Take the initiative and be responsible for delivering complex software by working effectively with the team and other stakeholders
- Comfortable communicating technical ideas verbally and in writing (technical proposals, design specs, architecture diagrams, and presentations)
- Ideally proficient in Java, J2EE, SQL, and server-side programming; proficiency in other languages such as JavaScript is preferred
- Experience with cloud computing, system design, and object-oriented design, preferably with production experience in the cloud
- Experience working in the Apache Hadoop community and, more broadly, the Big Data ecosystem communities (e.g., Apache Spark, Kafka, Flink)
- Experience with cloud-native technologies such as AWS, Oracle Cloud, Azure, or Google Cloud
- Experience building microservices and RESTful services, with a deep understanding of building cloud-based services
- Experience building highly available services, with knowledge of common service-oriented design patterns and service-to-service communication protocols
- Knowledge of Docker/Kubernetes is preferred
- Ability to work creatively and analytically, using data-driven decision-making to improve customer experience
- Strong organizational, interpersonal, written, and oral communication skills, with proven success contributing in a collaborative, team-oriented environment
- Self-motivated and self-driven, continuously learning, and capable of working independently
- BS/MS (MS preferred) in Computer Science

You will analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications. As a member of the software engineering division, you will analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools; build and execute unit tests and unit test plans; review integration and regression test plans created by QA; and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. You will be a leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area, plus 7 years of software engineering or related experience.

Posted Date not available

Apply

3.0 - 7.0 years

0 Lacs

Bengaluru

Work from Office

Title: Ops Data Engineer
Location: Bangalore

Key Skills Required: Recent hands-on experience with Flink, Spark SQL, and Kafka. Mandatory: strong expertise in real-time streaming data (candidates with only batch data experience will not be suitable).

We are seeking a skilled Ops Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization.

Key Responsibilities:
- Maintain ETL/ELT pipelines using modern data engineering tools and frameworks
- Provide 24x7 on-call support for data pipeline health, performance, and SLA compliance
- Document data processes, schemas, best practices, and SOPs
- Implement data quality checks, monitoring, and alerting systems to ensure data reliability
- Optimize data pipeline performance and troubleshoot production issues

Education and Experience:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 3+ years of experience in data engineering, software engineering, or a related role
- Proven experience building and maintaining production data pipelines

Required Qualifications:
- Strong proficiency in Spark SQL and hands-on experience with real-time Kafka and Flink
- Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Strong operations management and stakeholder communication skills
- Flexibility to work across time zones and a cross-cultural communication mindset
- Experience working in cross-functional teams
- A continuous-learning mindset and adaptability to new technologies
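As an illustration of the real-time streaming emphasis above, a minimal Spark Structured Streaming job in Java that consumes Kafka and aggregates per-minute counts might look like the hedged sketch below; the broker address, topic, and windowing choices are assumptions, not from the posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.window;

public class OrdersPerMinute {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("orders-per-minute")
                .master("local[*]") // local master for a self-contained sketch
                .getOrCreate();

        // Read a hypothetical 'orders' topic as a streaming DataFrame
        Dataset<Row> orders = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "kafka:9092")
                .option("subscribe", "orders")
                .load()
                .selectExpr("CAST(value AS STRING) AS json", "timestamp");

        // Count events per one-minute window; a real job would parse the JSON payload
        Dataset<Row> counts = orders
                .withWatermark("timestamp", "2 minutes")
                .groupBy(window(col("timestamp"), "1 minute"))
                .count();

        StreamingQuery query = counts.writeStream()
                .outputMode("update")
                .format("console")
                .start();
        query.awaitTermination();
    }
}
```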

Posted Date not available

Apply

5.0 - 9.0 years

15 - 20 Lacs

Pune

Work from Office

Pune, India | Java Full Stack | BCM Industry | 09/05/2025

Project description: In Securities Operations IT, we are looking for individuals who are passionate about what they do and who value excellence, learning, and integrity. Our culture emphasizes teamwork, collaboration, and delivering business value while working at a sustainable pace. Most importantly, we are looking for someone who is technically excellent and can aspire, and inspire others, to our values and culture.

Responsibilities:
- Design, develop, and maintain microservices using Spring Boot and Java
- Implement ReactJS front-end components, ensuring responsiveness and performance
- Build and maintain complex data-driven applications using MSSQL, PostgreSQL, and Redis for caching
- Integrate with Apache Kafka and Apache Flink for event-stream processing and real-time data analytics
- Work with Flowable BPMN, CMMN, and decision tables for business process automation
- Write comprehensive unit and integration tests using JUnit and Testcontainers for backend services
- Implement UI testing with Jest and React Test for front-end components
- Develop and manage cloud-based services using Azure Cloud, including Azure Blob containers
- Automate CI/CD pipelines using GitLab for continuous integration and deployment
- Optimize performance, scalability, and security of applications and services
- Collaborate with cross-functional teams to define system requirements and deliver solutions

Skills

Must have:
- Strong proficiency in ReactJS and modern front-end development practices
- Hands-on experience in unit testing with JUnit and Testcontainers, as well as UI testing using Jest and React Test
- Experience with Azure Cloud services, particularly Azure Blob Storage, and cloud computing best practices
- Strong caching experience with Redis
- Knowledge of CI/CD pipelines and automation using GitLab
- Good understanding of distributed systems and message-oriented middleware

Nice to have:
- Familiarity with the Ververica Platform for stream processing
- Experience with Kubernetes and Docker for containerized application deployment
- Familiarity with other cloud platforms in addition to Azure

Other: Languages: English B2 Upper Intermediate. Seniority: Senior.
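For illustration, the Kafka integration side of this stack is often just a Spring Boot listener. A minimal hedged sketch follows; the topic and group names are hypothetical, and spring-kafka is assumed on the classpath.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class SettlementServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(SettlementServiceApplication.class, args);
    }
}

@Component
class TradeEventListener {
    // Consumes a hypothetical 'trade-events' topic; spring-kafka builds the
    // consumer from spring.kafka.* properties in application.yml
    @KafkaListener(topics = "trade-events", groupId = "settlement-service")
    public void onTradeEvent(String payload) {
        // A real service would deserialize the payload and update settlement state
        System.out.println("Received trade event: " + payload);
    }
}
```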

Posted Date not available

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink
- Lead improvements in the data processing and serving layers, leveraging Databricks Spark, Trino, and Superset
- Maintain a good understanding of open table formats such as Delta and Iceberg
- Scale data quality frameworks to ensure data accuracy and reliability
- Build data lineage tracking solutions for governance, access control, and compliance
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms
- Improve system stability, monitoring, and observability to ensure high availability of the platform
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience building large-scale data platforms
- Expertise in big data architectures using Databricks, Trino, and Debezium
- Strong experience with streaming platforms, including Confluent Kafka
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment
- Hands-on experience implementing data quality checks using Great Expectations
- Deep understanding of data lineage, metadata management, and governance practices
- Strong knowledge of query optimization, cost efficiency, and scaling architectures
- Familiarity with OSS contributions and keeping up with industry trends in data engineering

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges
- Excellent communication and collaboration skills to work effectively with cross-functional teams
- Ability to lead large-scale projects in a fast-paced, dynamic environment
- Passion for continuous learning, open-source collaboration, and building best-in-class data products
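To ground the Debezium-based ingestion mentioned above: Debezium publishes change events whose payload carries before/after row images and an operation code. The hedged Java consumer sketch below assumes the Kafka Connect JSON converter with schemas enabled; the topic name and routing are illustrative assumptions.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CdcConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "lake-ingestion");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical Debezium topic for an 'orders' table
            consumer.subscribe(List.of("mysql.shop.orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> rec : records) {
                    if (rec.value() == null) continue; // tombstone after a delete
                    JsonNode payload = mapper.readTree(rec.value()).path("payload");
                    String op = payload.path("op").asText(); // c=create, u=update, d=delete
                    JsonNode row = "d".equals(op) ? payload.path("before") : payload.path("after");
                    System.out.printf("op=%s row=%s%n", op, row);
                }
            }
        }
    }
}
```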

Posted Date not available

Apply

7.0 - 12.0 years

8 - 18 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office

7+ years' experience, including 3+ years with Kafka (Apache, Confluent, MSK) and RabbitMQ, with strong skills in monitoring, optimization, and incident resolution. Proficient in brokers, connectors, ZooKeeper/KRaft, schema registry, and middleware performance metrics.
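As a sketch of the monitoring work such a role involves, Kafka's Java AdminClient can report topic layout and committed consumer-group offsets. The cluster address and group name below are hypothetical, and allTopicNames() assumes a 3.1+ client.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class ClusterHealthCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Partition layout for every topic in the cluster
            Set<String> topics = admin.listTopics().names().get();
            Map<String, TopicDescription> descs = admin.describeTopics(topics).allTopicNames().get();
            descs.forEach((name, d) ->
                System.out.printf("%s: %d partitions%n", name, d.partitions().size()));

            // Committed offsets for a hypothetical consumer group
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("orders-service")
                     .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) ->
                System.out.printf("%s -> offset %d%n", tp, om.offset()));
        }
    }
}
```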

Posted Date not available

Apply

8.0 - 12.0 years

40 - 60 Lacs

Bengaluru

Hybrid

About the Role: We're seeking an exceptional Senior ML Platform Engineer to join our elite team in building next-generation machine learning infrastructure. You'll be at the forefront of developing scalable ML platforms that power CrowdStrike's threat detection and prevention capabilities, processing data at unprecedented scale.

Role & Responsibilities

Architecture & Development:
- Design and implement enterprise-scale ML infrastructure using Ray, Kubernetes, and cloud-native technologies
- Architect high-performance model serving solutions handling millions of predictions per second
- Build robust, scalable systems for model training, deployment, and monitoring
- Lead technical decisions for critical ML platform components

MLOps & Infrastructure:
- Develop automated ML pipelines using Airflow and MLflow
- Implement sophisticated monitoring and observability solutions
- Optimize resource utilization across distributed computing environments
- Design fault-tolerant, highly available ML systems

Performance Engineering:
- Optimize large-scale distributed systems for maximum throughput
- Implement advanced memory management strategies for ML workloads
- Design and optimize real-time inference systems
- Tune system performance for production-grade ML operations

Preferred Candidate Profile:
- 8+ years of software engineering experience with distributed systems
- 4+ years of hands-on experience building ML platforms
- Deep expertise in Python and modern ML infrastructure tools
- Proven experience with Kubernetes, containerization, and cloud platforms
- Strong background in performance optimization and scalability
- Experience with Ray, JupyterHub, MLflow, or similar ML platforms

Tech Stack:
- Distributed systems: Ray, Kubernetes, Docker, or similar large-scale distributed systems
- ML platforms: MLflow, Kubeflow, JupyterHub
- Infrastructure: AWS/GCP/Azure, Terraform
- Languages: Python, Go
- Observability: Prometheus, Grafana
- CI/CD: GitLab, Jenkins

What Sets You Apart:
- Contributions to open-source ML infrastructure projects
- Experience with real-time, high-throughput inference systems
- Background in cybersecurity or threat detection
- Track record of leading technical initiatives
- Experience with large-scale data processing systems

Posted Date not available

Apply

15.0 - 20.0 years

10 - 14 Lacs

Coimbatore

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: API Management, Microsoft Azure IaaS
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process.

Roles & Responsibilities:
- 7+ years with Apache Kafka/Azure Event Hubs, Kafka Streams, and distributed messaging systems
- Must have lead experience handling projects independently and leading project tasks end to end
- Proficient in designing event-driven microservices and decoupled architectures using Kafka or cloud-native messaging platforms
- Skilled in analyzing functional specifications and deriving technical design and implementation plans
- Proficient in Java, Python, or Scala for developing and integrating event-based solutions
- Expertise in stream processing with Kafka Streams, Flink, or Spark Streaming
- Configure and manage Kafka clusters, topics, partitions, and replication for optimal performance and availability
- Implement authentication, authorization (RBAC), and encryption (SSL/SASL) for secure Kafka communication and data protection
- Hands-on with Avro/Protobuf schemas, topic partitioning, and event-ordering strategies
- Experience integrating Kafka with external systems via Kafka Connect or REST Proxy
- Familiar with deploying and monitoring services on Kubernetes and cloud platforms such as AWS or Azure
- Good understanding of security, fault tolerance, and observability in event-based architectures
- Knowledge of best practices and guidelines
- Working knowledge of integration/API design
- Hands-on with Infrastructure as Code (Terraform, Helm, Ansible)
- Exposure to observability tools (Prometheus, Grafana, ELK)

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Data Engineering
- This position is based at our Bengaluru office
- 15 years of full-time education is required
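To illustrate the Kafka Streams expertise listed above, a minimal topology that counts events per key might look like the following hedged Java sketch; the topic names are hypothetical.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class ClickCounts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-counts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Count clicks per user key from a hypothetical 'clicks' topic
        KStream<String, String> clicks = builder.stream("clicks");
        KTable<String, Long> counts = clicks.groupByKey().count();
        counts.toStream().to("click-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```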

Posted Date not available

Apply

1.0 - 4.0 years

9 - 10 Lacs

Bengaluru

Work from Office

Responsibilities:
- As an integral part of the Data Platform team, take ownership of multiple modules from design to deployment
- Extensively build scalable, high-performance distributed systems that deal with large data volumes
- Provide resolutions and/or workarounds for data pipeline queries and issues as appropriate
- Ensure that the ingestion pipelines that power the data lake and data warehouses are up and running
- Collaborate with different teams to understand and resolve data availability and consistency issues
- Exhibit continuous improvement in problem resolution skills and strive for excellence

What are we looking for?
- Overall 1-3 years of experience in the software industry, with a minimum of 2.5 years on Big Data and related tech stacks, preferably at e-commerce companies
- Strong core Java programming skills; programming skills in Scala are good to have
- Good design and documentation skills
- Experience working with data at scale
- Ability to read and write SQL, and understanding of a relational database such as MySQL, Oracle, Postgres, or SQL Server
- Development experience using Hadoop, Spark, Kafka, MapReduce, Hive, and NoSQL databases like HBase
- Exposure to tech stacks like Flink, Druid, etc.
- Prior exposure to building real-time data pipelines is an added advantage
- Comfortable with Linux, with the ability to write small scripts in Bash/Python and to grapple with log files and Unix processes
- Prior experience working with cloud services, preferably AWS
- Ability to learn complex new things quickly

Posted Date not available

Apply

2.0 - 6.0 years

6 - 10 Lacs

Mumbai

Work from Office

The Role: We are looking for a highly motivated and experienced engineer to join our team in developing the next-generation, AI-agent-enhanced communications platform capable of seamlessly integrating and expanding across channels such as voice calls, mobile applications, texting, email, and social media posts. As a unified communication platform, it enables message delivery to customers and internal staff across several channels, including email, SMS, in-app messaging, and social media. This platform is used by applications covering discovery, sales, orders, ownership, and service across all business sectors, including Vehicle, Energy, Insurance, and more. The platform guarantees the effective delivery of marketing campaigns and interactions between advisors and customers.

Responsibilities:
- Design, develop, and implement scalable applications that involve problem solving
- Must have: leverage technologies like Golang, Apache Kafka, PostgreSQL, OpenSearch
- Experience with integrating with LLMs and inferring responses
- Nice to have: Java, Apache Flink, ClickHouse
- Promote software engineering best practices via code reviews, building tools, and documentation
- Leverage your existing skills while learning and implementing new, open-source technologies as Tesla grows
- Work with product managers, content producers, QA engineers, and release engineers to own your solution from development to production
- Define and develop unit tests and unit test libraries to ensure code development is robust and production-ready
- Drive software process improvements that enable progressively increased team efficiency

Requirements:
- BS or MS in Computer Science or an equivalent discipline
- Expert experience in developing scalable Golang applications, including SQL and NoSQL databases and other open-source technologies
- Design software architecture based on business requirements, strategy, and priorities
- Good unit testing and integration testing practices
- Experience with message queue architecture
- Experience with Docker and Kubernetes
- Agile/Scrum software development process experience

Posted Date not available

Apply

8.0 - 12.0 years

4 - 8 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Roles & Responsibilities: Total 8-10 years of working experience.

Experience/Needs:
- 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems; you will deal with huge volumes of data arriving through batch and streaming platforms
- Build and deliver data pipelines that process, transform, integrate, and enrich data to meet various business demands
- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test, and deploy streaming pipelines for data processing in real time and at scale
- Develop software systems using test-driven development and CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Banking/finance domain knowledge is good to have
- Strong written and oral communication, presentation, and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment

Technical skills:
- 8-10 years of hands-on coding experience; proficient in Java, with good knowledge of its ecosystems
- Experience writing Spark code using Scala
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, Pig, Hue, etc.
- Solid understanding of object-oriented programming and HDFS concepts; familiar with various design and architectural patterns
- Experience with relational SQL and NoSQL databases such as MySQL, PostgreSQL, MongoDB, and Cassandra
- Experience with data pipeline tools like Airflow
- Experience with cloud services: AWS EC2, S3, EMR, RDS, and Redshift (and Google BigQuery)
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/object-functional scripting languages: Python, Java, Scala, etc.
- Expertise in designing and developing platform components such as caching, messaging, event processing, automation, transformation, and tooling frameworks

Location: Pune / Mumbai / Bangalore / Chennai

Posted Date not available

Apply

5.0 - 10.0 years

7 - 15 Lacs

Chennai

Work from Office

About the Role: We're seeking a highly skilled Data Engineer with a strong development background and a passion for transforming data into valuable insights. The ideal candidate will play a key role in designing, building, and maintaining scalable data pipelines and analytics solutions that support critical business decisions.

What you'll be doing:
- Design, build, and optimize robust data pipelines for large-scale data processing
- Write complex SQL queries for data extraction, transformation, and reporting
- Collaborate with analytics and reporting teams to deliver data-driven insights
- Develop scalable solutions using programming languages such as Java, Python, and Node.js
- Integrate APIs and third-party data sources into analytics workflows
- Ensure data quality, integrity, and security across all data platforms
- Work cross-functionally to gather requirements and deliver on key business initiatives

What we expect from you:
- 5-10 years of hands-on experience in data engineering and development
- Proficiency in SQL and experience with relational and non-relational databases
- Development experience with Java, Python, Node.js, or similar languages
- Familiarity with analytics platforms, reporting tools, and data warehousing
- Solid understanding of data modeling, ETL processes, and pipeline architecture
- Excellent communication skills, both written and verbal

Tools/technologies you will need to know:
- Experience with modern data platforms such as Snowflake, ClickHouse, BigQuery, or Redshift
- Exposure to streaming technologies like Kafka, Apache Flink, or Spark Streaming
- Knowledge of workflow orchestration tools like Apache Airflow or Prefect
- Hands-on experience with CI/CD, Docker, or Kubernetes for data deployments
- Familiarity with cloud environments like AWS, Azure, or Google Cloud Platform

Who we are looking for:
- A sharp problem-solver with strong technical instincts
- Someone who thrives in fast-paced environments and full-time development roles
- A clear communicator who can explain complex data concepts across teams
- A team player with a collaborative mindset and a passion for clean, scalable engineering

Posted Date not available

Apply

7.0 - 10.0 years

25 - 40 Lacs

Pune

Work from Office

Key Skills (7 to 10 years of experience):
- Data pipelines & ETL: experience with Apache Flink, Kafka, and Debezium for stream processing and data movement
- Databases & storage: strong expertise in PostgreSQL, including performance optimization and schema design
- Infrastructure & cloud: AWS data services (S3, Lambda, Glue, RDS, Redshift)
- Programming: Python, Java, or Node.js for data processing; SQL proficiency
- Event-driven architecture: experience with Kafka and real-time data processing

Preferred Experience:
- Hands-on experience with streaming data architectures
- Strong understanding of data modeling for OLTP and OLAP
- Experience with data orchestration tools (e.g., Apache Airflow)
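A hedged sketch of the Flink-Kafka-PostgreSQL data movement this stack implies, using the Flink JDBC connector; the topic, table, and connection URL are illustrative assumptions.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
import org.apache.flink.connector.jdbc.JdbcSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventsToPostgres {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("events")                 // hypothetical topic
                .setGroupId("events-to-pg")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "events")
           // Write each raw payload into a hypothetical staging table
           .addSink(JdbcSink.sink(
               "INSERT INTO raw_events (payload) VALUES (?)",
               (ps, value) -> ps.setString(1, value),
               new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                   .withUrl("jdbc:postgresql://db:5432/analytics")
                   .withDriverName("org.postgresql.Driver")
                   .build()));

        env.execute("events-to-postgres");
    }
}
```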

Posted Date not available

Apply

4.0 - 9.0 years

10 - 15 Lacs

Pune

Hybrid

Data Engineer

About VOIS: VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain, and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

VOIS India: In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore, and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations, and more.

Mode: Hybrid. Location: Pune. Experience: 5 to 8 years.

Core competencies, knowledge, and experience:
- 5-7 years' experience managing large data sets, simulation/optimization, and distributed computing tools
- Excellent communication and presentation skills, with a track record of engaging with business project leads

Must-have technical/professional qualifications:
- Experience working with large data sets, simulation/optimization, and distributed computing tools
- Experience transforming data with Apache Spark for data science activities
- Experience working with distributed storage on cloud (AWS/GCP) or HDFS
- Experience building data pipelines with Airflow
- Experience ingesting data from different sources using Kafka/Sqoop/Flume/NiFi
- Experience solving simple to complex big data platform/framework issues
- Experience building real-time analytics systems with Apache Spark, Flink, and Kafka
- Experience in Scala, Python, Java, and R
- Experience working with NoSQL databases (Cassandra, MongoDB, HBase, Redis)

Role purpose: The primary responsibility is to define the data lifecycle, including data models and data sources for the analytics platform, gathering data from the business and cleaning it to provide ready-to-work inputs for data scientists. Apply strong expertise in automating end-to-end data science and big data pipelines (collect, ingest, store, transform, and optimize at scale). The incumbent will work on assigned projects and their stakeholders alongside data scientists to understand the business challenges they face. The work involves large data sets, simulation/optimization, and distributed computing tools. The candidate works with the assigned business stakeholder(s) to agree scope, deliverables, process, and expected outcomes for the products and services developed.

Key accountabilities and decision ownership:
- Understand data science problems and design and schedule end-to-end pipelines
- For a given problem, identify the right big data technologies to solve it in an optimized way
- Automate data science pipelines, deploy ML algorithms, and track their performance
- Build a customer 360 and a feature store for different machine learning problems
- Build data models for a machine learning feature store on high-velocity, flexible-schema databases

VOIS Equal Opportunity Employer Commitment: VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!

Posted Date not available

Apply

4.0 - 6.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Strong problem-solving skills with a focus on product development. Domain expertise in Big Data, Data Platforms, and Distributed Systems. Proficiency in Java, Scala, or Python (hands-on experience with Apache Spark is essential). Experience with data ingestion frameworks such as Apache Storm, Flink, or Spark Streaming. Experience with streaming technologies like Kafka, Kinesis, Oplogs, Binlogs, or Debezium. Strong database skills with experience in HDFS, Delta Lake, Iceberg, or Lakehouse architectures.

Posted Date not available

Apply

3.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Location: IN - Bangalore. Application end date: May 22, 2025. Job requisition ID: R140300.

Company Overview: A.P. Moller - Maersk is an integrated container logistics company and member of the A.P. Moller Group, connecting and simplifying trade to help our customers grow and thrive. With a dedicated team of over 95,000 employees operating in 130 countries, we go all the way to enable global trade for a growing world. From the farm to your refrigerator, or the factory to your wardrobe, A.P. Moller - Maersk is developing solutions that meet customer needs from one end of the supply chain to the other.

About the Team: At Maersk, the Global Ocean Manifest team is at the heart of global trade compliance and automation. We build intelligent, high-scale systems that seamlessly integrate customs regulations across 100+ countries, ensuring smooth cross-border movement of cargo by ocean, rail, and other transport modes. Our mission is to digitally transform customs documentation, reducing friction, optimizing workflows, and automating compliance for a complex web of regulatory bodies, ports, and customs authorities. We deal with real-time data ingestion, document generation, regulatory rule engines, and multi-format data exchange while ensuring resilience and security at scale.

Key Responsibilities:
- Work with large, complex datasets and ensure efficient data processing and transformation
- Collaborate with cross-functional teams to gather and understand data requirements
- Ensure data quality, integrity, and security across all processes
- Implement data validation, lineage, and governance strategies to ensure data accuracy and reliability
- Build, optimize, and maintain ETL pipelines for structured and unstructured data, ensuring high throughput, low latency, and cost efficiency
- Build scalable, distributed data pipelines for processing real-time and historical data
- Contribute to the architecture and design of data systems and solutions
- Write and optimize SQL queries for data extraction, transformation, and loading (ETL)
- Advise Product Owners to identify and manage risks, debt, issues, and opportunities for technical improvement
- Provide continuous improvement suggestions for internal code frameworks, best practices, and guidelines
- Contribute to engineering innovations that fuel Maersk's vision and mission

Required Skills & Qualifications:
- 4+ years of experience in data engineering or a related field
- Strong problem-solving and analytical skills
- Experience with Java and the Spring framework
- Experience building data processing pipelines using Apache Flink and Spark
- Experience in distributed data lake environments (Dremio, Databricks, Google BigQuery, etc.)
- Experience with Apache Kafka and Kafka Streams
- Experience working with databases, PostgreSQL preferred, with solid experience in writing and optimizing SQL queries
- Hands-on experience in cloud environments such as Azure Cloud (preferred), AWS, Google Cloud, etc.
- Experience with data warehousing and ETL processes
- Experience designing and integrating data APIs (REST/GraphQL) for real-time and batch processing
- Knowledge of Great Expectations, Apache Atlas, or DataHub would be a plus
- Knowledge of RBAC, encryption, and GDPR compliance would be a plus

Business skills:
- Excellent communication and collaboration skills
- Ability to translate between technical language and business language, and communicate to different target groups
- Ability to understand complex designs
- Ability to balance competing forces and opinions within the development team

Personal profile:
- Fact-based and result-oriented
- Ability to work independently and guide the team
- Excellent verbal and written communication

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or perform a job, please contact us by email.
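For context on the real-time pipelines described above, event-time windowing with watermarks is the core Flink pattern for this kind of work. A hedged Java sketch counting events per port per minute follows; the port codes and timestamps are toy data, not from the posting.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

import java.time.Duration;

public class ManifestCountsPerPort {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (portCode, eventTimeMillis) pairs standing in for manifest events
        env.fromElements(
                Tuple2.of("INNSA", 1_000L),
                Tuple2.of("INNSA", 30_000L),
                Tuple2.of("DKCPH", 45_000L))
           // Tolerate events arriving up to 10 seconds out of order
           .assignTimestampsAndWatermarks(
               WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                   .withTimestampAssigner((e, ts) -> e.f1))
           // One count per event, keyed by port, in one-minute event-time windows
           .map(e -> Tuple2.of(e.f0, 1L)).returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(e -> e.f0)
           .window(TumblingEventTimeWindows.of(Time.minutes(1)))
           .sum(1)
           .print();

        env.execute("manifest-counts-per-port");
    }
}
```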

Posted Date not available

Apply

9.0 - 12.0 years

32 - 37 Lacs

Gurugram

Hybrid

Primary Responsibilities:
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications:
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS-native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers
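As an illustration of the Iceberg-centric lake concepts listed above, Spark SQL can manage Iceberg tables once a catalog is configured. A hedged Java sketch follows; the catalog name, warehouse path, and table are assumptions, and iceberg-spark-runtime is assumed on the classpath.

```java
import org.apache.spark.sql.SparkSession;

public class IcebergLakeDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-lake-demo")
                .master("local[*]") // local master for a self-contained sketch
                .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.lake.type", "hadoop")
                .config("spark.sql.catalog.lake.warehouse", "s3a://my-bucket/warehouse") // hypothetical
                .getOrCreate();

        // Create and populate a hypothetical Iceberg table
        spark.sql("CREATE TABLE IF NOT EXISTS lake.db.events " +
                  "(id BIGINT, ts TIMESTAMP, payload STRING) USING iceberg");
        spark.sql("INSERT INTO lake.db.events VALUES (1, current_timestamp(), 'hello')");

        // Iceberg metadata tables expose snapshots for time travel and audits
        spark.sql("SELECT snapshot_id, committed_at FROM lake.db.events.snapshots").show();
    }
}
```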

Posted Date not available

Apply

1.0 - 6.0 years

3 - 8 Lacs

Bengaluru

Work from Office

We have developed API gateway aggregators using frameworks like Hystrix and Spring Cloud Gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go up to 30K RPS. Being a consumer app, these systems have SLAs of ~10 ms. Our distributed scheduler periodically tracks more than 50 million shipments from different partners and does async processing involving an RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do:
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases
- Build and optimize data models and schemas to support large-scale operational and analytical workloads
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs

What We're Looking For:
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems
- Strong programming skills in Java, Scala, or Python, and expertise in SQL
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage)
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code
- Familiarity and hands-on work with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; dbt, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems
- Exposure to data security, privacy, observability, and compliance frameworks is a plus

Good to Have:
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow)
- Hands-on data modeling experience and exposure to end-to-end data pipeline development
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture
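To ground the gateway-aggregator pattern described above, a circuit-breaker route in Spring Cloud Gateway (the modern counterpart to the Hystrix setup mentioned) might look like this hedged sketch; the route id, path, and backend URI are hypothetical.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }

    // Requires spring-cloud-starter-gateway and
    // spring-cloud-starter-circuitbreaker-reactor-resilience4j
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("shipments", r -> r.path("/api/shipments/**")
                        .filters(f -> f.circuitBreaker(c -> c
                                .setName("shipmentsCb")
                                // Serve a cached/degraded response when the backend trips
                                .setFallbackUri("forward:/fallback/shipments")))
                        .uri("http://shipment-service:8080")) // hypothetical backend
                .build();
    }
}
```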

Posted Date not available

Apply

6.0 - 11.0 years

15 - 30 Lacs

Kolkata, Chennai, Mumbai (All Areas)

Hybrid

We are seeking a seasoned Back-End Engineer with a strong foundation in Java, along with hands-on experience in data engineering within cloud environments. The ideal candidate will have a deep understanding of building scalable backend systems and working with modern data platforms such as Snowflake, OpenSearch, and AWS Glue. This role combines core backend development with data pipeline engineering to support both operational and analytical needs.

Responsibilities

Backend Engineering:
- Design, build, and maintain scalable RESTful APIs and backend services using Java
- Lead and contribute to the transformation of legacy platforms to microservices and event-driven architectures (EDA)
- Participate in the full SDLC, from architecture discussions to coding, testing, deployment, and monitoring
- Implement CI/CD pipelines and deployment automation in collaboration with DevOps
- Ensure best practices in coding, design, and system architecture
- Write unit, integration, and end-to-end tests for automation coverage
- Mentor junior engineers and promote engineering excellence

Data Engineering:
- Design and implement ETL/ELT pipelines for ingesting and transforming large datasets
- Develop both batch and streaming data pipelines using tools like Apache Spark, Kafka, Flink, etc.
- Build and maintain data APIs and microservices to support analytics and reporting needs
- Work with structured and semi-structured data from various sources and formats
- Leverage cloud-native tools like AWS Glue, Snowflake, and OpenSearch for data storage, transformation, and querying
- Ensure data quality, reliability, and compliance with governance standards

Skills:
- 6+ years of backend development experience using Java
- Proficiency with Micronaut, Spring Boot, or similar frameworks
- Strong hands-on experience with SQL, ETL/ELT processes, and data modeling
- Experience with streaming and batch processing frameworks (e.g., Kafka, Flink, Spark)
- Solid knowledge of AWS services (e.g., AWS Glue, S3, Lambda, Athena, etc.)
- Experience working with Snowflake, OpenSearch, or similar data platforms
- Familiarity with building and consuming data APIs
- Basic to intermediate understanding of database design, performance tuning, and querying
- Experience working in Agile/Scrum teams and using tools like JIRA, Git, and Jenkins
- Excellent problem-solving, communication, and collaboration skills

Good to Have:
- Exposure to CI/CD tools and Infrastructure as Code (IaC)
- Knowledge of data security, privacy, and governance best practices
- Familiarity with DevOps culture and tooling
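As a hedged sketch of the event-driven backend work above, a minimal idempotent Kafka producer in Java; the topic, key, and payload are illustrative assumptions.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence avoids duplicate writes on broker retries
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by order id keeps each order's events in one partition, preserving order
            producer.send(new ProducerRecord<>("order-events", "order-42", "{\"status\":\"CREATED\"}"),
                (metadata, ex) -> {
                    if (ex != null) ex.printStackTrace();
                    else System.out.printf("wrote to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                });
        } // close() flushes any pending sends
    }
}
```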

Posted Date not available

Apply

5.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Work from Office

- Minimum 4+ years of development and design experience in Java/Scala with Flink, Beam (or Spark Streaming), and Kafka
- Extensive coding experience and knowledge of event-driven and streaming architectures
- Experience in JVM tuning for performance
- Knowledge of containerization using Docker and Kubernetes
- Working knowledge of caching systems; particular experience with Redis is preferable
- [Nice to have] Linux OS configuration and use, including shell scripting
- Good hands-on experience with design patterns and their implementation
- Well versed in CI/CD principles (GitHub, Jenkins, etc.), and actively involved in solving and troubleshooting issues in a distributed services ecosystem
- Experience working with SQL and NoSQL databases
- Familiar with distributed services resiliency and monitoring in a production environment
- Experience designing, building, testing, and implementing security systems, including identifying security design gaps in existing and proposed architectures and recommending changes or enhancements
- Responsible for adhering to established policies, following best practices, developing and possessing an in-depth understanding of exploits and vulnerabilities, and resolving issues by taking the appropriate corrective action
- Knowledge of security controls for designing source and data transfers, including CRON, ETLs, and JDBC-ODBC scripts
- Understands the basics of networking, including DNS, proxies, ACLs, policies, and troubleshooting
- High-level knowledge of compliance and regulatory requirements for data, including but not limited to encryption, anonymization, data integrity, and policy control features in large-scale infrastructures
- Understands data sensitivity in terms of logging, events, and in-memory data storage, such as no card numbers or personally identifiable data in logs
- Implements wrapper solutions for new or existing components with no or minimal security controls to ensure compliance with bank standards
- Experience with Agile methodology
- Ensures the quality of technical and application architecture and design of systems across the organization
- Effectively researches and benchmarks technology against other best-in-class technologies
- Experience in banking, financial services, or fintech in an enterprise environment preferred
- Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience
- Self-motivated self-starter with the ability to own and drive things without supervision, working collaboratively with teams across the organization
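For illustration of the Redis caching experience requested above, a common cache-aside pattern with the Jedis client; the host, key scheme, TTL, and upstream call are assumptions.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class QuoteCache {
    private final JedisPool pool = new JedisPool("redis-host", 6379); // hypothetical host

    // Cache-aside: return the cached value if present, otherwise load and cache it
    public String getQuote(String symbol) {
        String key = "quote:" + symbol;
        try (Jedis jedis = pool.getResource()) {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;
            }
            String fresh = loadFromUpstream(symbol);
            jedis.setex(key, 5, fresh); // short TTL keeps market data acceptably fresh
            return fresh;
        }
    }

    // Stand-in for a real upstream market-data call
    private String loadFromUpstream(String symbol) {
        return "{\"symbol\":\"" + symbol + "\",\"px\":101.25}";
    }
}
```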

Posted Date not available

Apply

1.0 - 5.0 years

3 - 7 Lacs

bengaluru

Work from Office

Your Role and Responsibilities
As a Technical Support Professional, you should have experience in a customer-facing leadership capacity. This role requires exceptional customer relationship management skills along with a solid technical grasp of the products you will support. The Technical Support Professional is expected to manage conflicting priorities adeptly, thrive under pressure, and navigate tasks autonomously with minimal active guidance. The successful applicant should possess a comprehensive understanding of IBM support, development, and service processes and deliveries. Knowledge of other IBM business procedures and professional training in mediation or conflict resolution would be advantageous.

Your primary responsibilities include:
- Direct problem-solving experience: Previous experience in addressing client issues is valuable, along with a demonstrated ability to effectively resolve problems.
- Strong communication skills: Ability to communicate clearly with both internal and external clients through spoken and written channels.
- Business networking experience: In-depth experience and understanding of the IBM and/or OEM support organizations, facilitating effective networking and collaboration.
- Excellent coordination, leadership, and organizational skills: Exceptional coordination and organizational abilities, capable of leading diverse teams and multitasking within a team-based business network environment. Proficiency in project management is beneficial.
- Excellence in client service and client satisfaction: Personal commitment to pursuing client satisfaction and continuous improvement in the delivery of client problem resolution.
- Language skills: Proficiency in English is required, with fluency in multiple languages considered advantageous.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Bachelor's Degree
- Experience: 5+ years
- Basic knowledge of operating system administration (Windows, Linux)
- Basic knowledge of database administration (DB2, Oracle, MS SQL)
- English: fluent in speaking and writing
- Analytical thinking and structured problem-solving techniques
- Strong, positive customer service attitude with sensitivity to client satisfaction
- Must be a self-starter and quick learner who enjoys working in a challenging, fast-paced environment
- Strong analytical and troubleshooting skills, including problem recreation, analyzing logs and traces, and debugging complex issues to determine a course of action and recommend solutions

Preferred technical and professional experience:
- Master's Degree in Information Technology
- Knowledge of OpenShift
- Knowledge of Apache Flink and Kafka
- Knowledge of Kibana
- Knowledge of containerization and Kubernetes
- Knowledge of scripting (including Python, JavaScript)
- Knowledge of products in IBM's Digital Business Automation product family
- Basic knowledge of process/data mining
- Basic knowledge of LDAP
- Basic knowledge of AI technologies
- Experience in Technical Support is a plus

Posted Date not available

Apply

4.0 - 8.0 years

7 - 12 Lacs

bengaluru

Work from Office

As a Software Development Manager, you'll manage software development, enhance product experiences, and scale our team's capabilities. You'll manage careers, streamline hiring, collaborate with product, and drive innovation. We seek proactive professionals passionate about team growth, software architecture, coding, and process enhancements. Mastery of frameworks, deployment technologies, and cloud APIs is essential, as is adaptability to innovative technologies.

As a Software Development Manager, your primary responsibility will be to lead a development team, but this is a hands-on role. You will be individually responsible for developing and maintaining several modules of the code, and in parallel you will mentor and support less-experienced developers to enable the team collectively to deliver value-add features and fixes for each release. Together with your team, you will share responsibility for advanced customer support, which requires strong debugging and problem-solving skills. In addition, you will have local management responsibility for other development resources working on different projects. For this position we need someone who can lead by example, who brings proven full-stack software development skills (Java, JavaScript, React, Flink, Kafka, OpenSearch, software design, security, and architecture), and who has a solid knowledge of cloud technologies, networking, and Kubernetes.

Your primary responsibilities include:
- Providing management supervision for several activities running in parallel, addressing issues and concerns with speed, and enabling corrective actions.
- Ensuring end-to-end development activities are properly sized, planned, and tracked to meet delivery expectations, with focus areas such as plan interlock across functional teams and stakeholders (including dependencies, risks, and mitigations).
- Working as a team to own and deliver technical solutions.
- Keeping the focus on innovative solutions that meet desired outcomes while respecting business integrity and meeting client commitments.
- Cultivating a team of high-performing technical talent with the priority on delivering value.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Full-stack development (6+ years): Java, JavaScript, React, Flink, Kafka, OpenSearch, software design, security and architecture, security/SPbD
- Comprehensive understanding and hands-on mastery of containerized solutions, including Docker and Kubernetes, as well as DevOps practices, CI/CD, build, cloud technologies, and networking
- Excellent people management and communication skills
- Source control management (GitHub)
- Project management, planning, and tracking

Preferred technical and professional experience:
- Experience with OpenShift
- Experience in architecting, deploying, and managing applications in cloud or hybrid infrastructures
- Stakeholder management, conflict resolution, and prioritization

Posted Date not available

Apply

2.0 - 7.0 years

4 - 7 Lacs

bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering
Service Line: Information Systems

Responsibilities:
To design and implement real-time data streaming pipelines, applying Apache Kafka expertise to build event-driven architectures for stream processing (a minimal producer sketch follows this listing). Experience with a cloud platform such as Azure is expected.

Preferred Skills:
- Technology-Java-Apache-Kafka
- Technology-Java-Core Java
- Technology-Big Data - Data Processing-Spark-Apache Flink
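To ground the Kafka responsibility above, here is a minimal, hypothetical Java producer publishing a single event to a topic. The broker address, topic, key, and payload are illustrative assumptions; keying by a business identifier keeps related events ordered within a partition, which matters in event-driven designs.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class PaymentEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for full in-sync replica acknowledgement

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by account id (an assumption) so events for one account stay ordered per partition.
            producer.send(new ProducerRecord<>("payments", "account-42", "{\"amount\": 120.50}"));
        }
    }
}
```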

Posted Date not available

Apply

8.0 - 10.0 years

3 - 7 Lacs

bengaluru

Work from Office

Must have:
- Strong in programming languages like Python and Java
- Hands-on experience with one cloud (GCP preferred)
- Experience working with Docker
- Environment management (e.g., venv, pip, poetry)
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML cycle end-to-end
- Data engineering and feature engineering techniques
- Experience with ML modelling and evaluation metrics
- Experience with TensorFlow, PyTorch, or another framework
- Experience with model monitoring
- Advanced SQL knowledge
- Awareness of streaming concepts like windowing, late arrival, and triggers (illustrated in the sketch after this listing)

Good to have:
- Hyperparameter tuning experience
- Proficiency in Apache Spark, Apache Beam, or Apache Flink
- Hands-on experience with distributed computing
- Working experience in data architecture design
- Awareness of storage and compute options and when to choose what
- Good understanding of cluster optimisation / pipeline optimisation strategies
- Exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources)
- Business mindset to understand data and how it will be used for BI and analytics purposes
- Working experience with CI/CD pipelines, deployment methodologies, and Infrastructure as Code (e.g., Terraform)
- Hands-on experience with Kubernetes
- Vector databases like Qdrant
- LLM experience (embeddings generation, embeddings indexing, RAG, agents, etc.)

Key Responsibilities:
- Design, develop, and implement AI models and algorithms using Python and Large Language Models (LLMs).
- Collaborate with data scientists, engineers, and business stakeholders to define project requirements and deliver impactful AI-driven solutions.
- Optimize and manage data pipelines, ensuring efficient data storage and retrieval with PostgreSQL.
- Continuously research emerging AI trends and best practices to enhance model performance and capabilities.
- Deploy, monitor, and maintain AI applications in production environments, adhering to industry best standards.
- Document technical designs, workflows, and processes to facilitate clear knowledge transfer and project continuity.
- Communicate technical concepts effectively to both technical and non-technical team members.

Required Skills and Qualifications:
- Proven expertise in Python programming for AI/ML applications
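As a hedged illustration of the windowing, late-arrival, and trigger concepts listed above, the sketch below uses Flink's DataStream API in Java (keeping one language across the examples on this page). Flink's default event-time trigger fires when the watermark passes the window end and re-fires for records arriving within the allowed lateness; anything later is routed to a side output. All names and values here are assumptions.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

import java.time.Duration;

public class LateArrivalDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Records arriving after the watermark has passed their window (plus lateness) land here.
        OutputTag<Tuple2<String, Long>> lateTag = new OutputTag<Tuple2<String, Long>>("late") {};

        SingleOutputStreamOperator<Tuple2<String, Long>> counts = env
            .fromElements(Tuple2.of("sensor-1", 1_000L),   // (id, event-time millis); stands in for Kafka
                          Tuple2.of("sensor-1", 61_000L),
                          Tuple2.of("sensor-1", 2_000L))   // out-of-order event
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                    .withTimestampAssigner((e, ts) -> e.f1))
            .map(e -> Tuple2.of(e.f0, 1L))                 // turn each event into a count of one
            .returns(Types.TUPLE(Types.STRING, Types.LONG))
            .keyBy(e -> e.f0)                              // partition the stream per sensor
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))  // event-time windowing
            .allowedLateness(Time.seconds(30))             // default trigger re-fires for slightly late data
            .sideOutputLateData(lateTag)                   // anything later goes to the side output
            .sum(1);

        counts.print("on-time");
        counts.getSideOutput(lateTag).print("late");
        env.execute("late-arrival-demo");
    }
}
```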

Posted Date not available

Apply