
11 Hudi Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As an Advisory Consultant at Dell Technologies, you will play a crucial role in delivering consultative business and technical services for complex customer-facing consulting engagements related to digital transformation. Your responsibilities will involve collaborating with Global Pre-Sales, Account Management, and Solutioning Teams to deploy, administer, and configure digital transformation software stacks. As one of the senior technical members of the Digital Transformation Practice, you will earn customer trust through competence, technical acumen, consulting expertise, and partnership. You will guide and oversee other team members, providing technical grooming activities for their skill development. The role requires expert customer-facing skills, leadership qualities, and the ability to communicate technical processes effectively.

Your key responsibilities will include exploring customers' Data and Analytics opportunities, driving digital transformation within customer organizations, architecting unified Data Management strategies, and implementing end-to-end data engineering pipelines (a minimal sketch follows this listing). Additionally, you will collaborate with various stakeholders to support deal closures and contribute to the growth of the practice.

To excel in this role, you should have over 12 years of experience in the IT industry, preferably with a degree in computer science or engineering. You must possess a minimum of 5 years of hands-on experience with big data technologies like Hadoop and Spark, strong programming skills in languages such as Python, Java, or Scala, and proficiency in SQL and query optimization. Experience in developing cloud-based applications, working with different databases, and familiarity with message formats and distributed querying solutions is essential. Desirable qualifications include experience with containerization technologies like Docker and Kubernetes, as well as engaging with pre-sales and sales teams to create solutions for customers seeking digital transformation and AI/Edge solutions.

At Dell Technologies, we believe in the power of each team member to make a significant impact. If you are eager to grow your career with cutting-edge technology and join a diverse and innovative team, we invite you to be part of our journey to build a future that benefits everyone. Application closing date: 1 May 2025.
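For illustration only: a minimal PySpark batch pipeline of the kind this posting describes. The paths, column names, and application name are hypothetical, and a configured Spark environment is assumed.

from pyspark.sql import SparkSession

# hypothetical example: read raw orders, aggregate with SQL, write curated Parquet
spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("/data/raw/orders.csv"))
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

daily.write.mode("overwrite").parquet("/data/curated/daily_orders")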

Posted 2 weeks ago

Apply

8.0 - 10.0 years

20 - 22 Lacs

Gurugram

Work from Office

About the Role
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data platforms that empower data-driven decision-making. This role requires deep technical expertise in modern data engineering frameworks, architectural patterns, and cloud-native solutions on AWS. You will be a key contributor to our data strategy, ensuring data quality, governance, and reliability while mentoring other engineers on the team.

Key Responsibilities
- Design, develop, and own robust, scalable, and maintainable data pipelines (batch and real-time).
- Architect and implement Data Lake, Data Warehouse, and Lakehouse solutions using modern frameworks and architectural patterns.
- Ensure data quality, governance, and integrity across the entire data lifecycle.
- Monitor, troubleshoot, and optimize the performance of data pipelines.
- Contribute to and enforce best practices, design principles, and technical documentation.
- Partner with cross-functional teams to translate business requirements into effective technical solutions.
- Provide mentorship and guidance to junior data engineers, fostering continuous learning and growth.

Qualifications and Skills
- Bachelor's degree in Computer Science, Information Systems, or a related field (Master's degree preferred).
- 8+ years of experience as a Data Engineer, with a proven track record of building large-scale, production-grade pipelines.
- Expertise in AWS Data Services (S3, Glue, Athena, EMR, Kinesis, etc.).
- Strong proficiency in SQL and a deep understanding of file and table formats (Parquet, Delta Lake, Apache Iceberg, Hudi) and CDC patterns (see the sketch after this listing).
- Hands-on experience with stream processing frameworks (Apache Flink, Kafka Streams, or PySpark).
- Proficiency in Apache Airflow or similar workflow orchestration tools.
- Strong knowledge of database systems (relational and NoSQL) and data warehousing concepts.
- Experience with data integration tools and cloud-based data platforms.
- Excellent problem-solving skills and the ability to work independently in fast-paced environments.
- Strong communication and collaboration skills to work effectively with both technical and business stakeholders.
- Passion for emerging technologies and keeping pace with industry best practices.
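The sketch referenced above: since this listing centers on Hudi and CDC patterns, here is a minimal, hedged PySpark example of upserting change records into a Hudi table on S3. The bucket paths and field names are hypothetical; a Spark build with the matching hudi-spark bundle on the classpath is assumed.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_cdc_upsert").getOrCreate()

# hypothetical CDC extract staged as Parquet
changes = spark.read.parquet("s3a://my-bucket/cdc/orders/")

(changes.write.format("hudi")
    .option("hoodie.table.name", "orders")
    .option("hoodie.datasource.write.recordkey.field", "order_id")     # record key
    .option("hoodie.datasource.write.precombine.field", "updated_at")  # latest record wins
    .option("hoodie.datasource.write.operation", "upsert")
    .mode("append")
    .save("s3a://my-bucket/lake/orders"))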

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

India

On-site

When 5% of Indian households shop with us, it's important to build data-backed, resilient systems to manage millions of orders every day. We've done this - with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant it is today. We value speed over perfection, and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication.

Tech Culture
We have a unique tech culture where engineers are seen as problem solvers. The engineering org is divided into multiple pods, and each pod is aligned to a particular business theme. It is a culture driven by logical debates and arguments rather than authority. At Meesho, you get to solve hard technical problems at scale as well as have a significant impact on the lives of millions of entrepreneurs. You are expected to contribute to the solutioning of product problems as well as challenge existing solutions. Meesho's user base has grown 4x in the last year and we have more than 50 million downloads of our app. Here are a few projects we completed last year to scale our systems for this growth:
- We developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing.
- Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go to 30K RPS. Being a consumer app, these systems have SLAs of 10ms.
- Our distributed scheduler tracks more than 50 million shipments periodically from different partners and does async processing involving RDBMS.
- We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka.
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed.
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming (see the sketch after this listing).
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems, including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code.
- Familiarity with modern open-source and cloud-native data stack components, such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; dbt, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow).
- Hands-on data modeling experience and exposure to end-to-end data pipeline development.
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker.
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL.
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture.

About Us
Welcome to Meesho, where every story begins with a spark of inspiration and a dash of entrepreneurial spirit. We're not just a platform - we're your partner in turning dreams into realities. Curious about life at Meesho? Explore our Glassdoor - our people have a lot to say, and they've helped us become a loved workplace in India.

Our Mission
Democratising internet commerce for everyone - Meesho (Meri Shop) started with a single idea in mind: to be an e-commerce destination for Indian consumers and to enable small businesses to succeed online. We provide our sellers with benefits such as zero commission and affordable shipping solutions. Today, sellers nationwide are growing their businesses by tapping into Meesho's large and diverse customer base, state-of-the-art tech infrastructure, and pan-India logistics network through trusted third-party partners. Affordable, relatable merchandise that mirrors local markets has helped us connect with internet users and serve customers across urban, semi-urban, and rural India. Our unique business model and continuous innovation have established us as a part of India's e-commerce ecosystem.

Culture and Total Rewards
Our focus is on cultivating a dynamic workplace characterized by high impact and performance excellence. We prioritize a people-centric culture, dedicated to hiring and developing exceptional talent. Total rewards at Meesho comprise a comprehensive set of elements - monetary, non-monetary, tangible, and intangible. Our 9 guiding principles, or Mantras, are the backbone of how we operate, influencing everything from recognition and evaluation to growth discussions. Daily rituals and processes like Problem First Mindset, Listen or Die, our Internal Mobility Program, Talent Reviews, and Continuous Performance Management embody these principles. We offer competitive compensation - both cash and equity-based - tailored to job roles, individual experience, and skill, along with employee-centric benefits and a supportive work environment.
Our holistic wellness program, MeeCare, includes benefits across physical, mental, financial, and social wellness. This includes extensive medical insurance for employees and their families, wellness initiatives like telehealth, wellness events, and fitness-related perks. To support work-life balance, we offer generous leave policies, parental support, retirement benefits, and learning and development assistance. Through personalized recognition, gratitude for stretched work, and engaging activities, we promote employee delight at the workplace. Additional benefits such as salary advance support, relocation assistance, and flexible benefit plans further enrich the Meesho experience. Know more about Meesho here.
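The sketch referenced above: a minimal Kafka-to-Spark Structured Streaming job in the spirit of the shipment-tracking work this posting describes. The broker address, topic, and schema are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("shipment_events").getOrCreate()

# hypothetical event schema
schema = (StructType()
          .add("shipment_id", StringType())
          .add("status", StringType())
          .add("lat", DoubleType())
          .add("lon", DoubleType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "shipment-events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# console sink for the sketch; a real job would write to a lake table or serving store
query = (events.writeStream
         .format("console")
         .outputMode("append")
         .trigger(processingTime="10 seconds")
         .start())
query.awaitTermination()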

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You should have a Bachelor's degree in Computer Science, Computer Engineering, or a related technical field; a Master's degree or other advanced degree is preferred. With 4-6+ years of total experience, you should possess at least 2+ years of relevant experience in Big Data platforms. Your skill set should include strong analytical, problem-solving, and communication/articulation skills.

Furthermore, you are expected to have 3+ years of experience with big data and the Hadoop ecosystem, including Spark, HDFS, Hive, Sqoop, Hudi, Parquet, Apache NiFi, and Kafka. Proficiency in Scala/Spark is required, and knowledge of Python is considered a plus. Hands-on experience with Oracle and MS-SQL databases is essential. In addition, you should have experience working with job schedulers like CA or AutoSys, as well as familiarity with source code control systems such as Git, Jenkins, and Artifactory. Experience with platforms like Tableau and AtScale will be an advantage in this role.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a highly experienced and strategic Senior Software Developer with deep expertise in full stack development and cloud-native solutions on Google Cloud Platform (GCP) or any ETL and Data Warehousing platform, you will be instrumental in shaping the engineering direction of the organization. You will be responsible for driving architectural decisions, mentoring senior engineers, and ensuring the delivery of scalable, secure, and high-performing solutions across the platform. Your responsibilities will span the entire stack, from crafting frontend experiences to building robust backend APIs and designing cloud infrastructure. You will play a critical role in influencing the technical vision, driving innovation, and aligning engineering efforts with business goals.

Working within the Marketplace Seller Acquisition and Onboarding team, you will be at the forefront of building the core platforms and services that enable Walmart to deliver a vast selection at competitive prices with a superior seller onboarding experience, allowing third-party sellers to list, sell, and manage their products on walmart.com. Your focus will be on managing the entire seller lifecycle, monitoring customer experience, and providing valuable insights to sellers for assortment planning, pricing, and inventory management.

Key responsibilities include leading the design and development of end-to-end ETL applications with high scalability and resilience, architecting complex cloud-native systems utilizing GCP services, setting technical direction, defining best practices, and driving engineering excellence. Additionally, you will guide the adoption of serverless and container-based architectures, champion CI/CD pipelines and Infrastructure as Code (IaC), drive code quality through design reviews and automated testing, and collaborate cross-functionally to translate business requirements into scalable tech solutions.

To excel in this role, you should bring at least 5 years of experience in ETL development; deep proficiency in JavaScript/TypeScript, Python, or Go; strong experience with modern frontend frameworks (React preferred); expertise in designing and operating cloud-native systems; proficiency with microservices architecture, Docker, Kubernetes, and event-driven systems; extensive experience in CI/CD and DevOps practices; familiarity with SQL and NoSQL databases; exceptional communication, leadership, and collaboration skills; experience with serverless platforms; and exposure to large-scale data processing pipelines or ML workflows on GCP (see the sketch after this listing).

Joining Walmart Global Tech offers the opportunity to work in an environment where your contributions can impact the lives of millions of people. The team consists of software engineers, data scientists, cybersecurity experts, and service professionals who are driving the next wave of retail disruption. By fostering a people-led and tech-empowered culture, Walmart Global Tech provides opportunities for career growth and innovation at scale. In addition to a competitive compensation package, benefits include incentive awards, maternity and parental leave, PTO, health benefits, and more. Walmart aims to create a culture where every associate feels valued and respected, fostering a sense of belonging and creating opportunities for all associates, customers, and suppliers. As an Equal Opportunity Employer, Walmart, Inc. values diversity and inclusivity in the workplace. By understanding, respecting, and valuing unique experiences and identities, Walmart creates a welcoming environment where all individuals can thrive and contribute to the success of the organization.
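The sketch referenced above: a small, hedged example of a GCP data-processing step using the google-cloud-bigquery client. Application-default credentials are assumed, and the project, dataset, and table names are hypothetical.

from google.cloud import bigquery

# hypothetical example: rank sellers by catalog size in a marketplace dataset
client = bigquery.Client()

sql = """
    SELECT seller_id, COUNT(*) AS listings
    FROM `my-project.marketplace.catalog_items`
    GROUP BY seller_id
    ORDER BY listings DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.seller_id, row.listings)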

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We're Hiring: Senior Data Engineer (7+ Years Experience)
Location: Gurugram, Haryana, India
Duration: 6 Months C2H (Contract to Hire)
Apply Now: [HIDDEN TEXT]

What We're Looking For:
- 7+ years of experience in data engineering
- Strong expertise in building scalable, robust batch and real-time data pipelines
- Proficiency in AWS Data Services (S3, Glue, Athena, EMR, Kinesis, etc.)
- Advanced SQL skills and deep knowledge of file formats: Parquet, Delta Lake, Iceberg, Hudi
- Hands-on experience with CDC patterns
- Experience with stream processing (Apache Flink, Kafka Streams) and distributed frameworks like PySpark
- Expertise in Apache Airflow for workflow orchestration
- Solid foundation in data warehousing concepts and experience with both relational and NoSQL databases
- Strong communication and problem-solving skills
- Passion for staying up to date with the latest in the data tech landscape

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

What are we looking for?
- Must have experience with at least one cloud platform (AWS, GCP, or Azure); AWS preferred
- Must have experience with lakehouse-based systems such as Iceberg, Hudi, or Delta
- Must have experience with at least one programming language (Python, Scala, or Java) along with SQL
- Must have experience with Big Data technologies such as Spark, Hadoop, Hive, or other distributed systems
- Must have experience with data orchestration tools like Airflow (a minimal example follows this listing)
- Must have experience in building reliable and scalable ETL pipelines
- Good to have experience in data modeling
- Good to have exposure to building AI-led data applications/services

Qualifications and Skills
- 2-6 years of professional experience in a Data Engineering role.
- Knowledge of distributed systems such as Hadoop, Hive, Spark, Kafka, etc.
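The example referenced above: a minimal Airflow 2.x DAG wiring a three-step ETL pipeline. Task bodies and names are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source")

def transform():
    print("clean and model the data")

def load():
    print("write to the lakehouse table")

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the "schedule" arg needs Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load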

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of cloud and big data platforms. Your role will involve representing the NADP SRE team, working in a dynamic environment, and providing technical leadership in defining and executing the team's technical roadmap. Collaborating with cross-functional teams, including software development, product management, customers, and security teams, is essential. Your contributions will directly impact the success of machine learning (ML) and AI initiatives by ensuring a robust and efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big data and ML/AI systems. Collaboration with cross-functional teams will be crucial in creating secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement efforts are key responsibilities. You will lead the architectural vision, shape the team's technical strategy and roadmap, and act as a mentor and technical leader to foster a culture of engineering and operational excellence. Engaging with customers and stakeholders to understand use cases and feedback, translating them into actionable insights, and effectively influencing stakeholders at all levels are essential aspects of the role. Utilizing strong programming skills to integrate software and systems engineering, and building core data platform capabilities and automation to meet enterprise customer needs, is a crucial requirement. Developing strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices is also part of the role.

Qualifications for this position include 8-12 years of relevant experience and a bachelor's degree in computer science or its equivalent. Candidates should have the ability to design and implement scalable solutions with a focus on streamlining operations. Strong hands-on experience in cloud, preferably AWS, is required, along with Infrastructure as Code skills, ideally with Terraform and EKS or Kubernetes. Proficiency in observability tools like Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack is necessary (a minimal metrics example follows this listing), as is the ability to write high-quality code in Python, Go, or equivalent programming languages, along with a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience in building cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial qualifications for this role.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every single qualification. Diverse perspectives and skills are valued, and Cisco believes that diverse teams are better equipped to solve problems, innovate, and create a positive impact.
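The metrics example referenced above: a hedged sketch of instrumenting a Python service with prometheus_client so Prometheus and Grafana can scrape it. Metric and label names are hypothetical.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# hypothetical metric names for a platform service
REQUESTS = Counter("nadp_requests_total", "Requests processed", ["status"])
LATENCY = Histogram("nadp_request_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request():
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()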

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

About Mindstix Software Labs:
Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specialized in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers, which include Fortune 500 enterprises and Silicon Valley startups. Our work impacts a diverse set of industries - eCommerce, Luxury Retail, ISV and SaaS, Consumer Tech, and Hospitality. A fast-moving open culture powered by curiosity and craftsmanship. A team committed to bold thinking and innovation at the very intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities:
Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that add to the bottom line. You appreciate technical work by hand and feel a sense of ownership. You require a keen eye for detail, work experience as a data analyst, and in-depth knowledge of widely used databases and technologies for data analysis. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Being motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Being a natural problem-solver and intellectually curious across a breadth of industries and topics.
- Being acquainted with different aspects of Data Management like Data Strategy, Architecture, Governance, Data Quality, Integrity, and Data Integration.
- Being extremely well-versed in designing incremental and full data load techniques (a sketch of an incremental load follows this listing).

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain with DWH development.
- Must have experience with end-to-end data warehouse implementation on Azure or GCP.
- Must have SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, Data Modelling, Data-Driven Pipelines, Virtual Warehousing, and MPP.
- Expertise in Databricks - Structured Streaming, Lakehouse Architecture, DLT, Data Modeling, Vacuum, Time Travel, Security, Monitoring, Dashboards, DBSQL, and Unit Testing.
- Expertise in Snowflake - Monitoring, RBACs, Virtual Warehousing, Query Performance Tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to work with multicultural international customers.
- Experience in the Retail, Supply Chain, CPG, eCommerce, or Health industries is a plus.

Who Fits Best?
- You are a data enthusiast and problem solver.
- You are a self-motivated and fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics, and have a keen eye for detail.
- You thrive in a customer-centric environment with the ability to actively listen, empathize, and collaborate with globally distributed teams.
- You are a team player who desires to mentor and inspire others to do their best.
- You love expressing ideas and articulating well with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location: This position is primarily based at our Pune (India) headquarters, requiring all potential hires to work from this location. A modern workplace is deeply collaborative by nature, while also demanding a touch of flexibility. We embrace deep collaboration at our offices with reasonable flexi-timing and hybrid options for our seasoned team members.

Equal Opportunity Employer.
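The sketch referenced above: one common way to implement an incremental (high-watermark) load in PySpark, as opposed to a full reload. All database, table, and column names are hypothetical, and a metastore-backed Spark session is assumed.

from pyspark.sql import SparkSession
from pyspark.sql.functions import max as spark_max

spark = (SparkSession.builder
         .appName("orders_incremental_load")
         .enableHiveSupport()
         .getOrCreate())

# read the high-watermark recorded by the previous run (assumes it exists)
last_wm = spark.sql(
    "SELECT MAX(loaded_until) AS wm FROM etl_control.watermarks WHERE table_name = 'orders'"
).collect()[0]["wm"]

# pull only rows changed since the watermark and append them to the warehouse table
delta = spark.read.table("source_db.orders").where(f"updated_at > '{last_wm}'")
delta.write.mode("append").saveAsTable("dwh.orders")

# record the new watermark as an append-only control row
new_wm = delta.agg(spark_max("updated_at")).collect()[0][0]
if new_wm is not None:
    (spark.createDataFrame([("orders", new_wm)], ["table_name", "loaded_until"])
        .write.mode("append").saveAsTable("etl_control.watermarks"))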

Posted 2 months ago

Apply

13.0 - 20.0 years

40 - 45 Lacs

Bengaluru

Work from Office

Principal Architect - Platform & Application Architect

Title: Principal Architect
Location: Onsite, Bangalore
Experience: 15+ years in software and data platform architecture, including 5+ years in architectural leadership roles
Education: Bachelor's/Master's in CS, Engineering, or a related field

Role Overview
We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle - ingestion, processing, storage, and analytics - while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities
1. Architecture & Strategy
- Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components).
- Define and document data blueprints, data domain models, and architectural standards.
- Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.
2. Data Ingestion & Processing
- Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte (see the ingestion sketch after this listing).
- Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster).
- Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.
3. Storage & Modeling
- Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi).
- Develop efficient data strategies for both OLAP and OLTP workloads.
- Guide schema evolution, data versioning, and performance tuning.
4. Governance, Security, and Compliance
- Establish data governance, cataloging, and lineage tracking frameworks.
- Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc.
- Promote standardization and best practices across business units.
5. Platform Engineering & DevOps
- Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines.
- Ensure observability, reliability, and cost efficiency of the platform.
- Define SLAs, capacity planning, and disaster recovery plans.
6. Collaboration & Mentorship
- Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals.
- Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of experience in software engineering, including 5+ years in architectural leadership roles.
- Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js.
- Strong hands-on experience building scalable data platforms in on-premise, hybrid, or cloud environments.
- Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg.
- Familiarity with data mesh, data fabric, and lakehouse paradigms.
- Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles.
- Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms.
- Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels.
- Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.
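The ingestion sketch referenced in section 2: a minimal kafka-python producer publishing an IoT reading into an ingestion topic. The broker address, topic, and payload fields are hypothetical.

import json

from kafka import KafkaProducer

# hypothetical broker and topic for an IoT ingestion pipeline
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send(
    "sensor-readings",
    {"device_id": "d-42", "temp_c": 21.5, "ts": "2024-01-01T00:00:00Z"},
)
producer.flush()  # ensure the message is delivered before exiting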

Posted 3 months ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote

About the Role
The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession, and many such products and systems across Uber. We are building a unified platform for all of Uber's search use cases. The team is building the platform on OpenSearch, and we already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs.

We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable, and secure platform for Uber's core business use cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. An ideal candidate will work closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong technical skills and system architecture/design experience. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers while directly contributing on the technical side too.

---- What the Candidate Will Do ----
- Provide technical leadership; influence and partner with fellow engineers to architect, design, and build infrastructure that can stand the test of scale and availability, while reducing operational overhead.
- Lead, manage, and grow a team of software engineers. Mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices.
- Own the craftsmanship, reliability, and scalability of your solutions.
- Encourage innovation, implementation of groundbreaking technologies, outside-of-the-box thinking, teamwork, and self-organization.
- Hire top-performing engineering talent and maintain our dedication to diversity and inclusion.
- Collaborate with platform, product, and security engineering teams; enable successful use of infrastructure and foundational services; and manage upstream and downstream dependencies.

---- Basic Qualifications ----
- Bachelor's degree (or higher) in Computer Science or a related field.
- 10+ years of software engineering industry experience.
- 8+ years of experience as an IC building large-scale distributed software systems.
- Outstanding technical skills in backend: Uber managers can lead from the front when the situation calls for it.
- 1+ years of frontline management of a diverse set of engineers.

---- Preferred Qualifications ----
- Prior experience with search or big data systems - OpenSearch, Lucene, Pinot, Druid, Spark, Hive, Hudi, Iceberg, Presto, Flink, HDFS, YARN, etc. - preferred (see the query sketch after this listing).

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.

Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
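The query sketch referenced above: a basic OpenSearch _search request over HTTP. The host, index, and fields are hypothetical; any OpenSearch-compatible endpoint would respond in the same shape.

import requests

# hypothetical local cluster and index
resp = requests.get(
    "http://localhost:9200/rides/_search",
    json={"query": {"match": {"pickup_city": "bengaluru"}}, "size": 5},
    timeout=10,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"])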

Posted 3 months ago

Apply