Jobs
Interviews

27 Druid Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply on the original job portal directly.

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Lowe's
Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE 50 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2023 sales of more than $86 billion, Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Bengaluru, Lowe's India develops innovative technology products and solutions and delivers business capabilities to provide the best omnichannel experience for Lowe's customers. Lowe's India employs over 4,200 associates across technology, analytics, merchandising, supply chain, marketing, finance and accounting, product management, and shared services. Lowe's India actively supports the communities it serves through programs focused on skill-building, sustainability, and safe homes. For more information, visit www.lowes.co.in.

About The Team
Lowe's forecasting platform team is responsible for predicting future trends, outcomes, or events based on current and historical data. The primary goal is to generate AI/ML forecasts that help the business plan for future demand, optimize resources, reduce risk, and make data-driven decisions.

Job Summary
The primary purpose of this role is to develop an artificial intelligence (AI) platform that supports a wide array of machine learning (ML) models, including sophisticated deep learning frameworks and large language models (LLMs). The role focuses on scaling model performance, building essential tools and frameworks, and managing compute and storage resources. It involves close collaboration with cross-functional teams to identify new opportunities for leveraging AI platform capabilities across domains and to accelerate AI-infused product development.

Roles & Responsibilities
- Scales the platform for high performance and integrates new AI capabilities as APIs, so the platform remains adaptable and efficient in hosting a variety of ML models.
- Designs, develops, and implements tools and frameworks that support ML experimentation and deployment.
- Manages GPU and CPU resources to optimize the execution of AI models, balancing performance with cost-effectiveness.
- Works closely with data scientists to integrate AI models smoothly into the platform.
- Creates and manages efficient data movement and pipelines, optimizing data flows to support the demands of high-velocity AI model training and inference.
- Analyzes platform performance metrics and user feedback to drive continuous improvement initiatives, using those insights to guide platform enhancements.
- Collaborates effectively with diverse teams, integrating technical expertise with business insights and user needs.
- Implements security protocols and governance measures for the AI platform, ensuring data integrity and compliance with industry standards and best practices.

Years Of Experience
2-5 years of overall work experience in AI Engineering.

Education Qualification & Certifications
Bachelor's Degree (Science, Technology, Engineering, Math, or related field)

Skill Set Required
- Experience in AI/ML platform engineering, data, and ML operations tools and frameworks.
- Experience working with GPU and CPU infrastructure and optimizing ML models for performance.
- Programming experience in Python or equivalent.
- Experience with Continuous Integration/Continuous Deployment (CI/CD) tools.
- Experience defining technical requirements and performing high-level design for complex solutions.
- Experience with SQL and NoSQL databases, the Hadoop ecosystem, Druid, Trino, BigQuery, and Google Vertex AI.
Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.

Posted 3 days ago

Apply

1.0 - 6.0 years

15 - 25 Lacs

Bengaluru

Work from Office

We have developed API gateway aggregators using frameworks like Hystrix and Spring Cloud Gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go up to 30K RPS. Being a consumer app, these systems have SLAs of ~10ms. Our distributed scheduler periodically tracks more than 50 million shipments from different partners and does async processing involving RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka.
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed.
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming.
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.) with experience customizing or contributing to open-source code.
- Familiarity with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; dbt, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow).
- Hands-on data modeling experience and exposure to end-to-end data pipeline development.
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker.
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL.
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture.
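The circuit-breaking pattern this listing names (Hystrix, Spring Cloud Gateway) can be sketched in a few lines. The following is an illustrative Python toy, not the production implementation the posting describes; the thresholds and names are arbitrary. After repeated downstream failures the breaker "opens" and rejects calls fast, protecting the ~10ms SLA instead of letting requests queue on a dying dependency.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    errors and rejects calls until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures=3, reset_timeout=5.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            # Half-open: the timeout elapsed, allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Production libraries add per-endpoint state, metrics, and fallbacks on top of this core state machine.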

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an exceptionally skilled individual, you will be part of a dedicated team at TNS, collaborating daily to contribute to the success of the organization. If you are driven by excellence in both professional and personal aspects, this is the place for you!

The role entails being a Java and/or Scala developer with expertise in Big Data tools and frameworks. You should have 8 to 12 years of proven experience in Java and/or Scala development. Your responsibilities will include hands-on work with prominent Big Data tools like Hadoop, Spark, MapReduce, Hive, and Impala. Additionally, you should possess a deep understanding of streaming technologies such as Kafka and/or Spark Streaming. Strong familiarity with the design, development, and use of NoSQL databases like HBase, Druid, and Solr is crucial, and experience working with public cloud platforms like AWS and Azure is also expected. To be considered for this position, you should hold a BS/B.E./B.Tech degree in Computer Science or a related field.

Desirable qualifications for this role include proficiency in object-oriented analysis and design patterns using Java/J2EE technologies, and expertise in RESTful web services and data modeling. Familiarity with build and development tools like Maven, Gradle, and Jenkins, as well as experience with test frameworks such as JUnit and Mockito, are advantageous. Knowledge of the Spring Framework, MVC architectures, and ORM frameworks like Hibernate would be a bonus.

If you have a genuine passion for technology, a thirst for personal development, and a desire for growth opportunities, we invite you to discover the exciting world of TNS!
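Druid comes up repeatedly in these listings. For a flavor of what hands-on Druid work involves, here is a minimal native "timeseries" query built as a Python dict; the datasource and field names are hypothetical. Druid accepts such JSON documents POSTed to a broker's /druid/v2/ endpoint.

```python
import json

# Sketch of an Apache Druid native timeseries query. "page_events" and
# the "bytes" column are invented for illustration.
query = {
    "queryType": "timeseries",
    "dataSource": "page_events",
    "granularity": "hour",
    "intervals": ["2024-01-01/2024-01-02"],
    "aggregations": [
        {"type": "count", "name": "rows"},
        {"type": "longSum", "name": "total_bytes", "fieldName": "bytes"},
    ],
}

# The JSON body you would POST to the broker:
print(json.dumps(query, indent=2))
```

The result is one row per hourly bucket in the interval, each carrying the requested aggregates; Druid's rollup and column-oriented segments are what make such scans fast at scale.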

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Delhi, India

On-site

About This Role
As a Principal Software Engineer you will work on complex data pipelines dealing with petabytes of data. The Balbix platform is used as one of the critical security tools by the CIOs, CISOs, and sec-ops teams of small, medium, and large enterprises, including Fortune 10 companies around the world. You will solve problems related to massive cybersecurity and IT data sets, collaborating closely with our data scientists, threat researchers, and network experts to solve real-world problems plaguing cybersecurity. This role requires excellent algorithm, programming, and testing skills as well as experience in large-scale data engineering projects.

You Will
- Design and implement features and own modules for ingesting, storing, and manipulating large data sets for a variety of cybersecurity use cases
- Write code to provide backend support for data-driven UI widgets, web dashboards, workflows, search, and API connectors
- Design and implement web services, REST APIs, and microservices
- Build production-quality solutions that balance complexity and meet the acceptance criteria of functional requirements
- Work with multiple interfacing teams, including ML, UI, backend, and data engineering

You Are
- Driven to experience and learn more about design and architecture, and to take on progressive roles
- Collaborative and comfortable working across teams, including data engineering, front end, product management, and DevOps
- Responsible, and like to take ownership of challenging problems
- An effective communicator, with good documentation practices and the ability to articulate thought processes in a team setting
- Comfortable working in an agile environment
- Curious about technology and the industry, and a constant learner

You Have
- An MS/BS in Computer Science or a related field and 7+ years of experience
- Expert programming experience with Python, Java, or Scala
- Good working knowledge of SQL databases such as Postgres and NoSQL databases such as MongoDB, Cassandra, and Redis
- Experience with a search-engine database such as Elasticsearch (preferred) and with time-series databases such as InfluxDB, Druid, or Prometheus
- Strong computer science fundamentals: data structures, algorithms, and distributed systems

This role represents a unique opportunity to join a hyper-growth company in a key role where you can make a big impact on the trajectory of the company and its products, alongside a great professional journey.

Life @ Balbix
Work life at Balbix is very rewarding! We are developing the world's most advanced platform to address what is perhaps the most important (and hardest) technology problem facing mankind today. Our team is collaborative, fast-moving, and fun-loving, a combination not always seen in cutting-edge B2B startups. Working with clarity of goals in a culture of alignment and bottom-up innovation, Balbix team members see an opportunity for rapid career growth. We encourage experimenting and continuous learning, a can-do attitude, excellence, and ownership. We work hard, take great pride in our work, and have loads of fun along the way! More information at https://www.balbix.com/join-us. Please reach out if you want a seat on our rocket-ship and are passionate about changing the cybersecurity equation.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

Technology is at the core of everything done at Dream11. The technology team plays a crucial role in delivering a mobile-first experience across platforms (Android & iOS), managing over 700 million rpm (requests per minute) at peak with a user concurrency of over 16.5 million. With 190+ microservices written in Java and backed by the Vert.x framework, the team ensures that isolated product features with discrete architectures cater to their respective use cases efficiently. Handling terabytes of data, the infrastructure at Dream11 is built on top of technologies like Kafka, Redshift, Spark, Druid, etc., enabling a variety of use cases including Machine Learning and Predictive Analytics. The tech stack is hosted on AWS, incorporating distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, among others.

As part of the technology team at Dream11, your responsibilities will include working with cross-functional teams to define, design, and launch new features; designing and maintaining high-performance, reusable, and reliable code; analyzing designs for efficient development planning; and identifying and resolving performance bottlenecks. Qualifications for this role include a minimum of 3 years of hands-on experience with JavaScript/TypeScript, proficiency in the React/React Native/Android/iOS ecosystem, and strong problem-solving skills coupled with reasoning ability.

Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform; FanCode, a premier sports content & commerce platform; and DreamSetGo, a sports experiences platform. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to Make Sports Better for fans through the confluence of sports and technology.

For more information about Dream Sports, visit: https://dreamsports.group/

Dream11, the flagship brand of Dream Sports, is the world's largest fantasy sports platform, with 230 million users engaging in fantasy cricket, football, basketball & hockey. Dream11 has partnerships with several national & international sports bodies and cricketers, solidifying its position as a key player in the sports technology industry.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

India

On-site

Technology @Dream11: Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak with a user concurrency of over 16.5 million. At Dream11, we have 190+ microservices written in Java and backed by the Vert.x framework. These work with isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like Machine Learning and Predictive Analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc. We don't just create for the users of today, but are driven to innovate for the sports fans of tomorrow. If you like to build with clean, resilient, and scalable code, this is the place for you. Check out some of our recent developments, all built with the same philosophy in mind.

Your Role:
- Work with stakeholders, provide updates to leadership, and lead strategic engineering initiatives across Technology
- Be part of a cross-functional, self-sustaining team that manages products and systems from design to deployment
- Collaborate effectively with internal and cross-functional teams on a daily basis
- Tackle real business challenges by building and optimising high-scale, distributed microservices
- Own system architecture to ensure scalability, reliability, and performance
- Drive code and design quality through regular reviews and development standards
- Hire, mentor, and grow a high-performing engineering team while overseeing project execution
- Bring strong system design skills with a deep understanding of distributed systems and microservice architecture, backed by analytical and problem-solving abilities
- Stay committed to best-in-class operability standards
- Apply hands-on experience with web frameworks, relational and NoSQL databases, and big data technologies such as Spark, Cassandra, and Ignite

Qualifiers:
- 7+ years of hands-on experience in any typed language, preferably Java
- Leadership experience in hiring people, building teams, and people management
- Experience handling a variety of stakeholders across different verticals

About Dream Sports: Dream Sports is India's leading sports technology company with 280 million+ users, housing brands such as Dream11, the world's largest fantasy sports platform; FanCode, a premier sports content & commerce platform; and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 Sportans. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to Make Sports Better for fans through the confluence of sports and technology.

Dream11 is the world's largest fantasy sports platform, with 260 million+ users playing fantasy cricket, football, kabaddi, basketball, hockey, volleyball, handball, rugby, futsal, American football & baseball on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.

Checked us out yet? Head over to our official blog to get a glimpse into our culture, and how we Make Sports Better, together.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 37 Lacs

Hyderabad

Work from Office

About Zscaler
Serving thousands of enterprise customers around the world, including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world’s largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We're looking for an experienced Senior Staff Engineer to join our ZPA team.

Reporting to the Senior Director, you'll be responsible for:
- Understanding how to build and operate high-scale systems
- Providing product-wide architectural guidance and making impactful changes

What We're Looking For (Minimum Qualifications)
- 10+ years of experience in Java coding in a highly distributed and enterprise-scale environment
- Experience being on call, dealing with cloud incidents, and writing RCAs
- Working knowledge of cloud infrastructure services on AWS/Azure
- Good experience with Kafka, Druid, and Elasticsearch
- Bachelor's or Master's Degree in computer science, or equivalent experience

What Will Make You Stand Out (Preferred Qualifications)
- Experience building full CI/CD systems leveraging Kubernetes and web service frameworks
- Experience building reliable and extensible data tiers for large-scale web services (Postgres and Redis)
- Experience with identity and access management systems like Okta, SAML protocols, and security services
- Knowledge of OAuth

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

Learn more about Zscaler’s Future of Work strategy, hybrid working model, and benefits here. By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.
Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At NetApp, we have a history of helping customers turn challenges into business opportunities. We bring new thinking to age-old problems, like how to use data most effectively in the most efficient possible way. As an engineer with NetApp, you'll have the opportunity to work with modern cloud and container orchestration technologies in a production setting. You'll play an important role in scaling systems sustainably through automation and evolving them by pushing for changes that improve reliability and velocity. NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer. No matter the data type, workload, or environment, we help our customers identify and realize new business possibilities. And it all starts with our people.

As a Software Engineer in NetApp India's R&D division, you'll be responsible for the design, development, and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this actionable intelligence.

Your responsibilities will include designing and building our Big Data Platform with an understanding of scale, performance, and fault tolerance. You will interact with Active IQ engineering teams globally to leverage expertise and contribute to the tech community. Additionally, you will identify the right tools to deliver product features, work on technologies related to NoSQL, SQL, and in-memory databases, and conduct code reviews to ensure code quality and adherence to best practices.

Technical Skills:
- Big Data hands-on development experience is required.
- Demonstrated expertise in Data Engineering and complex data pipeline development.
- Design, develop, implement, and tune distributed data processing pipelines focusing on scalability, low latency, and fault tolerance.
- Awareness of Data Governance and experience with Python, Java, Scala, Kafka, Storm, Druid, Cassandra, or Presto is advantageous.

Education:
- Minimum 5 years of experience required; 5-8 years preferred.
- Bachelor of Science degree in Electrical Engineering or Computer Science, or equivalent experience, required.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process. NetApp offers a healthy work-life balance, comprehensive benefits, professional and personal growth opportunities, and a supportive work environment. If you are passionate about building knowledge and solving big problems, we invite you to join us in this journey.
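The core of the windowed aggregations that streaming engines like Spark Structured Streaming or Flink run continuously can be shown with a small, self-contained sketch. This is a batch toy over an in-memory event list, not the Active IQ pipeline; the event shape (epoch-second timestamp, string key) and the window size are invented for illustration.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    (tumbling) windows and count occurrences per key -- the same
    aggregation a streaming job would emit incrementally."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align the timestamp down to the start of its window.
        window_start = ts - (ts % window_seconds)
        counts[window_start][key] += 1
    return {w: dict(kv) for w, kv in sorted(counts.items())}

events = [(0, "a"), (5, "a"), (12, "b")]
print(tumbling_window_counts(events, 10))  # → {0: {'a': 2}, 10: {'b': 1}}
```

A real engine adds what this toy omits: event-time watermarks for late data, state checkpointing for fault tolerance, and partitioned parallelism across workers.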

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a passionate full-stack developer, you will have the opportunity to join a team that is creating a SaaS platform for Enterprise Cloud. In this role, you will be responsible for designing and implementing next-gen multi-cloud features in a fast-paced, agile environment.

Nutanix Cloud Manager (NCM) Cost Governance, a SaaS offering by Nutanix, aims to provide organizations with visibility into their hybrid multi-cloud spending. The team, led by the Manager of Engineering at NCM - Cost Governance, is committed to making fin-ops easier for end users and offering insights on running an optimized infrastructure. You will work alongside a team that is eager to build a fin-ops platform as part of Nutanix's vision to simplify multi-cloud and hybrid-cloud management. The team values self-initiative, ownership, and enthusiasm in building exceptional products.

Your role will involve being part of the development team building web-scale SaaS products, with a focus on application development using Java and JavaScript. You will translate requirements into design specifications, implement new features, troubleshoot and resolve issues, mentor junior developers and interns, and enhance the performance and scalability of internal components.

To excel in this role, you should bring 3-4 years of software development experience, hands-on expertise in backend development with Java (Spring/Spring Boot) and frontend development with JavaScript (Angular, React, or Vue), and proficiency with version control and DevOps tools. Additionally, knowledge of SQL or NoSQL databases, strong problem-solving skills, and a willingness to learn new technologies are essential. A background in computer science or a related field is preferred.

Desirable skills include hands-on experience with Python or Go, knowledge of web application security, and experience building distributed systems/microservices on public or private clouds. Familiarity with distributed data management concepts and the design/implementation trade-offs involved in building high-performance, fault-tolerant distributed systems is a plus.

This role offers a hybrid work environment, combining the benefits of remote work with in-person collaboration. Most roles will require a minimum of 3 days per week in the office, with specific guidance provided by your manager based on team requirements.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

You will be joining a Bangalore/San Francisco based networking startup focused on enhancing network observability and co-pilot systems to increase network reliability and decrease response time for customers. The founding team has a combined 45 years of experience in the networking industry.

In this role as a Web Backend Engineer (SDE-2), you will be instrumental in the design, development, and maintenance of the back-end systems and APIs that drive our network observability and co-pilot platform. Your responsibilities will include creating scalable, secure, and high-performance web services that meet the demanding needs of enterprise clients.

Your key responsibilities will involve designing and developing robust and scalable back-end APIs with low-latency response times using appropriate technologies. You will also implement enterprise-grade authentication and authorization mechanisms to ensure platform security and integration with enterprise clients, and integrate all APIs with an API Gateway to enforce security policies, manage traffic, monitor performance, and maintain fine-grained control.

Furthermore, you will be responsible for ensuring compliance with third-party audits (SOC 2, ISO 27001), implementing security best practices, and designing back-end systems suitable for deployment via CI/CD pipelines to facilitate smooth updates and feature deployment. Using Application Performance Monitoring (APM), you will analyze performance insights, identify bottlenecks, and implement optimizations proactively. It will also be your duty to design and implement access controls and data protection mechanisms to safeguard customer data and ensure regulatory compliance. Moreover, you will mentor and guide junior engineers, conduct code reviews, and contribute to the growth of the team.

To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science or a related field and have 4 to 7 years of experience building scalable back-end web services. You should have a strong command of at least one major back-end programming language (such as Python, Java, Go, or Rust) and one or more web frameworks. Experience with RESTful or GraphQL APIs, gRPC, enterprise-grade authentication and authorization mechanisms, API Gateways, security protocols, CI/CD tools, monitoring systems, and database systems is essential. Additionally, knowledge of architectural design patterns, domain-driven design, and microservices, along with excellent problem-solving and analytical skills, will be beneficial for this role.
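One concrete way API gateways "manage traffic", as the listing puts it, is token-bucket rate limiting. Below is a minimal Python sketch of the algorithm, not any particular gateway's implementation; the parameter names and the injectable clock are illustrative choices.

```python
import time

class TokenBucket:
    """Token-bucket limiter: permits bursts of up to `capacity` requests
    and refills at `rate` tokens per second."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start full
        self.clock = clock               # injectable for testing
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A gateway would keep one bucket per client or API key, typically in shared storage such as Redis so the limit holds across gateway replicas.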

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You will play a crucial role as a Web Backend Engineer - SDE-2 in our Bangalore/San Francisco-based networking startup. Your main responsibility will be designing, developing, and maintaining the back-end systems and APIs that power our network observability and co-pilot platform. You will need to ensure that the web services you build are scalable, secure, and high-performance to meet the needs of enterprise customers. Your key responsibilities will include designing and implementing robust and scalable back-end APIs with low-latency response times using appropriate technologies. You will also be in charge of implementing enterprise-grade authentication and authorization mechanisms to ensure platform security and seamless adoption by enterprise clients. Additionally, you will need to integrate all APIs with an API Gateway to enforce security policies, manage traffic, monitor performance, and ensure fine-grained control. Another important aspect of your role will be ensuring compliance with third-party audits (SOC2, ISO 27001) and implementing security best practices (OWASP Top 10). You will design and implement a back-end system that can be deployed using CI/CD pipelines to enable seamless updates and deployment of new features with minimal disruption. Using Application Performance Monitoring (APM), you will analyze performance insights, identify bottlenecks, and implement necessary optimizations proactively. Moreover, you will design and implement proper access controls and data protection mechanisms to safeguard customer data and ensure compliance with relevant regulations. As a senior member of the team, you will also mentor and guide junior engineers and conduct code reviews. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science or a related field with 4 to 7 years of experience in building scalable back-end web services. 
You must possess strong proficiency in at least one major back-end programming language (e.g., Python, Java, Go, Rust) and one or more web frameworks. Experience with building and consuming RESTful or GraphQL APIs, gRPC services, and implementing enterprise-grade authentication and authorization mechanisms is required. Hands-on experience with API Gateways, a strong grasp of security protocols, CI/CD tools, and monitoring systems, as well as knowledge of database systems and data modeling are also essential. A solid understanding of architectural design patterns, domain-driven design, and microservices, along with excellent problem-solving and analytical skills, will be beneficial for this role.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box - we're looking for candidates who are particularly strong in a few areas and have some interest and capabilities in others. Design, develop, and maintain microservices that power Kong Konnect, the Service Connectivity Platform. Working closely with Product Management and teams across Engineering, you will develop software that has a direct impact on our customers' business and Kong's success. This opportunity is hybrid (Bangalore based) with 3 days in the office and 2 days work from home. Implement and maintain services that power high-bandwidth logging and tracing services for our cloud platform, such as indexing and searching logs and traces of API requests powered by Kong Gateway and Kuma Service Mesh. Implement efficient solutions at scale using distributed and multi-tenant cloud storage and streaming systems. Implement cloud systems that are resilient to regional and zonal outages. Participate in an on-call rotation to support services in production, ensuring high performance and reliability. Write and maintain automated tests to ensure code integrity and prevent regressions. Mentor other team members. Undertake additional tasks as assigned by the manager. 5+ years working in a team to develop, deliver, and maintain complex software solutions. Experience in log ingestion, indexing, and search at scale. Excellent verbal and written communication skills. Proficiency with OpenSearch/Elasticsearch and other full-text search engines. Experience with streaming platforms such as Kafka, AWS Kinesis, etc. Operational experience in running large-scale, high-performance internet services, including on-call responsibilities. Experience with the JVM and languages such as Java and Scala. Experience with AWS and cloud platforms for SaaS teams. 
Experience designing, prototyping, building, monitoring, and debugging microservices architectures and distributed systems. Understanding of cloud-native systems like Kubernetes, GitOps, and Terraform. Bachelor's or Master's degree in Computer Science. Bonus points if you have experience with columnar stores like Druid/ClickHouse/Pinot, working on new products/startups, contributing to Open Source Software projects, or working on or developing L4/L7 proxies such as Nginx, HAProxy, Envoy, etc. Kong is THE cloud native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). Loved by developers and trusted with enterprises' most critical traffic volumes, Kong helps startups and Fortune 500 companies build with confidence, allowing them to bring solutions to market faster with API and service connectivity that scales easily and securely. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud. Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.
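The log indexing and full-text search work this role describes centers on inverted indexes, the core data structure inside engines like OpenSearch/Elasticsearch. A toy Python sketch of the idea follows; it illustrates the concept only and is not Kong's or any engine's actual implementation (real engines add tokenization, relevance scoring, sharding, and segment merging on top).

```python
# Toy inverted index illustrating the core idea behind full-text log
# search. Terms map to the set of document ids that contain them, so
# queries become set intersections instead of scans over raw logs.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids

    def index(self, doc_id, text):
        # Naive whitespace tokenizer; real engines use analyzers.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """Return doc ids containing every query term (AND semantics)."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

# Hypothetical API access-log lines for illustration.
idx = InvertedIndex()
idx.index(1, "GET /api/users 200")
idx.index(2, "POST /api/users 500 timeout")
idx.index(3, "GET /api/orders 200")
```

For example, `idx.search("/api/users 200")` intersects the postings for both terms and returns only document 1.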

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines. About the team Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+M queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate and access the data in the lake for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time. About the Role Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered Yes to these questions, this role is for you! 
What you will be doing You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority. You will be making changes to the underlying systems, and if an opportunity arises, you can contribute your work back into open source. You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team. We are excited if you have 7+ years of production experience building big data platforms based upon Spark, Trino or equivalent. Strong programming expertise in Java, Scala, Kotlin or another JVM language. A robust grasp of distributed systems concepts, algorithms, and data structures. Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc. Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc. Extensive hands-on experience with public cloud (AWS or GCP). BS/MS degree in CS or equivalent. AI literacy / AI growth mindset. Benefits Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. 
For details specific to your location, please consult with your recruiter. The Roku Culture Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 1 month ago

Apply

4.0 - 8.0 years

5 - 10 Lacs

Bengaluru, Karnataka, India

On-site

Job description As a Software Engineer, you will work closely with cross-functional teams to understand business requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, UI technologies, software engineering best practices, and a proven track record of successfully delivering complex projects. Lead the design and implementation of cloud-based data architectures. Collaborate with data scientists, analysts, and business stakeholders to understand requirements. Stay current with industry trends and emerging technologies in cloud engineering. B.Tech. degree in Computer Science or equivalent field. Hands-on programming experience. Experience with the React frontend framework, deep understanding of React.js and Redux. Proficient in programming languages such as Python, Java, Scala, GoLang, JavaScript. Proficiency in cloud services such as AWS, Azure, or Google Cloud. Expertise in building UI and data integration services. Experience with streaming UI technologies. Experience building data streaming solutions using Apache Spark / Apache Storm / Flink / Flume. Preferred Qualifications Knowledge of data warehouse solutions (Redshift, BigQuery, Snowflake, Druid). Certification in cloud platforms. Knowledge of machine learning and data science concepts. Contributions to the open source community.
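The data streaming experience this listing asks for (Spark/Storm/Flink) revolves around windowed aggregation; here is a plain-Python sketch of a tumbling-window count. This is illustrative only, not any framework's API, and the event timestamps and keys are made up.

```python
# Tumbling-window aggregation sketched in plain Python. Streaming
# engines such as Spark Structured Streaming or Flink perform the same
# grouping continuously and at scale, with watermarking for late data.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed windows and count keys.

    Each event falls into exactly one non-overlapping window, whose
    start is the timestamp rounded down to a multiple of window_secs.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical clickstream events: (timestamp in seconds, event type).
events = [(0, "click"), (3, "view"), (7, "click"), (12, "click")]
result = tumbling_window_counts(events, window_secs=5)
# result == {0: {"click": 1, "view": 1}, 5: {"click": 1}, 10: {"click": 1}}
```

The same bucketing logic underlies sliding and session windows; those just let an event belong to more than one window or derive the window from gaps in activity.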

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-Petabyte DataLake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this “actionable intelligence”. Job Requirements Design and build our Big Data Platform, and understand scale, performance and fault-tolerance • Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community. • Identify the right tools to deliver product features by performing research, POCs and interacting with various open-source forums • Work on technologies related to NoSQL, SQL and in-memory databases • Conduct code reviews to ensure code quality, consistency and best practices adherence. Technical Skills • Big Data hands-on development experience is required. • Demonstrate up-to-date expertise in Data Engineering and complex data pipeline development. • Design, develop, implement and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low-latency, and fault-tolerance in every system built. • Awareness of Data Governance (Data Quality, Metadata Management, Security, etc.) • Experience with one or more of Python/Java/Scala. • Knowledge and experience with Kafka, Storm, Druid, Cassandra or Presto is an added advantage. Education • A minimum of 5 years of experience is required. 5-8 years of experience is preferred. 
• A Bachelor of Science degree in Electrical Engineering or Computer Science, or a Master's degree, or equivalent experience is required.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a senior data engineer, you will be responsible for working on complex data pipelines dealing with petabytes of data. The Balbix platform serves as a critical security tool for CIOs, CISOs, and sec-ops teams of small, medium, and large enterprises globally, including Fortune 10 companies. Your role will involve solving challenges related to massive cybersecurity and IT data sets by collaborating closely with data scientists, threat researchers, and network experts to address real-world cybersecurity issues. To excel in this role, you must possess excellent algorithm, programming, and testing skills gained from experience in large-scale data engineering projects. Your primary responsibilities will include designing and implementing features, along with taking ownership of modules for ingesting, storing, and manipulating large data sets to cater to various cybersecurity use-cases. You will also be tasked with writing code to provide backend support for data-driven UI widgets, web dashboards, workflows, search functionalities, and API connectors. Additionally, designing and implementing web services, REST APIs, and microservices will be part of your routine tasks. Your aim should be to build high-quality solutions that strike a balance between complexity and meeting the functional requirements' acceptance criteria. Collaboration with multiple teams, including ML, UI, backend, and data engineering, will also be essential for success in this role. To thrive in this position, you should be driven to seek new experiences, learn about design and architecture, and be open to taking on progressive roles within the organization. Your ability to collaborate effectively across teams, such as data engineering, front end, product management, and DevOps, will be crucial. Being responsible and willing to take ownership of challenging problems is a key trait expected from you. 
Strong communication skills, encompassing good documentation practices and the ability to articulate thought processes in a team setting, will be essential. Moreover, you should feel comfortable working in an agile environment and exhibit curiosity about technology and the industry, demonstrating a willingness to continuously learn and grow. Qualifications for this role include an MS/BS degree in Computer Science or a related field with a minimum of three years of experience. You should possess expert programming skills in Python, Java, or Scala, along with a good working knowledge of SQL databases like Postgres and NoSQL databases such as MongoDB, Cassandra, and Redis. Experience with search engine databases like ElasticSearch is preferred, as well as familiarity with time-series databases like InfluxDB, Druid, and Prometheus. Strong fundamentals in computer science, including data structures, algorithms, and distributed systems, will be advantageous for fulfilling the requirements of this role.
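The time-series databases named above (InfluxDB, Druid, Prometheus) all roll raw points up into coarser buckets to keep storage and query costs bounded; here is a minimal downsampling sketch in Python. The bucket width and sample data are illustrative assumptions, not the behavior of any specific store.

```python
def downsample(points, bucket_secs):
    """Average (timestamp, value) points into fixed-width buckets.

    This mirrors the rollup step that time-series stores such as Druid
    or Prometheus perform when compacting raw samples: each bucket
    keeps one aggregated value instead of every raw point.
    """
    buckets = {}  # bucket start -> (running total, sample count)
    for ts, value in points:
        start = (ts // bucket_secs) * bucket_secs
        total, n = buckets.get(start, (0.0, 0))
        buckets[start] = (total + value, n + 1)
    return {start: total / n for start, (total, n) in sorted(buckets.items())}

# Hypothetical metric samples: (timestamp in seconds, value).
raw = [(0, 10.0), (20, 30.0), (70, 50.0)]
hourly = downsample(raw, bucket_secs=60)
# hourly == {0: 20.0, 60: 50.0}
```

Real stores keep several aggregates per bucket (min, max, sum, count) so coarser views can still answer range queries accurately.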

Posted 2 months ago

Apply

5.0 - 9.0 years

17 - 27 Lacs

Pune

Hybrid

Job Summary : We are looking for a Senior Software Engineer who will play a key role in building and enhancing the technology that powers our flight booking platform. You'll work on challenges such as dynamic pricing, booking reliability, and third-party airline integrations. We're looking for someone who combines strong software engineering fundamentals with domain knowledge of, or a passion to learn about, the travel and aviation industry. If you thrive in a fast-paced environment and love building high-impact systems that serve thousands of users daily, we would love to meet you. Job Responsibilities : B.E/B.Tech in Computer Science or a related subject. 5+ years of experience in software development, ideally in high-scale environments. Proficient in C# (.NET Core), with experience in microservices and RESTful APIs. Write automated tests to ensure code quality and stability. Strong troubleshooting and problem-solving skills. Troubleshoot problems with 3rd party integrations and provide solutions in a fast-paced environment. Collaborate with product managers, architects, and other engineers to deliver high-quality features. Lead by example in code reviews, design and architecture discussions, and mentoring juniors. Participate in incident management and solve production issues with a sense of urgency and ownership. Job Requirement : Experience in Agile software development. Experience in travel or airline booking systems is a big plus. Understanding of airline industry protocols (GDS, NDC, IATA standards) is a strong advantage. Understanding of tools like the ELK stack, Grafana, Druid etc. would be an added advantage. Familiarity with CI/CD, containerization etc. Experience with any cloud platform (GCP is a plus; AWS, Azure etc.). Experience in handling a small team while working as an individual contributor. Interested candidates can email their CV to charmi.kapadia@tripstack.com Reach out : 9326635865

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

About the Team When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of Engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection, and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication. As Engineering Manager, you will be part of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us. About the Role We are looking for a seasoned Engineering Manager well-versed with emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks and carry on collaborations effectively. You will also transform newbies into experts and build reports on the progress of all projects. 
What you will do Design tasks for other engineers, keeping Meesho's guidelines and standards in mind. Keep a close eye on various projects and monitor the progress. Drive excellence in quality across the organisation and solutioning of product problems. Collaborate with the sales and design teams to create new products. Manage engineers and take ownership of the project while ensuring product scalability. Conduct regular meetings to plan and develop reports on the progress of projects. What you will need Bachelor's / Master's in Computer Science. At least 8+ years of professional experience. At least 4+ years' experience in managing software development teams. Experience in building large-scale distributed systems. Experience in scalable platforms. Expertise in Java/Python/Go-Lang and multithreading. Good understanding of Spark and its internals. Deep understanding of transactional and NoSQL DBs. Deep understanding of messaging systems such as Kafka. Good experience with cloud infrastructure - AWS preferably. Ability to drive sprints and OKRs with good stakeholder management experience. Exceptional team managing skills. Experience in managing a team of 4-5 junior engineers. Good understanding of streaming and real-time pipelines. Good understanding of data modelling concepts and Data Quality tools. Good knowledge of Business Intelligence tools - Metabase, Superset, Tableau etc. Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot etc. Good to have: knowledge of data pipeline building.

Posted 2 months ago

Apply

6.0 - 9.0 years

15 - 21 Lacs

Hyderabad

Work from Office

About Zscaler Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. Our Engineering team built the world’s largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We are seeking a visionary and dynamic Staff Engineer to join our ZPA ControlPath team. 
You will be reporting to the Senior Manager and will be responsible for: Working closely with Principal Engineers, collaborating on architectural changes and product designs, thereby making impactful changes Building and operating high-scale systems Overseeing the software development lifecycle to deliver quality products and aligning technical solutions with product and user needs What We're Looking for (Minimum Qualifications) 6+ years of experience in Java coding in a highly distributed and enterprise-scale environment Experience being on-call, dealing with cloud incidents, and writing RCAs Working knowledge of cloud infrastructure services on AWS/Azure Great mentor and coach Bachelor's or Master's degree in Computer Science, or equivalent experience What Will Make You Stand Out (Preferred Qualifications) Experience building full CI/CD systems leveraging Kubernetes for microservices Experience building reliable and extensible data tiers for large-scale web services (Postgres and Redis) Experience building web service frameworks with logging and analytics, with experience in Druid, Kafka, OpenSearch #LI-Hybrid #LI-AC10 At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: Various health plans Time off plans for vacation and sick time Parental leave options Retirement options Education reimbursement In-office perks, and more! 
By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 2 months ago

Apply

8.0 - 13.0 years

40 - 65 Lacs

Bengaluru

Work from Office

About the team When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We’ve done this – with zero downtime! Sounds impossible? Well, that’s the kind of Engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection, and see failures as opportunities to become better. We’ve taken steps to inculcate a strong ‘Founder’s Mindset’ across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication. As Engineering Manager, you will be part of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren’t building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games – or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us. About the role We are looking for a seasoned Engineering Manager well-versed with emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees’ tasks and carry on collaborations effectively. 
You will also transform newbies into experts and build reports on the progress of all projects. What you will do Design tasks for other engineers, keeping Meesho’s guidelines and standards in mind. Keep a close eye on various projects and monitor the progress. Drive excellence in quality across the organisation and solutioning of product problems. Collaborate with the sales and design teams to create new products. Manage engineers and take ownership of the project while ensuring product scalability. Conduct regular meetings to plan and develop reports on the progress of projects. What you will need Bachelor's / Master’s in Computer Science. At least 8+ years of professional experience. At least 4+ years’ experience in managing software development teams. Experience in building large-scale distributed systems. Experience in scalable platforms. Expertise in Java/Python/Go-Lang and multithreading. Good understanding of Spark and its internals. Deep understanding of transactional and NoSQL DBs. Deep understanding of messaging systems such as Kafka. Good experience with cloud infrastructure - AWS preferably. Ability to drive sprints and OKRs with good stakeholder management experience. Exceptional team managing skills. Experience in managing a team of 4-5 junior engineers. Good understanding of streaming and real-time pipelines. Good understanding of data modelling concepts and Data Quality tools. Good knowledge of Business Intelligence tools - Metabase, Superset, Tableau etc. Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot etc. Good to have: knowledge of data pipeline building.

Posted 3 months ago

Apply

3.0 - 8.0 years

3 - 6 Lacs

Pune, Bengaluru

Work from Office

We are seeking a skilled and experienced Druid Developer to design, develop, and maintain real-time data analytics solutions using Apache Druid. The ideal candidate will have hands-on experience working with Druid, a deep understanding of distributed systems, and a passion for processing large-scale datasets. You will play a pivotal role in creating scalable, high-performance systems that enable real-time decision-making. Technical Skills: Strong experience with Apache Druid, including ingestion, query optimizations, and cluster management. Proficiency in real-time data streaming technologies (e.g., Apache Kafka, AWS Kinesis). Experience with data transformation and ETL processes. Knowledge of relational and NoSQL databases (e.g., PostgreSQL, MongoDB). Hands-on experience with cloud platforms (AWS, GCP, Azure) for deploying Druid clusters. Proficiency in programming languages like Java, Python, or Scala. Familiarity with containerization tools like Docker and orchestration tools like Kubernetes.
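For orientation on the query side of the Druid work described above: Apache Druid exposes a SQL endpoint (POST /druid/v2/sql) that accepts a JSON body containing the query text. The datasource, column, and interval below are hypothetical, and this sketch only builds the request payload rather than contacting a real cluster.

```python
import json

def build_druid_sql_request(datasource, interval_start, interval_end):
    """Build the JSON payload for a time-bucketed Druid SQL query.

    The datasource name and time interval are illustrative; a real
    deployment would POST this body to /druid/v2/sql on a broker.
    """
    query = (
        f"SELECT TIME_FLOOR(__time, 'PT1H') AS hour, COUNT(*) AS events "
        f"FROM \"{datasource}\" "
        f"WHERE __time >= TIMESTAMP '{interval_start}' "
        f"AND __time < TIMESTAMP '{interval_end}' "
        f"GROUP BY 1"
    )
    # resultFormat "object" asks the broker for one JSON object per row.
    return json.dumps({"query": query, "resultFormat": "object"})

payload = build_druid_sql_request(
    "events", "2024-01-01 00:00:00", "2024-01-02 00:00:00"
)
```

Bucketing on `__time` with `TIME_FLOOR` is the typical shape of a Druid rollup query, since `__time` is the primary partitioning column in every Druid datasource.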

Posted 3 months ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote

About the Role
The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession and many such products and systems across Uber. We are building a unified platform for all of Uber's search use-cases. The team is building the platform on OpenSearch. We already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs.

We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable and secure platform for Uber's core business use-cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. An ideal candidate will work closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong skills in technology and system architecture/design. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers while directly contributing on the technical side too.

What the Candidate Will Do
- Provide technical leadership; influence and partner with fellow engineers to architect, design and build infrastructure that can stand the test of scale and availability, while reducing operational overhead.
- Lead, manage and grow a team of software engineers. Mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices.
- Own the craftsmanship, reliability, and scalability of your solutions.
- Encourage innovation, implementation of ground-breaking technologies, outside-of-the-box thinking, teamwork, and self-organization.
- Hire top-performing engineering talent and maintain our dedication to diversity and inclusion.
- Collaborate with platform, product and security engineering teams; enable successful use of infrastructure and foundational services; and manage upstream and downstream dependencies.

Basic Qualifications
- Bachelor's degree (or higher) in Computer Science or a related field.
- 10+ years of software engineering industry experience.
- 8+ years of experience as an IC building large-scale distributed software systems.
- Outstanding technical skills in backend: Uber managers can lead from the front when the situation calls for it.
- 1+ years of frontline management of a diverse set of engineers.

Preferred Qualifications
- Prior experience with Search or big data systems - OpenSearch, Lucene, Pinot, Druid, Spark, Hive, HUDI, Iceberg, Presto, Flink, HDFS, YARN, etc.

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.

Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
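The listing above centers on Lucene/OpenSearch-based search infrastructure. At the heart of such engines is an inverted index; the toy sketch below (names and documents invented for illustration) shows the basic structure, ignoring the scoring, analysis, and segment machinery a real engine adds.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: maps each term to the set of doc ids containing it.
    Real engines (Lucene, OpenSearch) add tokenization, scoring, and
    compressed on-disk postings on top of this idea."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id: int, text: str) -> None:
        # Naive whitespace tokenization; production analyzers do far more.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, *terms: str) -> set:
        """AND query: ids of documents containing every term."""
        sets = [self.postings.get(t.lower(), set()) for t in terms]
        return set.intersection(*sets) if sets else set()

idx = InvertedIndex()
idx.add(1, "uber eats delivery")
idx.add(2, "uber freight shipping")
hits = idx.search("uber", "eats")  # only doc 1 contains both terms
```

Looking up candidate documents per term and intersecting postings is what lets these systems answer multi-term queries without scanning every document.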

Posted 3 months ago

Apply

5 - 9 years

17 - 27 Lacs

Pune

Hybrid

Job Summary: We are looking for a Senior Software Engineer who will play a key role in building and enhancing the technology that powers our flight booking platform. You'll work on challenges such as dynamic pricing, booking reliability, and third-party airline integrations. We're looking for someone who combines strong software engineering fundamentals with domain knowledge, or a passion to learn, about the travel and aviation industry. If you thrive in a fast-paced environment and love building high-impact systems that serve thousands of users daily, we would love to meet you.

Job Responsibilities:
- B.E/B.Tech in Computer Science or a related subject.
- 6+ years of experience in software development, ideally in high-scale environments.
- Proficient in C# (.NET Core), with experience in microservices and RESTful APIs.
- Write automated tests to ensure code quality and stability.
- Strong troubleshooting and problem-solving skills.
- Troubleshoot problems with third-party integrations and provide solutions in a fast-paced environment.
- Collaborate with product managers, architects, and other engineers to deliver high-quality features.
- Lead by example in code reviews, design and architecture discussions, and mentoring juniors.
- Participate in incident management and solve production issues with a sense of urgency and ownership.

Job Requirements:
- Experience in Agile software development.
- Experience in travel or airline booking systems is a big plus.
- Understanding of airline industry protocols (GDS, NDC, IATA standards) is a strong advantage.
- Understanding of tools like the ELK stack, Grafana, Druid, etc. would be an added advantage.
- Familiarity with CI/CD, containerization, etc.
- Experience with any cloud platform (GCP is a plus; AWS, Azure, etc.).
- Experience handling a small team while working as an individual contributor.

Interested candidates can email their CV to charmi.kapadia@tripstack.com
Reach out: 9326635865

Posted 4 months ago

Apply

1.0 - 4.0 years

9 - 10 Lacs

bengaluru

Work from Office

Responsibilities:
- As an integral part of the Data Platform team, take ownership of multiple modules from design to deployment.
- Extensively build scalable, high-performance distributed systems that deal with large data volumes.
- Provide resolutions and/or workarounds to data pipeline related queries/issues as appropriate.
- Ensure that ingestion pipelines that empower the Data Lake and Data Warehouses are up and running.
- Collaborate with different teams in order to understand/resolve data availability and consistency issues.
- Exhibit continuous improvement in problem resolution skills and strive for excellence.

What are we looking for?
- Overall 1-3 years of experience in the software industry, with a minimum of 2.5 years on Big Data and related tech stacks; preferably from e-commerce companies.
- Strong core Java programming skills; programming skills in Scala are good to have.
- Good design and documentation skills.
- Experience working with data at scale.
- Ability to read and write SQL, and an understanding of a relational database such as MySQL, Oracle, Postgres, or SQL Server.
- Development experience using Hadoop, Spark, Kafka, MapReduce, Hive, and NoSQL databases like HBase.
- Exposure to tech stacks like Flink, Druid, etc.
- Prior exposure to building real-time data pipelines would be an added advantage.
- Comfortable with Linux, with the ability to write small scripts in Bash/Python and to grapple with log files and Unix processes.
- Prior experience working on cloud services, preferably AWS.
- Ability to learn complex new things quickly.
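The real-time pipelines this role mentions typically reduce event streams into fixed time windows. The sketch below is a minimal tumbling-window count in plain Python (event data invented), mirroring in miniature what a Flink or Spark Structured Streaming window aggregation computes at scale.

```python
from collections import Counter

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed non-overlapping windows
    and count occurrences of each key per window."""
    windows = {}
    for ts, key in events:
        start = (ts // window_secs) * window_secs  # window the event falls in
        windows.setdefault(start, Counter())[key] += 1
    return windows

# Hypothetical click-stream events: (timestamp in seconds, event type).
events = [(3, "click"), (7, "view"), (12, "click"), (14, "click")]
result = tumbling_window_counts(events, 10)
# Window [0,10) holds one click and one view; window [10,20) holds two clicks.
```

Production engines add the hard parts this sketch ignores: out-of-order events, watermarks, state checkpointing, and exactly-once delivery.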

Posted Date not available

Apply

1.0 - 6.0 years

3 - 8 Lacs

bengaluru

Work from Office

We have developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go to 30K RPS. Being a consumer app, these systems have SLAs of ~10ms. Our distributed scheduler tracks more than 50 million shipments periodically from different partners and does async processing involving an RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka.
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed.
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming.
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code.
- Familiarity and hands-on work with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; DBT, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow).
- Hands-on data modeling experience and exposure to end-to-end data pipeline development.
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker.
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL.
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture.
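The circuit breaking this listing attributes to Hystrix and spring-cloud-gateway can be sketched as a small state machine; the thresholds and class below are illustrative, not the team's actual configuration.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, fail fast while open, and allow a trial call after
    `reset_secs` (the half-open state)."""

    def __init__(self, max_failures=3, reset_secs=30.0):
        self.max_failures = max_failures
        self.reset_secs = reset_secs
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_secs:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Hystrix and its successors layer sliding-window failure rates, bulkheads, and fallback responses on top of this basic open/half-open/closed cycle, which is what keeps a ~10ms-SLA gateway from queuing behind a dead downstream.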

Posted Date not available

Apply