
4 Apache Druid Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Job Purpose
VI is looking for an experienced Apache Druid Developer to join our data engineering team. The ideal candidate will have a deep understanding of real-time data ingestion, processing, and querying in large-scale analytics environments, and will be responsible for designing, implementing, and optimizing data pipelines in Apache Druid to support real-time analytics and drive business insights.

Key Result Areas/Accountabilities
- Data Ingestion Pipelines: Design and implement data ingestion workflows in Apache Druid, covering both real-time and batch ingestion.
- Query Optimization: Develop optimized Druid queries and leverage Druid's indexing and storage capabilities to ensure low-latency, high-performance analytics.
- Data Modeling: Create and maintain schemas optimized for time-series analysis, supporting aggregation, filtering, and complex analytical functions.
- Cluster Management: Deploy, configure, and manage Druid clusters, monitoring performance, reliability, and cost-effectiveness.
- Data Integration: Collaborate with other teams to integrate Druid with data sources (e.g., Kafka, Hadoop, S3) and downstream applications.
- Performance Monitoring & Tuning: Continuously monitor cluster performance, fine-tune configurations, and troubleshoot issues that may impact availability and response times.

Core Competencies, Knowledge, Experience
- In-depth experience with Apache Druid (setup, configuration, tuning, and operations).
- Strong knowledge of SQL and familiarity with Druid SQL.
- Experience with data ingestion and ETL pipelines, especially with Kafka, Hadoop, Spark, and other data sources.
- Proficiency in Java, Python, or other programming languages for custom data processing and integration.
- Familiarity with distributed data systems and big data frameworks (e.g., Hadoop, Apache Kafka, Apache Spark).

Must-have technical/professional qualifications
- Bachelor's degree in Computer Science, with 6+ years of experience in Data Science, Engineering, or a related field.
- Experience with data visualization tools (e.g., Superset, Tableau, Looker) and their integration with Druid.
- Experience with other analytics databases (e.g., ClickHouse, Snowflake, BigQuery).
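The ingestion-pipeline work described above centers on Druid ingestion specs. As an illustrative sketch only (the datasource, topic, columns, and broker address are invented, not from the posting), a minimal Kafka supervisor spec can be assembled as a plain Python dictionary before being submitted to the Overlord:

```python
import json

def kafka_supervisor_spec(datasource, topic, bootstrap_servers):
    """Build a minimal Druid Kafka supervisor spec (all names illustrative)."""
    return {
        "type": "kafka",
        "spec": {
            "dataSchema": {
                "dataSource": datasource,
                "timestampSpec": {"column": "ts", "format": "iso"},
                "dimensionsSpec": {"dimensions": ["user_id", "page"]},
                "granularitySpec": {
                    "type": "uniform",
                    "segmentGranularity": "HOUR",
                    "queryGranularity": "MINUTE",
                    "rollup": True,  # pre-aggregate at ingestion time
                },
            },
            "ioConfig": {
                "topic": topic,
                "inputFormat": {"type": "json"},
                "consumerProperties": {"bootstrap.servers": bootstrap_servers},
                "taskCount": 1,
            },
            "tuningConfig": {"type": "kafka"},
        },
    }

spec = kafka_supervisor_spec("page_events", "events", "kafka:9092")
# In practice this dict would be serialized with json.dumps and POSTed
# to the Overlord's /druid/indexer/v1/supervisor endpoint.
```

Real specs carry many more tuning options; this only shows the three top-level sections (dataSchema, ioConfig, tuningConfig) the role would work with daily.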

Posted 2 weeks ago


5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Data Engineer at Blis, you will be part of a globally recognized and award-winning team that specializes in big data analytics and advertising. We collaborate with iconic brands like McDonald's, Samsung, and Mercedes-Benz, providing precise audience insights to help them target their ideal customers effectively. Upholding ethical data practices and privacy rights is at the core of our operations, and we are committed to ensuring outstanding performance and reliability in all our systems.

Working at Blis means being part of an international company with a diverse culture, spanning four continents and comprising over 300 team members. Headquartered in the UK, we are financially successful and poised for continued growth, offering you an exciting opportunity to contribute to our journey.

Your primary responsibility as a Data Engineer will be designing and implementing high-performance data pipelines on Google Cloud Platform (GCP) to handle massive amounts of data efficiently. With a focus on scalability and automation, you will play a crucial role in building secure pipelines that can process over 350GB of data per hour and respond to 400,000 decision requests each second. Your expertise will be instrumental in driving improvements in data architecture, optimizing resource utilization, and delivering fast, accurate insights to stakeholders.

Collaboration is key at Blis, and you will work closely with product and engineering teams to ensure that our data infrastructure evolves to support new initiatives seamlessly. Additionally, you will mentor and support team members, fostering a collaborative environment that encourages knowledge sharing, innovation, and professional growth.

To excel in this role, you should have at least 5 years of hands-on experience with large-scale data systems, with a strong focus on designing and maintaining efficient data pipelines. Proficiency with Apache Druid and the Imply platform, along with expertise in cloud-based services like GCP, is essential. You should also have a solid understanding of Python for building and optimizing data flows, as well as experience with data governance and quality assurance practices. Familiarity with event-driven architectures, tools like Apache Airflow, and distributed processing frameworks such as Spark will be beneficial. Your ability to apply complex algorithms and statistical techniques to large datasets, along with experience working with relational databases and non-interactive reporting solutions, will also be a valuable asset in this role.

Joining the Blis team means engaging in high-impact work in a data-intensive environment, collaborating with brilliant engineers, and being part of an innovative culture that prioritizes client obsession and agility. With a global reach and a commitment to diversity and inclusion, Blis offers a dynamic work environment where your contributions can make a tangible difference in the world of advertising technology.
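Pipelines at the volumes quoted above typically pre-aggregate events before they reach the analytics store. Purely as a toy, library-free sketch of that time-bucketed rollup pattern (the same kind of reduction Druid performs at ingestion time; all event fields below are invented):

```python
from collections import defaultdict
from datetime import datetime, timezone

def rollup_per_minute(events):
    """Collapse raw events into per-(minute, campaign) counts.

    A tiny stand-in for ingestion-time rollup: many raw rows become
    one aggregated row per time bucket and dimension combination.
    """
    buckets = defaultdict(int)
    for ev in events:
        ts = datetime.fromtimestamp(ev["ts"], tz=timezone.utc)
        minute = ts.replace(second=0, microsecond=0)
        buckets[(minute.isoformat(), ev["campaign"])] += 1
    return dict(buckets)

# Three raw events: two in the same minute for campaign A, one for B.
events = [
    {"ts": 1_700_000_000, "campaign": "A"},
    {"ts": 1_700_000_030, "campaign": "A"},
    {"ts": 1_700_000_090, "campaign": "B"},
]
rolled = rollup_per_minute(events)
print(rolled)  # two aggregated rows instead of three raw ones
```

At production scale this aggregation would run inside a distributed framework (Spark, Dataflow) or inside Druid itself rather than in a single Python process; the sketch only shows the shape of the transformation.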

Posted 3 weeks ago


5.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Job Purpose
VI is looking for an experienced Apache Druid Developer to join our data engineering team. The ideal candidate will have a deep understanding of real-time data ingestion, processing, and querying in large-scale analytics environments, and will be responsible for designing, implementing, and optimizing data pipelines in Apache Druid to support real-time analytics and drive business insights.

Key Result Areas/Accountabilities
- Data Ingestion Pipelines: Design and implement data ingestion workflows in Apache Druid, covering both real-time and batch ingestion.
- Query Optimization: Develop optimized Druid queries and leverage Druid's indexing and storage capabilities to ensure low-latency, high-performance analytics.
- Data Modeling: Create and maintain schemas optimized for time-series analysis, supporting aggregation, filtering, and complex analytical functions.
- Cluster Management: Deploy, configure, and manage Druid clusters, monitoring performance, reliability, and cost-effectiveness.
- Data Integration: Collaborate with other teams to integrate Druid with data sources (e.g., Kafka, Hadoop, S3) and downstream applications.
- Performance Monitoring & Tuning: Continuously monitor cluster performance, fine-tune configurations, and troubleshoot issues that may impact availability and response times.

Core Competencies, Knowledge, Experience
- In-depth experience with Apache Druid (setup, configuration, tuning, and operations).
- Strong knowledge of SQL and familiarity with Druid SQL.
- Experience with data ingestion and ETL pipelines, especially with Kafka, Hadoop, Spark, and other data sources.
- Proficiency in Java, Python, or other programming languages for custom data processing and integration.
- Familiarity with distributed data systems and big data frameworks (e.g., Hadoop, Apache Kafka, Apache Spark).

Must-have technical/professional qualifications
- Bachelor's degree in Computer Science, with 6+ years of experience in Data Science, Engineering, or a related field.
- Experience with data visualization tools (e.g., Superset, Tableau, Looker) and their integration with Druid.
- Experience with other analytics databases (e.g., ClickHouse, Snowflake, BigQuery).
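The role calls for strong Druid SQL skills. As an illustrative sketch (the datasource and columns are hypothetical, not from the posting), a per-minute event count can be expressed as the JSON payload that Druid's /druid/v2/sql endpoint accepts:

```python
import json

def druid_sql_request(datasource, interval_hours=1):
    """Build the JSON payload for Druid's SQL API (names illustrative)."""
    query = (
        "SELECT TIME_FLOOR(__time, 'PT1M') AS minute, COUNT(*) AS events "
        f'FROM "{datasource}" '
        f"WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '{interval_hours}' HOUR "
        "GROUP BY 1 ORDER BY 1"
    )
    return {"query": query, "resultFormat": "objectLines"}

payload = druid_sql_request("page_events")
print(json.dumps(payload, indent=2))
# The payload would be POSTed to http://<broker>/druid/v2/sql
```

Grouping on TIME_FLOOR(__time, ...) is the idiomatic Druid SQL pattern for time-series aggregation; filtering on __time lets the broker prune segments instead of scanning the whole table.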

Posted 1 month ago


4 - 9 years

16 - 25 Lacs

Chennai

Work from Office

Job Summary: We are seeking a skilled Java Developer with experience in Drools (a business rules management system) or Apache Druid (a real-time analytics database) to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining robust, scalable Java-based applications, with a focus on rule engines or real-time data processing.

Responsibilities:
- Design, develop, test, and deploy Java applications and services.
- Integrate and configure Drools for rule-based processing, or Apache Druid for real-time analytics (depending on skill set).
- Collaborate with cross-functional teams to define technical solutions and requirements.
- Write clean, maintainable, and efficient code following best practices.
- Monitor and troubleshoot application performance and reliability issues.
- Develop unit and integration tests to ensure software quality.
- Maintain technical documentation for systems and processes.

Required Skills and Qualifications:
- 3+ years of experience in Java development.
- Hands-on experience with Drools or Apache Druid (at least one is mandatory).
- Strong understanding of object-oriented design and design patterns.
- Experience with Spring/Spring Boot frameworks.
- Familiarity with RESTful APIs and microservices architecture.
- Knowledge of relational databases (e.g., MySQL, PostgreSQL) and NoSQL stores.
- Experience with build tools (Maven, Gradle) and version control systems (Git).
- Excellent problem-solving and debugging skills.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience working in agile development environments.
- Familiarity with containerization tools (Docker, Kubernetes).
- Exposure to CI/CD pipelines.
- Knowledge of cloud platforms (AWS, GCP, or Azure).
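Drools itself is a Java rules engine with its own DRL rule language; purely to illustrate the condition/action pattern such engines evaluate (the rules and order fields below are invented, and this single-pass loop is a simplification of a real Drools session):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable  # predicate over a fact
    action: Callable     # side effect applied when the condition matches

def run_rules(rules, fact):
    """Fire every rule whose condition matches the fact and return the
    names of the rules that fired (no re-evaluation or salience here)."""
    fired = []
    for rule in rules:
        if rule.condition(fact):
            rule.action(fact)
            fired.append(rule.name)
    return fired

rules = [
    Rule("high-value", lambda f: f["amount"] > 1000,
         lambda f: f.setdefault("flags", []).append("review")),
    Rule("foreign", lambda f: f["country"] != "IN",
         lambda f: f.setdefault("flags", []).append("fx-check")),
]
order = {"amount": 2500, "country": "US"}
print(run_rules(rules, order))  # both rules fire
print(order["flags"])
```

A real Drools engine adds what this sketch omits: declarative DRL syntax, the Rete matching algorithm, rule salience, and re-evaluation when actions modify facts.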

Posted 2 months ago
