Jobs
Interviews

100 ClickHouse Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

5 - 7 Lacs

Kolkata, West Bengal, India

On-site

We are seeking a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, and Python. You will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. This role requires expertise in data modeling, container orchestration, and collaboration with data scientists and analysts to ensure data quality and meet business needs.

Roles & Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics, including data modeling, query optimization, and performance tuning.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve the performance of data infrastructure.

Skills Required:
- Strong experience in ClickHouse (data modeling, query optimization, performance tuning).
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL (schema design, indexing, and performance).
- Solid knowledge of Kubernetes (managing containers, deployments, and scaling).
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure is a plus.
- Knowledge of data warehousing and distributed data systems is a plus.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana is a plus.
QUALIFICATION: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
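The pipeline work described above follows the classic extract-transform-load pattern. Below is a minimal Python sketch under stated assumptions: the field names are hypothetical, and an in-memory list stands in for the ClickHouse/PostgreSQL target.

```python
# Minimal ETL sketch. Extract raw source rows, clean and deduplicate
# them, then "load" into an in-memory sink. All field names here
# (user_id, event, amount) are illustrative, not from a real schema.

def extract(raw_rows):
    """Extract: turn raw (source-system) tuples into dicts."""
    for row in raw_rows:
        yield {"user_id": row[0], "event": row[1], "amount": row[2]}

def transform(records):
    """Transform: drop malformed rows and deduplicate on (user_id, event)."""
    seen = set()
    for rec in records:
        if rec["amount"] is None:
            continue  # skip rows missing a required value
        key = (rec["user_id"], rec["event"])
        if key in seen:
            continue  # deduplicate repeated events
        seen.add(key)
        rec["amount"] = round(float(rec["amount"]), 2)
        yield rec

def load(records, sink):
    """Load: append cleaned records into the target store."""
    for rec in records:
        sink.append(rec)
    return sink

raw = [(1, "buy", "10.5"), (1, "buy", "10.5"), (2, "view", None)]
warehouse = load(transform(extract(raw)), [])
```

In a production pipeline each stage would be swapped for real I/O (a source extractor, a validation layer, a batched insert into the warehouse), but the staged structure stays the same.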

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

As a Backend System Engineer at Ciroos, you will design and implement highly scalable, low-latency microservices for observability platforms. You will build and maintain cloud-native infrastructure using Kubernetes and modern container orchestration, develop large-scale data analytics pipelines for real-time observability and monitoring, and integrate AI agents with existing infrastructure systems and observability tools to optimize system performance, reliability, and throughput for high-volume data processing.

In this role, you will design and implement distributed messaging systems and event-driven architectures, collaborating closely with AI/ML teams to enable seamless human-AI interactions in SRE workflows. You will also build robust APIs and services that support real-time analytics and alerting, participate in system architecture decisions and technical design reviews, and ensure system security, monitoring, and operational excellence.

To qualify for this position, you should have at least 3 years of experience building highly scalable, distributed backend systems. Proficiency in programming languages such as Rust, Go, and Python for backend development is required, along with deep hands-on experience in microservices, distributed databases, and messaging systems like Kafka and RabbitMQ. Strong knowledge of analytical databases such as ClickHouse, Apache Pinot, or similar columnar databases is essential, as is experience with cloud platforms like AWS, GCP, or Azure, container orchestration using Kubernetes and Docker, and observability standards including OpenTelemetry, Prometheus, and monitoring best practices.

A solid foundation in computer science principles (algorithms, data structures, and system design), knowledge of statistical methods and data analysis techniques, excellent problem-solving skills, and the ability to work effectively in a fast-paced, team-oriented environment are also key qualifications. If you are excited by the challenges of AI-human interaction and enjoy solving complex distributed systems problems, this position offers the opportunity to make a real impact in a cutting-edge startup environment at Ciroos.

Posted 2 weeks ago

Apply

8.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled and experienced Tech Lead in BI & Analytics with 8 to 15 years of experience, based in Bangalore. As a Tech Lead, you will spearhead data engineering and analytics initiatives. Your role will involve leading end-to-end BI & analytics projects, architecting scalable data solutions, developing APIs, mentoring a team, and collaborating with business stakeholders to translate reporting needs into technical solutions. You will need a strong foundation in Python, cloud platforms (preferably Azure), modern data architectures, ClickHouse, Databricks, and API development. Experience with GenAI code-assist tools, Docker, Kubernetes, and containerized applications is also essential for this role.

Key Responsibilities:
- Lead end-to-end BI & analytics projects, from data ingestion to dashboard/report delivery.
- Architect scalable data solutions using ClickHouse, Databricks, and Azure Cloud services.
- Drive the design and development of APIs to support data services and integration.
- Mentor and lead a team of engineers and analysts, ensuring best practices in code quality and analytics delivery.
- Leverage GenAI-based code-assist tools to enhance development productivity and accelerate solution delivery.
- Implement containerized applications using Docker/Kubernetes for data deployment pipelines.
- Collaborate with business stakeholders to understand reporting needs and translate them into actionable technical solutions.
- Monitor, optimize, and troubleshoot system performance and data workflows.

Required Skills & Qualifications:
- 8 to 15 years of proven experience in BI, data engineering, and analytics.
- Strong proficiency in Python for data scripting and API development.
- Deep hands-on experience with ClickHouse and Databricks.
- Proficiency in Azure Cloud services, including Data Factory, Data Lake, Synapse, etc.
- Experience with GenAI tools such as GitHub Copilot or other code assistants is a strong plus.
- Hands-on experience with containerization technologies like Docker and Kubernetes.
- Strong analytical and problem-solving skills with keen attention to detail.
- Ability to lead technical teams and communicate effectively with stakeholders.

Nice-to-Have:
- Prior experience with CI/CD for data pipelines.
- Exposure to other BI tools (Power BI, Tableau, etc.).
- Familiarity with data governance, security, and compliance practices.

Posted 2 weeks ago

Apply

8.0 - 15.0 years

0 Lacs

Karnataka

On-site

As a Tech Lead specializing in BI & Analytics, you will be responsible for leading the development of scalable data solutions. Your primary focus will be on managing BI and analytics projects from concept to deployment, architecting and developing APIs and microservices, and building and optimizing large-scale data pipelines and storage solutions using tools like ClickHouse and Databricks. You will also drive development using Azure Cloud services such as Data Factory, Blob Storage, and Azure Databricks, leverage Generative AI tools for development acceleration, and design deployment workflows using Docker and/or Kubernetes. Collaboration with stakeholders and mentoring junior team members will be crucial to ensure timely delivery and high-quality output.

The ideal candidate should have hands-on experience with Python, ClickHouse, Databricks, Azure Cloud, and containerization technologies, along with a good understanding of GenAI-based code-assist tools and API development. You should possess strong leadership skills with proven experience leading and mentoring a technical team. Preferred qualifications include strong communication and collaboration skills, experience working in agile environments, and exposure to CI/CD tools and Git branching strategies.

If you have 8 to 15 years of experience in BI, analytics, and backend development, along with proficiency in programming languages, BI & analytics tools, cloud services, API development, containerization, and AI tools, we invite you to join our team as a Tech Lead - BI & Analytics in Bangalore (work from office preferred) on a full-time basis. The joining timeline for this position is within 15 days.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a PL/SQL Developer, your primary responsibility will be to design, develop, and optimize PL/SQL scripts, stored procedures, and functions. You will also manage and maintain cloud databases on AWS RDS, ClickHouse, and PostgreSQL. Performance tuning, query optimization, and troubleshooting will be key aspects of your role in ensuring database efficiency, and it will be crucial to maintain data integrity, security, and compliance with industry standards. Collaboration with application developers to enhance database interactions will be essential, as will automating database processes and tasks using scripts and cloud-native tools. Monitoring database performance and availability through cloud monitoring tools will be a regular part of your responsibilities, and you will also participate in database migration and modernization projects.

The ideal candidate should have proven experience as a PL/SQL Developer with advanced SQL programming skills. Hands-on experience with AWS RDS, including provisioning, scaling, and backups, is necessary, as is proficiency in ClickHouse for high-performance analytics and data warehousing. A strong background in PostgreSQL database administration and optimization is also required. Experience with cloud platforms like AWS, Azure, or Google Cloud will be beneficial, and familiarity with database security best practices and data encryption is desired. The ability to troubleshoot complex database issues and optimize performance will be crucial, and excellent problem-solving skills and attention to detail will contribute to success in this role.

If you are passionate about database development and management and possess the necessary skills and qualifications, we invite you to apply for this exciting opportunity at VT HIL with Candela.
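The query tuning and indexing work this role describes can be illustrated with Python's standard-library sqlite3 as a stand-in for PostgreSQL or ClickHouse; the table and index names below are made up for the example.

```python
import sqlite3

# Show how an index changes the query plan, using stdlib sqlite3 as
# a stand-in for a production RDBMS. Table/index names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 100, i * 1.5) for i in range(1000)],
)

# Without an index, this predicate forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

# After adding an index, the planner switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

print(plan_before[-1][-1])  # e.g. a SCAN over orders
print(plan_after[-1][-1])   # e.g. SEARCH ... USING INDEX idx_orders_customer
```

The same before/after-plan comparison is the everyday workflow in PostgreSQL (`EXPLAIN ANALYZE`) and ClickHouse (`EXPLAIN`), just with richer planner output.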

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Lifesight is a fast-growing SaaS company that is dedicated to assisting businesses in utilizing data and AI to enhance customer acquisition and retention. With a team of 130 professionals spread across 5 offices in the US, Singapore, India, Australia, and the UK, we serve over 300 customers globally. Our primary goal is to empower non-technical marketers with advanced data activation and marketing measurement tools driven by AI to enhance performance and achieve key performance indicators. We are rapidly expanding our product globally and are seeking talented individuals to join our team and contribute to our growth. As a Senior Software Engineer for the platform team at Lifesight, you will play a crucial role in developing core microservices. Your responsibilities will include building services to handle queries on a large scale, developing scalable APIs for user segmentation, and evolving the architecture to support millions of notifications per day across various channels like Email, SMS, and in-app notifications. 
Key Responsibilities:
- Lead the development of modules and services, ensuring scalability, reliability, and fault tolerance
- Code, design, prototype, and participate in reviews to maintain high-quality systems
- Refactor applications and architectures continuously to uphold quality levels
- Stay updated on the latest technologies in distributed systems and caching
- Write readable, reusable, and extensible code daily

Requirements:
- Minimum 5 years of experience designing, developing, testing, and deploying large-scale applications and microservices
- Proficiency in Java and Spring Boot
- Familiarity with cloud technologies, NoSQL stores, Kubernetes, and messaging systems
- Strong teamwork skills with a willingness to learn
- Experience building low-latency, high-volume REST APIs
- Proficiency working with distributed caches like Redis
- Ability to deliver results efficiently

Preferred Skills:
- Experience with containerization technologies like Docker and Kubernetes
- Familiarity with cloud platforms, preferably GCP
- Experience with NoSQL stores such as Cassandra, ClickHouse, and BigQuery

What We Offer:
- Opportunities for personal and professional growth
- A collaborative work environment focused on innovative technology and engineering strategies
- A culture that values work-life balance and personal well-being
- Exciting team events and activities to foster a strong bond among team members

Join us at Lifesight, one of the fastest-growing MarTech companies, and be part of the core team shaping the future of our product with cutting-edge technologies and a supportive work culture.
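The distributed-cache requirement above boils down to the TTL (time-to-live) caching pattern that Redis provides. Here is a single-process sketch of that pattern, not a Redis client; the keys, TTL, and lookup function are illustrative.

```python
import time

# Sketch of the TTL-cache pattern a distributed cache like Redis
# provides, reduced to a single-process dict. Entries expire lazily
# when read after their deadline.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, handy for tests
        self._store = {}            # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]    # lazy expiry on read
            return default
        return value

# Usage: wrap an expensive lookup in a read-through helper so repeat
# requests within the TTL skip the backing store.
cache = TTLCache(ttl_seconds=30)

def get_user(user_id, db_lookup):
    hit = cache.get(user_id)
    if hit is not None:
        return hit
    value = db_lookup(user_id)
    cache.set(user_id, value)
    return value
```

A real deployment replaces the dict with Redis `SET key value EX ttl` / `GET key` so the cache is shared across service instances, but the read-through flow is the same.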

Posted 2 weeks ago

Apply

8.0 - 15.0 years

0 Lacs

Karnataka

On-site

As a Tech Lead specializing in Business Intelligence (BI) and Analytics, you will play a crucial role in leading cross-functional technical teams and driving the development of innovative solutions. With 8-15 years of experience, you will bring strong expertise in BI tools, API development, and GenAI code-assist tools. Your responsibilities will include hands-on work with Python, ClickHouse, Databricks, Azure Cloud, and containerization technologies.

Your main focus will be on using your Python programming skills to develop robust BI solutions. You will leverage hands-on experience with ClickHouse and Databricks to analyze and interpret complex datasets effectively. Proficiency in the Azure Cloud environment will be essential for deploying scalable solutions, and familiarity with containerization technologies such as Docker and Kubernetes will help you streamline deployment processes. Exposure to Generative AI tools and APIs will be advantageous in exploring innovative approaches to BI and analytics. Overall, your technical leadership and expertise will be instrumental in driving the success of our BI projects.

If you are a dynamic individual with a passion for BI and analytics, we look forward to welcoming you to our team.

Regards,
Santhosh
HR@YVINNOVATIVECONSULTING.COM

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

India

On-site

About Fam (previously FamPay)
Fam is India's first payments app for everyone above 11. FamApp helps make online and offline payments through UPI and FamCard. We are on a mission to raise a new, financially aware generation and drive 250 million+ of India's youngest users to kickstart their financial journey early in life.

About this Role:
We are looking for a high-impact Technical Lead Manager to drive the development of scalable, high-performance systems for our fintech platform. You will play a crucial role in architecting, building, and optimizing distributed systems, data pipelines, and query performance. If you love solving complex engineering challenges and want to shape the future of financial technology, this role is for you.

On the Job:
- Lead and mentor a team of engineers, ensuring best practices in software development, high-level system design, and low-level implementation design
- Design and implement scalable, resilient, and high-performance distributed systems
- Own technical decisions related to high-level system design, infrastructure, and performance optimizations
- Build and optimize large-scale data pipelines and query performance for efficient data processing
- Work closely with product managers and stakeholders to translate business requirements into robust technical solutions
- Ensure best practices in code quality, testing, CI/CD, and deployment strategies
- Continuously improve system reliability, security, and scalability to handle fintech-grade workloads
- Own the functional reliability and uptime of 24x7 live services, ensuring minimal downtime and quick incident resolution
- Champion both functional and non-functional quality attributes, such as performance, availability, scalability, and maintainability

Must-haves (Min. qualifications):
- 7+ years of hands-on software development experience with a track record of building scalable, distributed systems
- Minimum 1 year of experience managing/mentoring a team
- Experience with consumer-facing systems in a B2C environment
- Expertise in high-level system design and low-level implementation design, with experience handling high-scale, low-latency applications
- Strong coding skills in languages like Java, Go, Python, or Kotlin
- Experience with databases and query optimization, including PostgreSQL, MySQL, or NoSQL (DynamoDB, Cassandra, etc.)
- Deep understanding of data pipelines, event-driven systems, and distributed messaging systems like Kafka and RabbitMQ
- Proficiency in cloud platforms like AWS and infrastructure-as-code (Terraform, Kubernetes)
- Strong problem-solving and debugging skills with a passion for performance optimization

Good to have:
- Prior experience in fintech, payments, lending, or banking domains
- Exposure to real-time analytics and big data technologies (Spark, Flink, Presto, ClickHouse, etc.)
- Experience with containerization and microservices architecture
- Open-source contributions or active participation in tech communities

Why join us:
- Be part of an early-stage fintech startup solving real-world financial challenges
- Work on cutting-edge technology that handles millions of transactions daily
- Opportunity to lead and grow in a high-impact leadership role
- Collaborate with a world-class team of engineers, product leaders, and fintech experts
- In-person role in Bengaluru - an exciting, fast-paced work environment!
- Collaborate directly with (Co-founder) and (Founding Team - Head, Engineering) to lead the engineering team, develop scalable solutions, and ensure seamless delivery of key projects

Perks That Go Beyond the Paycheck:
- Relocation assistance to make your move seamless
- Free office meals (lunch & dinner)
- Generous leave policy, including birthday leave, period leave, paternity and maternity support, and more
- Salary advance and loan policies for any financial help
- Quarterly rewards and recognition programs, and a referral program with great incentives
- Access to the latest gadgets and tools
- Comprehensive health insurance for you and your family, plus mental health support
- Tax benefits with options like food coupons, phone allowances, and car/device leasing
- Retirement perks like PF contribution, leave encashment, and gratuity

Here's all the tea on FamApp
FamApp focuses on financial inclusion of the next generation by providing UPI & card payments to everyone above 11 years old. Our flagship Spending Account, FamX, seamlessly integrates UPI and card payments, enabling users to manage, save, and learn about their finances effortlessly.

Revolutionizing Payments and FinTech
FamApp has enabled 10 million+ users to make UPI and card payments across India, removing the inconvenience of carrying cash everywhere. Users get to customise their FamX card with doodles, which lets them add a personal touch to their payments.

Trusted by leading investors
We're proud to be supported by renowned investors like Elevation Capital, Y-Combinator, Peak XV (formerly Sequoia Capital India), Venture Highway, Global Founder's Capital, and esteemed angels Kunal Shah and Amrish Rao.

Join Our Dynamic Team
At Fam, our people-first approach is reflected in our generous leave policies, flexible work schedules, comprehensive health benefits, and free mental health sessions. We don't mean to brag, but we promise you'll be surrounded by some of the most fun, talented and passionate people in the startup space. Want to see what makes life at Fam so awesome? Check out our shenanigans at ????
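The event-driven systems called out in this role rest on topic-based publish/subscribe. Here is a deliberately simplified in-memory sketch of that pattern; a real broker like Kafka or RabbitMQ adds persistence, partitioning, consumer groups, and delivery guarantees that this toy version omits.

```python
from collections import defaultdict, deque

# In-memory sketch of topic-based publish/subscribe, the core pattern
# behind brokers like Kafka or RabbitMQ. Not a real broker: no
# persistence, partitions, or delivery guarantees.
class Bus:
    def __init__(self):
        self._queues = defaultdict(list)   # topic -> subscriber queues

    def subscribe(self, topic):
        """Register a new subscriber and return its private queue."""
        q = deque()
        self._queues[topic].append(q)
        return q

    def publish(self, topic, event):
        """Fan the event out to every subscriber of the topic."""
        for q in self._queues[topic]:
            q.append(event)

# Usage: two independent consumers of the same topic each receive
# every published event (topic name and payload are illustrative).
bus = Bus()
payments = bus.subscribe("payments")
audit = bus.subscribe("payments")
bus.publish("payments", {"txn": "T1", "amount": 250})
```

The key property, which carries over to real brokers, is that producers and consumers are decoupled: the publisher neither knows nor cares how many services consume the "payments" stream.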

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requisition ID # 25WD90895

Position Overview
We are seeking talented engineers to join our Horizontal Capabilities Team. The team is responsible for building the core frameworks, shared services, reusable components, and productivity tools that enable faster and more reliable delivery across all product lines. This includes designing and implementing common modules, frameworks, and automation solutions that eliminate duplication and allow product teams to focus on business innovation. Our focus is on creating scalable and extensible systems that serve as building blocks for product teams. By joining this team, you will play a key role in strengthening the foundation of all applications and enabling consistent, faster, and higher-quality delivery.

Location
Bengaluru, IN (Hybrid)

Responsibilities
- Design and develop scalable, highly available, and fault-tolerant applications across the full stack using React on the frontend and Node.js / Java / Go on the backend
- Work with relational and NoSQL databases such as Postgres, MySQL, and DynamoDB
- Implement automation and CI/CD pipelines to ensure fast, reliable, and repeatable deployments using Jenkins, GitHub Actions, or similar tools, along with infrastructure-as-code tools like Terraform and AWS CDK
- Develop and maintain automated test suites - unit, integration, and E2E
- Work with messaging systems such as Kafka or similar to enable event-driven architectures
- Build and maintain ETL pipelines and data workflows for ingesting, transforming, and analyzing large datasets
- Design schemas, optimize queries, and manage workloads for analytical systems such as Amazon Redshift, BigQuery, Snowflake, or ClickHouse
- Monitor, troubleshoot, and optimize for speed, scalability, and uptime
- Participate in code reviews, mentor junior engineers, and engage in architectural discussions
- Participate in the on-call rotation to support production systems

Preferred Qualifications
- 5+ years of professional software development experience with full-stack applications
- Proficiency in React.js and at least one of Node.js, Java, or Go
- Strong understanding of distributed systems and microservices architecture
- AWS cloud experience preferred
- Strong experience with both relational and NoSQL databases (PostgreSQL, MySQL, DynamoDB, etc.)
- Experience with Kafka or similar messaging systems for distributed data processing
- Good understanding of data engineering workflows and ETL processes
- Strong experience with CI/CD tools and infrastructure-as-code tools like Terraform or AWS CDK
- Familiarity with automation and testing frameworks
- Excellent problem-solving skills and ability to work in a collaborative team environment

#LI-RV1

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!

Salary transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here:

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have hands-on experience working with columnar databases (e.g., ClickHouse), relational databases (e.g., MySQL, PostgreSQL), and document databases (e.g., MongoDB). Proven expertise in setting up, configuring, scaling, and maintaining ClickHouse and MongoDB clusters is essential, along with a strong background in schema design tailored to specific use cases for optimal performance and scalability. You must be able to optimize database performance through query tuning, indexing strategies, and resource management. Familiarity with backup, disaster recovery, and monitoring best practices for maintaining high-availability database environments is also crucial.

Posted 2 weeks ago

Apply

13.0 - 23.0 years

0 Lacs

Karnataka

On-site

As a skilled Database Administrator with 2-3 years of hands-on experience, you will be responsible for managing and optimizing relational database systems, focusing on MariaDB and ClickHouse. Your primary tasks will include ensuring high availability, performance, and security of mission-critical databases, as well as supporting automation, deployment, and troubleshooting efforts.

Your responsibilities will include installing, configuring, and maintaining ClickHouse in both development and production environments. You will support and manage at least one other RDBMS such as PostgreSQL, MySQL, or SQL Server. Designing and implementing High Availability (HA) and Disaster Recovery (DR) solutions for ClickHouse, optimizing query performance, automating database administration tasks, setting up monitoring, and implementing backup and recovery processes will be crucial aspects of your role. Collaborating with engineering teams to build scalable data solutions, troubleshooting performance issues, documenting system configurations, and ensuring compliance with security and data governance policies are also part of your responsibilities. Additionally, you will participate in disaster recovery testing and planning.

To excel in this role, you should have 1-3 years of experience managing ClickHouse and other RDBMS, proficiency in SQL scripting and performance tuning, experience with replication, sharding, or clustering, familiarity with monitoring tools, proficiency in scripting for automation, exposure to CI/CD pipelines and DevOps tools, an understanding of database security, networking, and compliance standards, and strong analytical thinking, communication, and problem-solving skills.

If you are a proactive and experienced professional looking to contribute your expertise to a dynamic team, we encourage you to apply for this Database Administrator position.

Posted 2 weeks ago

Apply

5.0 - 13.0 years

0 Lacs

Karnataka

On-site

As a Senior Staff Site Reliability Engineer at SolarWinds, you will be instrumental in enhancing the reliability and performance of the SolarWinds Observability Platform. You will work closely with engineering teams to manage and reduce SaaS backlogs, ensuring the platform scales efficiently while upholding the highest standards of reliability and performance. Your ability to drive initiatives, provide technical leadership, and optimize complex systems will be critical to our success.

Your responsibilities will include leading strategic initiatives to improve the reliability, scalability, and performance of the SolarWinds Observability Platform, with a specific emphasis on reducing SaaS backlogs. You will collaborate with cross-functional teams to identify, prioritize, and resolve outstanding backlog items, including incidents, infrastructure enhancements, performance optimization, and automation. You will also lead the development of automation strategies and observability tooling to improve platform monitoring, reduce incidents, and deepen performance insights across the infrastructure.

In addition, you will lead response efforts for production incidents, conduct comprehensive postmortems, drive continuous improvement initiatives, and ensure the team learns from each incident. You will drive platform engineering initiatives and scale infrastructure systems to meet the reliability and performance standards the SolarWinds Observability Platform requires, while mentoring the Site Reliability Engineering (SRE) team to help them grow their skills and fostering a culture of continuous learning and collaboration.

To excel in this position, you should have over 13 years of experience in Site Reliability Engineering, Platform Engineering, or related roles, with substantial experience managing SaaS environments. This includes over 8 years designing, building, and maintaining AWS/Azure infrastructure using Terraform and automation tools, and over 5 years building, operating, and scaling Kubernetes clusters in production. Strong familiarity with observability tools and practices (monitoring, logging, tracing, and metrics) for high-performance systems is essential, along with proficiency in Kafka for real-time data processing, ClickHouse for OLAP workloads, and GitOps CI/CD processes. Experience with Karpenter for Kubernetes autoscaling and Buf for managing Protocol Buffers at scale would be advantageous.

Proficiency in programming languages such as Python, Go, and Bash is necessary, as is knowledge of security best practices for cloud-native environments, including encryption, key management, and security policies. Experience mentoring and growing technical teams to encourage a culture of collaboration and continuous learning is also preferred.

If you are a proactive individual who excels in a challenging, fast-paced environment and is driven to make a significant impact, we invite you to apply for this exciting opportunity at SolarWinds. Join us in our mission to deliver exceptional solutions and grow together as part of an outstanding team. All applications will be handled in compliance with the SolarWinds Privacy Notice.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

10 - 16 Lacs

Thiruvananthapuram

Work from Office

Job Description - Senior Full-Stack Developer

Position: Senior Full-Stack Developer
Experience: 6+ years
Location: Trivandrum
Employment Type: Full-time

About the Role
We are seeking a highly skilled Senior Full-Stack Developer with proven expertise in building scalable, production-grade web applications. The ideal candidate will be adept at architecting and implementing robust backend systems, developing high-performance frontend applications, and handling real-time data workflows. This role requires hands-on technical expertise, strong problem-solving skills, and the ability to collaborate effectively across teams. You will be a key contributor in designing and developing data-intensive applications, managing sessions, enabling real-time analytics, implementing bulk operations, and delivering seamless user experiences. Additionally, your experience in browser extension development will be highly valued.

Key Responsibilities

Application Development & Architecture
- Design, develop, and deploy scalable backend services using Node.js (ES6+), Express.js, and MongoDB.
- Architect and maintain high-performance APIs, data pipelines, and real-time processing systems leveraging Redis, RabbitMQ, and ClickHouse.
- Build and maintain modern frontend applications using Vue 3 (Options API), Quasar v2, Vite, and Pinia.

Feature Development & Optimization
- Develop interactive, user-friendly, and responsive UIs with smooth navigation using Vue Router 4.
- Implement session management, authentication flows, and role-based access control.
- Handle bulk operations and large datasets efficiently.
- Build real-time dashboards and analytics features.

Browser Extensions
- Develop and maintain Chrome extensions using Quasar BEX to extend application functionality directly within the browser.

Workflow & Tools
- Integrate APIs and external services with Ky for streamlined communication.
- Manage date/time and localization with MomentJS.
- Create interactive onboarding and guided user flows with shepherd.js.
- Implement drag-and-drop functionality using vuedraggable.
- Parse and process large CSV datasets with papaparse.

Collaboration & Leadership
- Work closely with cross-functional teams to translate business requirements into technical solutions.
- Provide technical leadership, mentoring, and code reviews for junior developers.
- Ensure best practices in code quality, testing, performance optimization, and documentation.
- Collaborate with stakeholders and non-technical team members to align product development with business goals.

Key Skills & Technologies
- Backend: Node.js (ES6+), Express.js, MongoDB, Redis, RabbitMQ, ClickHouse
- Frontend: Vue 3 (Options API), Quasar v2, Vite, Pinia, Vue Router 4
- Tools & Libraries: Ky, MomentJS, shepherd.js, vuedraggable, papaparse
- Other Expertise: Chrome Extension Development (Quasar BEX), real-time analytics, session management, bulk operations

Qualifications & Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 6+ years of proven experience as a Full-Stack Developer delivering production-ready applications.
- Strong understanding of data-driven workflows, distributed systems, and real-time processing.
- Demonstrated ability to work independently with minimal guidance while also thriving in team environments.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration skills, with the ability to interact effectively with both technical and non-technical stakeholders.

Preferred Qualifications
- Experience working with high-traffic, enterprise-scale applications.
- Knowledge of microservices architecture and containerization (Docker/Kubernetes).
- Familiarity with CI/CD pipelines and automated testing frameworks.
- Exposure to cloud platforms (AWS, GCP, Azure).

Posted 3 weeks ago

Apply

6.0 - 8.0 years

12 - 18 Lacs

thiruvananthapuram

Work from Office

Skill Set:Node.js (ES6+), Express.js, MongoDB, Redis, RabbitMQ, ClickHouse, Vue 3 (Options API), Quasar v2, Vite, Pinia, Vue Router 4, Ky, MomentJS, shepherd.js, vuedraggable, papaparse, Chrome Extension Development (Quasar BEX)

Posted 3 weeks ago

Apply

10.0 - 17.0 years

15 - 30 Lacs

mangaluru, bengaluru

Work from Office

Role & responsibilities:
- Design and lead technical architecture for secure, multi-tenant systems.
- Architect resilient microservices using Node.js, Python, and Azure.
- Integrate OpenAI LLMs and Kore.ai conversational AI workflows for intelligent experiences.
- Guide prompt engineering strategy and LLM fine-tuning aligned with product goals.
- Drive implementation of high-throughput messaging and data systems using Redis, RabbitMQ, ClickHouse, VectorDB, and MongoDB.
- Manage configuration and performance tuning of APISIX (API gateway) and Nginx (reverse proxy).
- Lead design for context-aware session management and semantic relevance frameworks.
- Define secure architectural patterns with RBAC, token lifecycle management, and deep observability.
- Collaborate with and mentor backend, frontend, and DevOps teams to ensure system alignment.
- Establish and maintain architectural documentation and technical standards.

Preferred candidate profile:
- Deep experience in cloud-native architecture and microservices design.
- Hands-on skills with Node.js, Python, React, and Azure.
- Proven success in LLM integration with tools like OpenAI and Kore.ai.
- Strong grasp of messaging, caching, and high-performance data pipelines.
- Familiarity with APISIX, reverse proxies, and distributed-systems security.
- Experience with vector databases, time-series storage, and semantic search.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

noida, uttar pradesh

On-site

You will contribute to the development of software applications at Trackier using GoLang. Your responsibilities will include identifying key tasks, prioritizing them, and delivering them throughout the software development life cycle. Troubleshooting, debugging, and upgrading existing software will also be part of your role. You will be expected to stay updated with the latest trends and technologies to enhance Trackier's products and processes, and to mentor junior software engineers and guide their development. Your tasks will involve designing and writing services in GoLang to improve the availability, scalability, latency, and efficiency of services. You will be responsible for designing, building, analyzing, and fixing large-scale systems. Continuous improvement and innovation to enhance code quality and product performance will be vital, as will effective collaboration and communication with teams across multiple departments.

To be successful in this role, you should be proficient in at least one of the following: Node.js (JavaScript/TypeScript), Golang, Python, PHP, or Java. A minimum of 2-3 years of experience building software applications is necessary. Familiarity with operating systems such as Linux, macOS, and Windows is expected. Experience with private and public API design, cross-service integrations, distributed-systems challenges, microservice-based architectures, and asynchronous communication will be beneficial. Familiarity with a range of database technologies, including SQL (e.g., MySQL, PostgreSQL, ClickHouse) and NoSQL (e.g., MongoDB, Cassandra) options, is required. Knowledge of Docker, Kubernetes, AWS/GCP services, and CI/CD tools such as GitHub Actions and Jenkins, along with experience running production distributed systems with microservices architecture and RESTful services, is important.

Experience using Git in a collaborative setting and deploying production applications with AWS/GCP will be advantageous. Technologies used at Trackier include Golang, Node.js (JavaScript/TypeScript), PHP, MongoDB, ClickHouseDB, Redis, Memcache, Google Cloud Platform, Docker, Kubernetes, and Jenkins. In return, you will benefit from medical insurance, a 5-day working culture, a best-in-industry salary structure, and a lucrative reimbursement policy.

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

delhi

On-site

We are seeking a skilled and enthusiastic IoT Software Engineer to become a valuable member of our rapidly expanding team. Your role will involve playing a key part in constructing scalable backend systems, managing substantial datasets in real-time, and collaborating across a contemporary data and cloud stack. As an IoT Software Engineer, you will contribute to the development and enhancement of various aspects of our technological infrastructure. The ideal candidate for this role should possess the following qualifications and skills: - Proficiency in programming languages such as JavaScript/TypeScript (Express.js, Next.js) and Go. - Previous experience with NoSQL databases like MongoDB and columnar databases such as ClickHouse. - Thorough knowledge of SQL, including expertise in query optimization and analysis of extensive datasets. Ability to construct and sustain ETL pipelines. - Familiarity with geospatial queries and integration of Google Maps APIs. - Hands-on experience with Kafka for real-time data streaming and Redis for caching/queuing purposes. - Strong grasp of system design principles and distributed systems. - Previous exposure to data visualization tools like Superset or Metabase. - Familiarity with logging and monitoring tools like Datadog, Grafana, or Prometheus. Additionally, the following attributes are required for this position: - Prior experience in handling IoT data originating from embedded or telemetry systems. Desirable skills that would be advantageous for this role include: - Knowledge of Docker and experience in deploying containerized pipelines. - Background in edge computing or low-latency systems would be considered a plus. This is a full-time position located in Delhi, India. The preferred candidate should have a minimum of 7 years of relevant experience and be fluent in English. Candidates from any location in India are welcome to apply but must be willing to relocate to Delhi. 
If you meet the specified requirements and are excited about this opportunity, please submit your resume to deepali@xlit.co.

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

hyderabad, telangana

On-site

We are looking for an experienced ClickHouse Administrator to manage the complete lifecycle of on-prem ClickHouse clusters, including architecture and deployment, security implementation, monitoring, and disaster recovery. You will work closely with cross-functional teams to ensure high availability, scalability, and compliance.

Key responsibilities:
- Design and implement on-prem ClickHouse deployments with high availability and scalability.
- Install, configure, and upgrade ClickHouse servers and client tools.
- Define and enforce security policies.
- Monitor cluster health and handle capacity planning and performance tuning.
- Develop backup and disaster recovery strategies.
- Collaborate with networking, storage, and security teams on compliance requirements.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or equivalent.
- At least 5 years of database administration experience, including a minimum of 1 year working with ClickHouse.
- Strong Linux administration skills and proficiency in Bash and/or Python scripting.
- Knowledge of networking, storage, and virtualization, as well as familiarity with security frameworks.
- Experience with Kubernetes/Docker and a ClickHouse Certified Developer certification would be a bonus.

Nice to have:
- Experience with monitoring stacks such as Prometheus and Grafana.
- Hands-on experience with configuration management tools such as Ansible, Chef, or Puppet.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At SolarWinds, we're a people-first company. Our purpose is to enrich the lives of the people we serve, including our employees, customers, shareholders, partners, and communities. Join us in our mission to help customers accelerate business transformation with simple, powerful, and secure solutions.

The ideal candidate thrives in an innovative, fast-paced environment and is collaborative, accountable, ready, and empathetic. We're looking for individuals who believe they can accomplish more as a team and create lasting growth for themselves and others. We hire based on attitude, competency, and commitment. Solarians are ready to advance our world-class solutions in a fast-paced environment and accept the challenge to lead with purpose. If you're looking to build your career with an exceptional team, you've come to the right place. Join SolarWinds and grow with us!

About the Team
The Observability Platform team at SolarWinds develops the core services and APIs that power our next-generation observability products. The Telemetry & Data APIs team focuses on building scalable APIs and backend systems that allow internal teams and customers to retrieve, query, and analyze vast volumes of telemetry data in real time.

We're looking for a Senior Software Engineer to join our Telemetry and Data APIs team, building scalable APIs and services that power customer-facing telemetry features in our platform. This role is ideal for engineers who enjoy working with data-heavy systems, API design, and optimizing data queries, not for those seeking traditional ETL or pure data science work. You will design and maintain systems that ingest, process, and expose telemetry data (metrics, logs, traces) through well-designed APIs, enabling customers to understand and act on their data efficiently.

What You'll Do:
- Design, build, and maintain REST and GraphQL APIs that expose telemetry data to customers.
- Write and optimize ClickHouse queries for high-performance telemetry data retrieval.
- Build scalable backend services using Java or Kotlin with Spring Boot.
- Collaborate with product and front-end teams to deliver intuitive telemetry features.
- Ensure systems are observable, reliable, secure, and easy to operate in production.
- Participate in code reviews and design discussions, mentoring others where applicable.

What We're Looking For:
- 5+ years of software engineering experience building scalable backend services.
- Proficiency in Java or Kotlin; experience with Spring/Spring Boot frameworks.
- Experience designing and building RESTful and/or GraphQL APIs.
- Comfort writing and optimizing SQL queries (ClickHouse experience a plus).
- Familiarity with TypeScript/JavaScript and the ability to navigate front-end code if needed.
- Understanding of cloud environments (AWS, Azure, GCP) and container orchestration (Kubernetes).
- Strong grasp of system design, data structures, and algorithms.

Nice to Have:
- Experience with time-series data, telemetry systems, or observability platforms.
- Exposure to GraphQL server implementation and schema design.
- Experience in SaaS environments with high-scale data workloads.
- Familiarity with modern CI/CD practices and DevOps tooling.

SolarWinds is an Equal Employment Opportunity Employer. SolarWinds will consider all qualified applicants for employment without regard to race, color, religion, sex, age, national origin, sexual orientation, gender identity, marital status, disability, veteran status, or any other characteristic protected by law. All applications are treated in accordance with the SolarWinds Privacy Notice: https://www.solarwinds.com/applicant-privacy-notice
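The high-performance telemetry retrieval work described above typically boils down to time-bucketed aggregation over raw data points. Below is a minimal, hedged sketch of that query shape using SQLite from Python's standard library as a runnable stand-in; the `telemetry` table and sample values are invented for illustration, and in ClickHouse the bucketing would normally use a function such as `toStartOfMinute` rather than integer arithmetic.

```python
import sqlite3

# Sketch of a telemetry rollup: bucket raw points into 60-second
# windows and compute a per-window average for each metric.
# Table name, columns, and data are hypothetical examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (ts INTEGER, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO telemetry VALUES (?, ?, ?)",
    [
        (0, "cpu", 10.0),
        (30, "cpu", 20.0),   # same 60-second bucket as ts=0
        (65, "cpu", 40.0),   # falls in the next bucket
        (10, "mem", 50.0),
    ],
)

# Integer division groups timestamps into 60-second buckets; a
# ClickHouse equivalent would use toStartOfMinute(ts).
rows = conn.execute(
    """
    SELECT (ts / 60) * 60 AS bucket, metric, AVG(value) AS avg_value
    FROM telemetry
    GROUP BY bucket, metric
    ORDER BY metric, bucket
    """
).fetchall()
print(rows)  # [(0, 'cpu', 15.0), (60, 'cpu', 40.0), (0, 'mem', 50.0)]
```

Grouping by a computed bucket rather than the raw timestamp is what keeps the result set small enough to serve through an API, regardless of how many raw points land in each window.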

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You are a skilled Full Stack Developer joining our Product Engineering team. In this role, you will design and develop robust applications, collaborate with multi-location teams, and deliver high-quality solutions within agreed timelines. Staying current with new technologies and design principles is essential to succeed in this position.

Your responsibilities will include designing and developing technical solutions based on requirements, building and maintaining enterprise-grade SaaS software using Agile methodologies, contributing to performance tuning and optimization efforts, executing comprehensive unit tests for product components, participating in peer code reviews, and championing high quality, scalability, and timely project completion. You will use technologies such as Golang/Core Java, J2EE, Struts, Spring, client-side scripting, Hibernate, and various databases to build scalable core-Java applications, web applications, and web services.

To qualify for this role, you should have a Bachelor's degree in Engineering, Computer Science, or equivalent experience. You must also possess a solid understanding of data structures, algorithms, and their applications; hands-on experience with Looker APIs, dashboards, and LookML; and strong problem-solving skills and analytical reasoning. Experience building microservices with Golang/Spring Boot, developing and consuming REST APIs, profiling applications, using at least one front-end framework (e.g., Angular or Vue), and familiarity with basic SQL queries are required. Excellent written and verbal communication and presentation skills, a good understanding of the Software Development Life Cycle (SDLC), and proven software development experience with Java Spring Boot, Kafka, SQL, Linux, Apache, and Redis are essential. Experience with AWS cloud technologies, as well as Go, Python, MongoDB, Postgres, and ClickHouse, would be a plus.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a passionate full-stack developer, you will have the opportunity to join a team that is creating a SaaS platform for Enterprise Cloud. In this role, you will be responsible for designing and implementing next-gen multi-cloud features in a fast-paced, agile environment. Nutanix Cloud Manager (NCM) Cost Governance, a SaaS offering by Nutanix, aims to provide organizations with visibility into their hybrid multi-cloud spending. The team led by the Manager of Engineering at NCM - Cost Governance is committed to making fin-ops easier for end-users and offering insights on running an optimized infrastructure. You will work alongside a team that is eager to build a fin-ops Platform as part of Nutanix's vision to simplify Multi-Cloud & Hybrid-Cloud management. The team values self-initiative, ownership, and enthusiasm in building exceptional products. Your role will involve being part of the development team to build web-scale SaaS products, with a focus on application development using Java & JavaScript. You will translate requirements into design specifications, implement new features, troubleshoot and resolve issues, mentor junior developers/interns, and enhance performance and scalability of internal components. To excel in this role, you should bring 3-4 years of software development experience, hands-on expertise in backend using Java (Spring/Spring Boot framework) and frontend using JavaScript (Angular, React, or Vue framework), along with proficiency in version control / DevOps tools. Additionally, knowledge of SQL or NoSQL databases, strong problem-solving skills, and a willingness to learn new technologies are essential. A background in computer science or a related field is preferred. Desirable skills include hands-on experience with Python, Go, knowledge of web application security, and development experience in building distributed systems/micro-services on public/private clouds. 
Familiarity with distributed data management concepts and with the design and implementation trade-offs involved in building high-performance, fault-tolerant distributed systems is a plus. This role offers a hybrid work environment, combining the benefits of remote work with in-person collaboration. Most roles require a minimum of 3 days per week in the office, with specific guidance provided by your manager based on team requirements.

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Pune, Maharashtra, India

On-site

We are seeking a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, and Python. You will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. This role requires expertise in data modeling and container orchestration, plus close collaboration with data scientists and analysts to ensure data quality and meet business needs.

Roles & Responsibilities:
- Design, build, and maintain scalable, efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics, including data modeling, query optimization, and performance tuning.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve the performance of data infrastructure.

Skills Required:
- Strong experience in ClickHouse (data modeling, query optimization, performance tuning).
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL (schema design, indexing, and performance).
- Solid knowledge of Kubernetes (managing containers, deployments, and scaling).
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of data warehousing and distributed data systems is a plus.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana is a plus.
QUALIFICATION: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
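The SQL expertise listed above calls out window functions specifically. A common interview-style task of that kind is "latest reading per group". Here is a minimal, hedged sketch using `ROW_NUMBER()` with SQLite (3.25+) from Python's standard library as a runnable stand-in; the `readings` table, columns, and sample data are invented for the example, and the same query shape works in PostgreSQL and ClickHouse.

```python
import sqlite3

# Sketch of a "latest reading per sensor" query using a window
# function. Rank rows within each sensor partition by recency,
# then keep only the newest row per sensor.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("a", 1, 1.5), ("a", 2, 2.5), ("b", 1, 9.0)],
)

latest = conn.execute(
    """
    SELECT sensor, ts, value FROM (
        SELECT sensor, ts, value,
               ROW_NUMBER() OVER (
                   PARTITION BY sensor ORDER BY ts DESC
               ) AS rn
        FROM readings
    )
    WHERE rn = 1
    ORDER BY sensor
    """
).fetchall()
print(latest)  # [('a', 2, 2.5), ('b', 1, 9.0)]
```

Compared with a correlated subquery or a self-join on MAX(ts), the window-function form scans the table once, which is usually the better-performing choice on large analytical tables.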

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Remote, India

On-site

We are seeking a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, and Python. You will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. This role requires expertise in data modeling and container orchestration, plus close collaboration with data scientists and analysts to ensure data quality and meet business needs.

Roles & Responsibilities:
- Design, build, and maintain scalable, efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics, including data modeling, query optimization, and performance tuning.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve the performance of data infrastructure.

Skills Required:
- Strong experience in ClickHouse (data modeling, query optimization, performance tuning).
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL (schema design, indexing, and performance).
- Solid knowledge of Kubernetes (managing containers, deployments, and scaling).
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of data warehousing and distributed data systems is a plus.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana is a plus.
QUALIFICATION: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, and Python. You will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. This role requires expertise in data modeling and container orchestration, plus close collaboration with data scientists and analysts to ensure data quality and meet business needs.

Roles & Responsibilities:
- Design, build, and maintain scalable, efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics, including data modeling, query optimization, and performance tuning.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve the performance of data infrastructure.

Skills Required:
- Strong experience in ClickHouse (data modeling, query optimization, performance tuning).
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL (schema design, indexing, and performance).
- Solid knowledge of Kubernetes (managing containers, deployments, and scaling).
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of data warehousing and distributed data systems is a plus.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana is a plus.
QUALIFICATION: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Gurgaon, Haryana, India

On-site

We are seeking a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, and Python. You will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services. This role requires expertise in data modeling and container orchestration, plus close collaboration with data scientists and analysts to ensure data quality and meet business needs.

Roles & Responsibilities:
- Design, build, and maintain scalable, efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics, including data modeling, query optimization, and performance tuning.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve the performance of data infrastructure.

Skills Required:
- Strong experience in ClickHouse (data modeling, query optimization, performance tuning).
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL (schema design, indexing, and performance).
- Solid knowledge of Kubernetes (managing containers, deployments, and scaling).
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of data warehousing and distributed data systems is a plus.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana is a plus.
QUALIFICATION: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.

Posted 1 month ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies