
100 ClickHouse Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

bangalore, karnataka

On-site

As a Senior Software Engineer on the Telemetry and Data APIs team at SolarWinds, you will play a crucial role in designing and maintaining REST and GraphQL APIs that expose telemetry data to customers. Your responsibilities will include writing and optimizing ClickHouse queries for high-performance data retrieval, building scalable backend services using Java or Kotlin with Spring Boot, and collaborating with product and front-end teams to deliver intuitive telemetry features. Additionally, you will ensure that systems are observable, reliable, secure, and easy to operate in production, while participating in code reviews and design discussions and mentoring others where applicable.

**Key Responsibilities:**
- Design, build, and maintain REST and GraphQL APIs for customer-facing telemetry features.
- Write and optimize ClickHouse queries for high-performance telemetry data retrieval.
- Develop scalable backend services using Java or Kotlin with Spring Boot.
- Collaborate with product and front-end teams to deliver intuitive telemetry features.
- Ensure systems are observable, reliable, secure, and easy to operate in production.
- Participate in code reviews and design discussions, mentoring others where applicable.

**Qualifications Required:**
- 5+ years of software engineering experience building scalable backend services.
- Proficiency in Java or Kotlin; experience with Spring/Spring Boot frameworks.
- Experience designing and building RESTful and/or GraphQL APIs.
- Comfort writing and optimizing SQL queries (ClickHouse experience a plus).
- Familiarity with TypeScript/JavaScript and the ability to navigate front-end code if needed.
- Understanding of cloud environments (AWS, Azure, GCP) and container orchestration (Kubernetes).
- Strong grasp of system design, data structures, and algorithms.

As a bonus, experience with time-series data, telemetry systems, or observability platforms; exposure to GraphQL server implementation and schema design; experience in SaaS environments with high-scale data workloads; and familiarity with modern CI/CD practices and DevOps tooling will be considered advantageous. All applications are treated in accordance with the SolarWinds Privacy Notice.
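For a concrete picture of the ClickHouse work this role describes, here is a minimal sketch of a server-side telemetry aggregation issued via the clickhouse-connect Python client (Python is used here for brevity; the role itself is Java/Kotlin). The table name, columns, and time window are illustrative assumptions, not details from the listing.

```python
# Minimal sketch: a server-side telemetry aggregation in ClickHouse, issued
# from Python via the clickhouse-connect client. The table name, columns,
# and time window are illustrative assumptions, not details from the listing.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# Aggregating on the server and filtering on the order-by columns lets
# ClickHouse prune data parts instead of shipping raw rows to the caller.
result = client.query(
    """
    SELECT
        toStartOfMinute(ts) AS minute,
        host,
        avg(value) AS avg_value,
        max(value) AS max_value
    FROM telemetry.metrics
    WHERE ts >= now() - INTERVAL 1 HOUR
    GROUP BY minute, host
    ORDER BY minute, host
    """
)
for minute, host, avg_value, max_value in result.result_rows:
    print(minute, host, avg_value, max_value)
```

Pushing the GROUP BY into ClickHouse rather than aggregating in the application layer is what keeps this kind of query fast at telemetry scale.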

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Forward Deployed Software Engineer at Arcana, you will be embedded on the frontlines of high-stakes financial technology, offering you a rocketship growth opportunity. Our team at Arcana specializes in building institutional-grade analytics for leading hedge funds and asset managers. We focus on designing and deploying backend systems that power mission-critical performance and risk analysis. Your role will involve owning the full lifecycle of backend systems, from design and prototyping to deployment and scaling.

Key Responsibilities:
- Work directly with portfolio managers, risk managers, Chief Investment Officers, and analysts from the world's top hedge funds and asset managers.
- Partner closely with product, engineering, and client stakeholders to design and deploy backend services for high-stakes analytics.
- Architect and scale compute- and I/O-intensive Python systems, optimizing for throughput, concurrency, and reliability.
- Build and optimize data layers (Redis, PostgreSQL, ClickHouse) to handle large volumes with speed and integrity.
- Take ownership across the full SDLC: system design, coding, testing, CI/CD, and operational excellence.
- Embed deeply with users to align technical solutions with real institutional workflows and business needs.
- Champion clear communication and cross-functional collaboration to ensure alignment and impact.

Qualifications Required:
- Bachelor's or Master's in CS/Engineering from a top IIT/NIT (Tier 1) with an 8.5+ GPA.
- Top 10% academic and professional performance.
- 3+ years of backend experience building production systems in Python.
- Deep expertise in data structures, algorithms, distributed systems, and performance optimization.
- Proven success scaling high-concurrency, high-throughput applications.
- Familiarity with Redis, PostgreSQL, ClickHouse; fintech or mission-critical domain experience is a plus.
- Exceptional communication and stakeholder management skills.
- Comfort with CI/CD and operational ownership of live services.

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

coimbatore, tamil nadu

On-site

Join our dynamic team at the forefront of cutting-edge technology as we look for a seasoned Staff/Lead Backend Engineer for our Coimbatore office. You will embark on a journey where your deep-rooted expertise in computer science fundamentals, alongside an intricate understanding of data structures, algorithms, and system design, becomes the cornerstone of innovative solutions. This pivotal role demands proficiency in developing and elevating compute- and I/O-intensive applications, ensuring peak performance and unwavering reliability.

Responsibilities:
- Architect, refine, and escalate the capabilities of complex backend systems using Python, focusing on efficiency, durability, and scale.
- Elevate application performance, optimizing for speed, scalability, and resource allocation.
- Forge robust methodologies to manage high concurrency and vast data volumes, setting new industry benchmarks (see the concurrency sketch after this posting).
- Collaborate closely with engineering and product peers to crystallize requirements into resilient, scalable architectures.
- Demonstrate proficiency with advanced storage solutions and databases like Redis, PostgreSQL, and ClickHouse, enhancing system integrity.
- Champion coding excellence, testing rigor, and deployment precision, driving best practices across the development lifecycle.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of experience in backend development with Python in a production environment.
- Proven experience in scaling compute- and I/O-intensive applications.
- Strong foundation in computer science, with a deep understanding of data structures, algorithms, and system design principles.
- Experience in handling concurrent requests at scale and optimizing large-scale systems for performance and reliability.
- Familiarity with database technologies such as Redis, PostgreSQL, and ClickHouse.
- Experience in the financial sector, particularly in developing fintech applications or systems, is a plus.
- Solid understanding of the software development life cycle, continuous integration, and continuous delivery (CI/CD) practices.
- Excellent problem-solving abilities and strong communication skills.
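As an illustration of the high-concurrency Python work this posting describes, here is a minimal asyncio sketch that bounds in-flight I/O with a semaphore; the endpoint list and the limit of 100 are illustrative assumptions, not details from the listing.

```python
# Minimal sketch: bounding concurrency for I/O-heavy Python work with asyncio.
# The endpoint list and the limit of 100 are illustrative assumptions.
import asyncio

import aiohttp


async def fetch(session: aiohttp.ClientSession, sem: asyncio.Semaphore, url: str) -> int:
    # The semaphore caps in-flight requests so a large batch cannot exhaust
    # sockets or overwhelm the upstream service.
    async with sem:
        async with session.get(url) as resp:
            await resp.read()
            return resp.status


async def main(urls: list[str]) -> None:
    sem = asyncio.Semaphore(100)  # at most 100 concurrent requests
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(*(fetch(session, sem, u) for u in urls))
    print(statuses)


if __name__ == "__main__":
    asyncio.run(main(["https://example.com"] * 10))
```

Bounding concurrency this way trades a little peak throughput for predictable resource use, which is usually the right call in production services.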

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Arista Networks is an industry leader in data-driven, client-to-cloud networking for large data center, campus, and routing environments. Arista is a well-established and profitable company with over $8 billion in revenue. Arista's award-winning platforms, ranging in Ethernet speeds up to 800G bits per second, redefine scalability, agility, and resilience. Arista is a founding member of the Ultra Ethernet Consortium. We have shipped over 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Arista is committed to open standards, and its products are available worldwide directly and through partners.

At Arista, we value the diversity of thought and perspectives each employee brings. We believe fostering an inclusive environment where individuals from various backgrounds and experiences feel welcome is essential for driving creativity and innovation. Our commitment to excellence has earned us several prestigious awards, such as the Great Place to Work Survey for Best Engineering Team and Best Company for Diversity, Compensation, and Work-Life Balance. At Arista, we take pride in our track record of success and strive to maintain the highest quality and performance standards in everything we do.

Job Description

Who You'll Work With
SREs at Arista combine strong software and systems engineering with a passion for operating production systems at scale. As an SRE, you'll be part of the team responsible for our global service fleet.

What You'll Do
CloudVision is deployed on Kubernetes across global regions, using Spinnaker for our CI/CD pipeline. Our tech stack runs on GKE, using HBase/Hadoop as the main distributed database and storage layer, ElasticSearch for powering search data, ClickHouse for fast real-time queries of flow data, our own Kafka-based distributed real-time stream processing layer for analytics, and TensorFlow for ML analysis. Our monitoring system is built on top of Prometheus, Grafana, Loki, and other OSS tools.

As a Senior SRE, you'll be responsible for our global CloudVision service fleet. This includes:
- Build, deploy (safely and incrementally), and operate critical production systems with a focus on scalability, reliability, observability, performance, and security.
- Monitor, support, and enhance the product deployment experience across services.
- Build automation to remove toil and efficiently operate production systems.
- Proactively monitor, respond to, and enhance alerts, and set up automated alert handling.
- Create and maintain incident response runbooks.
- Build and deploy new systems with scalability, reliability, and observability as primary requirements.
- Triage platform/infrastructure issues and help Arista software engineers in their triages. Engage with third-party vendor support.
- Deploy new systems in a staged manner.
- Write postmortem documents and build solutions to prevent incidents from repeating.
- Plan and communicate maintenance windows on production systems.
- Work with Arista's product development teams to identify infrastructure issues that are causing bottlenecks and limitations in their workflows; design and implement solutions to resolve them.
- Survey and adopt best practices around infrastructure/platform to maintain secure, scalable, and fault-tolerant systems.
- Implement solutions to scale the systems.
- Implement fault tolerance and performance improvements to increase system availability.
- Study the design and sufficient implementation details of OSS systems for better triage and fix resolution.
Qualifications
- At least a Bachelor's in Computer Science or Engineering plus 5 years of experience, an MS in Computer Science or Engineering plus 5 years of experience, or equivalent work experience.
- Knowledge of one or more of Go, Python, or bash shell scripting, sufficient to implement medium-complexity automation workflows.
- Knowledge of Linux (or UNIX) from an administration and debugging perspective.
- Hands-on experience operating software systems (infrastructure, complex applications, etc.) at scale.
- Experience in server provisioning (especially from a storage and networking perspective).
- Strong problem-solving and software troubleshooting skills.
- Experience with infrastructure-as-code.

Desirable to have one or more of the following skills:
- Experience managing databases, e.g., PostgreSQL or an equivalent RDBMS.
- Experience with Docker and virtualization technologies.
- Experience managing a monitoring stack (Prometheus, Grafana, etc.).
- Experience managing Artifactory, a Docker registry, etc.
- Experience managing CI/CD systems like GitLab tools, Spinnaker, etc.
- Experience with infrastructure-as-code frameworks like Terraform.
- Experience with container orchestration via Kubernetes.

Additional Information
Arista stands out as an engineering-centric company. Our leadership, including founders and engineering managers, are all engineers who understand sound software engineering principles and the importance of doing things right. We hire globally into our diverse team. At Arista, engineers have complete ownership of their projects. Our management structure is flat and streamlined, and software engineering is led by those who understand it best. We prioritize the development and utilization of test automation tools. Our engineers have access to every part of the company, providing opportunities to work across various domains. Arista is headquartered in Santa Clara, California, with development offices in Australia, Canada, India, Ireland, and the US. We consider all our R&D centers equal in stature. Join us to shape the future of networking and be part of a culture that values invention, quality, respect, and fun.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

mumbai, maharashtra, india

On-site

About Us: Fluent Health is a dynamic healthcare startup revolutionizing how you manage your healthcare and that of your family. The company will provide customers with high-quality, personalized options, credible information through trustworthy content, and absolute privacy. To assist us in our growth journey, we are seeking a highly motivated and experienced Senior Data Engineer to play a pivotal role in our future success.

Company Website: https://fluentinhealth.com/

Job Description: We're looking for a Senior Data Engineer to lead the design, implementation, and optimization of our analytical and real-time data platform. In this hybrid role, you'll combine hands-on data engineering with high-level architectural thinking to build scalable data infrastructure, with ClickHouse as the cornerstone of our analytics and data warehousing strategy. You'll work closely with engineering, product, analytics, and compliance teams to establish data best practices, ensure data governance, and unlock insights for internal teams and future data monetization initiatives.

Responsibilities:

Architecture & Strategy:
- Own and evolve the target data architecture, with a focus on ClickHouse for large-scale analytical and real-time querying workloads.
- Define and maintain a scalable and secure data platform architecture that supports various use cases, including real-time analytics, reporting, and ML applications.
- Set data governance and modeling standards, and ensure data lineage, integrity, and security practices are followed.
- Evaluate and integrate complementary technologies into the data stack (e.g., message queues, data lakes, orchestration frameworks).

Data Engineering:
- Design, develop, and maintain robust ETL/ELT pipelines to ingest and transform data from diverse sources into our data warehouse.
- Optimize ClickHouse schema and query performance for real-time and historical analytics workloads (see the schema sketch after this posting).
- Build data APIs and interfaces for product and analytics teams to interact with the data platform.
- Implement monitoring and observability tools to ensure pipeline reliability and data quality.

Collaboration & Leadership:
- Collaborate with data consumers (e.g., product managers, data analysts, ML engineers) to understand data needs and translate them into scalable solutions.
- Work with security and compliance teams to implement data privacy, classification, retention, and access control policies.
- Mentor junior data engineers and contribute to hiring efforts as we scale the team.

Qualifications:
- 5-7 years of experience in Data Engineering, with at least 2-4 years in a Senior or Architectural role.
- Expert-level proficiency in ClickHouse or similar columnar databases (e.g., BigQuery, Druid, Redshift).
- Proven experience designing and operating scalable data warehouse and data lake architectures.
- Deep understanding of data modeling, partitioning, indexing, and query optimization techniques.
- Strong experience building ETL/ELT pipelines using tools like Airflow, dbt, or custom frameworks.
- Familiarity with stream processing and event-driven architecture (e.g., Kafka, Pub/Sub).
- Proficiency with SQL and at least one programming language like Python, Scala, or Java.
- Experience with data governance, compliance frameworks (e.g., HIPAA, GDPR), and data cataloging tools.
- Knowledge of real-time analytics use cases and streaming architectures.
- Familiarity with machine learning pipelines and integrating data platforms with ML workflows.
- Experience working in regulated or high-security domains like Healthtech, Fintech, or Enterprise SaaS.
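To make the ClickHouse schema work mentioned above concrete, here is a minimal sketch of a time-series table definition issued via the clickhouse-connect client; the table, columns, and partitioning choices are illustrative assumptions, not Fluent Health's actual schema.

```python
# Minimal sketch: a ClickHouse table shaped for time-series analytics, created
# via the clickhouse-connect client. Table, columns, and partitioning are
# illustrative assumptions, not an actual production schema.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")

client.command(
    """
    CREATE TABLE IF NOT EXISTS events
    (
        event_time DateTime,
        user_id    UInt64,
        event_type LowCardinality(String),
        payload    String
    )
    ENGINE = MergeTree
    -- Monthly partitions keep merges and retention drops cheap; the ORDER BY
    -- key should lead with the most common filter columns so queries can
    -- skip granules instead of scanning the whole table.
    PARTITION BY toYYYYMM(event_time)
    ORDER BY (event_type, event_time)
    """
)
```

Choosing the ORDER BY key to match the dominant query filters is typically the single biggest lever for ClickHouse query performance.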

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

bengaluru

Work from Office

Meet the Team
Cisco Cloud Security Group is at the forefront of developing cloud-delivered security to meet the needs and challenges of our customers. The Cloud Security group focuses on developing cloud-delivered security solutions (SaaS based) in a platform-centric approach. This group was formed a couple of years ago by combining some of the existing cloud assets Cisco had with two hugely successful acquisitions - OpenDNS and CloudLock. Our vision is to build the most complex security solutions in a cloud-delivered way with utmost simplicity - disrupting the industry's thinking around how deep and how broad a security solution can be while keeping it easy to deploy and simple to manage. We are at an exciting stage of this journey and looking for a passionate, innovative, and action-oriented engineering leader to build next-gen cloud security solutions like Cloud Firewall, IPS, IDS, etc.

We are seeking a skilled developer to join our control plane team. The ideal candidate will design, develop, and maintain scalable software solutions that detect anomalies in large-scale data environments. This role requires strong programming skills, experience with machine learning or statistical anomaly detection techniques, and the ability to work collaboratively in a fast-paced environment.

Your Impact:
- Develop, test, and deploy software components for anomaly detection systems.
- Collaborate with data scientists and engineers to implement machine learning models for anomaly detection (see the sketch after this posting).
- Optimize algorithms for real-time anomaly detection and alerting.
- Analyze large datasets to identify patterns and improve detection accuracy.
- Maintain and enhance existing anomaly detection infrastructure.
- Participate in code reviews, design discussions, and agile development processes.
- Troubleshoot and resolve issues related to anomaly detection applications.
- Document development processes, system designs, and operational procedures.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of relevant industry experience in development.
- 5+ years of strong proficiency in programming languages such as Python, Java, or Go.
- 5+ years of experience with big data technologies (e.g., Hadoop, Spark, Flink) and data processing pipelines.
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Knowledge of anomaly detection techniques and statistical analysis.
- Understanding of cloud platforms and distributed databases (Snowflake, ClickHouse), with experience in containerization.
- Hands-on, sound knowledge of technologies such as Kafka, Kubernetes, and NoSQL.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience working on anomaly detection or cybersecurity systems.
- Background in data science, statistics, or related fields.
- Familiarity with monitoring and alerting tools.
- Experience with CI/CD pipelines and automated testing.
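As a rough illustration of the statistical anomaly detection this posting mentions, here is a minimal scikit-learn sketch; the synthetic data and contamination rate are assumptions for demonstration only, not Cisco's approach.

```python
# Minimal sketch: statistical anomaly detection with scikit-learn's
# IsolationForest. The synthetic data and contamination rate are
# illustrative assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # typical traffic
outliers = rng.uniform(low=-8.0, high=8.0, size=(20, 2))  # injected anomalies
X = np.vstack([normal, outliers])

# contamination is the expected share of anomalies in the data.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(X)} points as anomalous")
```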

Posted 1 week ago

Apply

18.0 - 23.0 years

35 - 40 Lacs

bengaluru

Hybrid

Director - Engineering - CPaaS (Open Source Contributor, Database Internals)

About the Team
At Cloud Platform as a Service, our core mission is to architect, develop, and maintain a robust, enterprise-grade data and compute platform. This platform is meticulously crafted using cutting-edge open-source technologies, providing the foundational infrastructure upon which numerous product engineering teams at Nutanix build and deliver their exceptional solutions to our valued customers.

Your Role
- Lead & Grow: Hire, mentor, and develop a high-performing engineering organization of 30+ engineers, including first-line managers and staff engineers.
- Drive Delivery: Collaborate across geographies to plan and execute end-to-end delivery of critical platform projects.
- Architect & Innovate: Partner with product and architecture teams to define technical vision, influence product strategy, and ensure robust, scalable designs in line with best practices.
- Cross-Functional Influence: Work closely with Dev and QA teams to align on priorities and drive shared goals.
- Community & Open Source: Engage with open-source communities and vendors, leveraging existing projects, contributing enhancements, and steering integrations to meet Nutanix's objectives.

What You Will Bring
- Systems & Architecture: Deep understanding of OS internals, networking, containers (Docker, Kubernetes, service mesh), and distributed systems. Strong experience with Linux.
- Languages: Hands-on experience in one of Go, C/C++, Java, or Python at scale.
- Database & Messaging: Knowledge of distributed OLTP/OLAP databases and queuing/caching systems (e.g., NATS, PostgreSQL, ClickHouse, Redis, Cassandra).
- Open Source: Experience contributing to or maintaining open-source projects. Demonstrated understanding of open-source distributed databases and streaming systems, including the tradeoffs involved in developing clustered, high-performance, and fault-tolerant system software.

Qualifications
- BS/MS or PhD in Computer Science, Engineering, or equivalent.
- 18+ years of experience, including 7+ years leading and scaling engineering teams, with a track record of hiring, coaching, and performance management.
- Proven hands-on technical management.
- Experience working in a high-growth multinational company environment.

Work Arrangement
Hybrid: This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

The primary purpose of your role is to design and construct high-performance trading systems and data infrastructure for Nuvama's capital markets operations. Your responsibilities will include building trading execution systems, market data pipelines, and backtesting frameworks, and collaborating with traders to develop custom solutions. It is crucial to ensure ultra-low-latency execution, high data accuracy, and system uptime to meet the performance metrics and deliver efficient trading solutions.

As a qualified candidate, you are required to hold a Bachelor's/Master's degree in Computer Science, Engineering, Mathematics, Physics, or Quantitative Finance. Additionally, you should possess 2-5 years of hands-on experience in quantitative finance or financial technology, with recent exposure to equity markets and trading systems. Technical certifications in AWS, Databricks, or financial industry certifications are preferred qualifications for this role.

Your technical competencies should include expertise in programming languages like PySpark, Scala, Rust, C++, and Java, along with proficiency in Python ecosystem tools for quantitative analysis. You must also have experience in data engineering, system design principles, and developing trading systems from scratch. Knowledge of financial markets, trading mechanics, and algorithmic trading strategy development is essential for this position.

In terms of behavioral competencies, you should demonstrate technical leadership, innovation, collaboration with stakeholders, and a focus on project execution and delivery. Your ability to understand market dynamics and regulatory requirements, and to continuously adapt to market evolution, will be critical for success in this role. Moreover, staying current with technology advancements in quantitative finance, data engineering, and trading technology is essential for continuous learning and improvement.

Overall, your role will involve designing scalable trading systems, implementing real-time data infrastructure, and collaborating with traders to optimize trading execution and risk management platforms. Your technical expertise, market knowledge, and behavioral competencies will be key to achieving high performance and operational efficiency in Nuvama's capital markets operations.

Posted 1 week ago

Apply

0.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About Us
At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world's largest networks, which powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine's Top Company Cultures list and ranked among the World's Most Innovative Companies by Fast Company.

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us!

Available Locations: Bengaluru

About The Department
The Growth Engineering team is responsible for building world-class experiences that help the millions of Cloudflare self-service customers get what they need faster, from acquisition and onboarding all the way through to adoption and scale-up. Our team is focused on high-velocity experimentation and thoughtful optimizations to that experience on Cloudflare's properties. This team has a dual mandate, also focusing on evolving our current marketing attribution, customer event ingress, and experimentation capabilities that process billions of events across those properties to drive data-driven decision making. As an engineer for the team responsible for Data Capture and Experimentation, your job will be to deliver on those growth-driven features and experiences while evolving our current marketing attribution, consumer event ingress, and experimentation setup across these experiences, and partner with many teams on implementations.

About The Role
We are looking for experienced full-stack engineers to join the Experimentation and Data Capture team. The ideal candidate will have experience working with large-scale applications, familiarity with event-driven data capture, and a strong understanding of system design. You must care deeply not only about the quality of your and the team's code, but also the customer experience and developer experience. We have a great opportunity to evolve our current data capture and experimentation systems to better serve our customers. We are also strong believers in dog-fooding our own products. From cache configuration to Cloudflare Access, Cloudflare Workers, and Zaraz, these are all tools in our engineers' tool belts, so it is a plus if you have been a customer of ours, even as a free user.
What You'll Do
The Experimentation and Data Capture Engineering Team will be responsible for the following:
- Technical delivery for Experimentation and Data Capture capabilities intended for all of our customer-facing UI properties, driving user acquisition, engagement, and retention through data-driven strategies and technical implementations.
- Collaborate with product, design, and stakeholders to establish outcome measurements, roadmaps, and key deliverables.
- Own and lead execution of engineering projects in the area of web data acquisition and experimentation.
- Work across the entire product lifecycle from conceptualization through production.
- Build features end-to-end: front-end, back-end, IaC, system design, debugging, and testing, engaging with feature teams and data processing teams.
- Inspire and mentor less experienced engineers.
- Work closely with the trust and safety team to handle any compliance or data privacy-related matters.

Examples of Desirable Skills, Knowledge, and Experience
- Comfort with building reusable SDKs and UI components with TypeScript/JavaScript required; comfort/familiarity with other languages (Go/Rust/Python) a plus.
- Experience building with high-scale serverless systems like Cloudflare Workers, AWS Lambda, Azure Functions, etc.
- Design and execute A/B tests and experiments to optimize for business KPIs, including user onboarding, feature adoption, and overall product experience (see the assignment sketch after this section).
- Create reusable components for other developers to leverage.
- Experience with publishing to and querying from data lake/warehouse products like ClickHouse and Apache Iceberg to evaluate experiments. Familiarity with commercial analytics systems (Adobe Analytics, Google BigQuery, etc.) a plus.
- Implement tracking and attribution systems to understand user behavior and measure the effectiveness of growth initiatives.
- Familiarity with event-driven architectures, high-scale data processing, the issues that can occur, and how to protect against them.
- Familiarity with global data privacy requirements governed by laws like GDPR/CCPA/etc., and the implications for data capture, modeling, and analysis.
- Desire to work in a very fast-paced environment.

What Makes Cloudflare Special
We're not just a highly ambitious, large-scale technology company. We're a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: Since 2014, we've equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare's enterprise customers - at no cost.

Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we've provided services to more than 425 local government election websites in 33 states.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here's the deal - we don't store client IP addresses, never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.
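For context on the A/B testing skill mentioned above, here is a minimal sketch of deterministic variant assignment by hashing, a common way to keep experiment buckets stable across sessions; the experiment name and split are illustrative assumptions, not Cloudflare's implementation.

```python
# Minimal sketch: deterministic A/B variant assignment by hashing a user id,
# a common way to keep experiment buckets stable across sessions without
# storing per-user state. Experiment name and split are illustrative.
import hashlib


def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    # Hashing (experiment, user) gives each experiment an independent,
    # repeatable split: the same user always lands in the same bucket.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"


print(assign_variant("user-123", "onboarding-v2"))  # stable across calls
```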
Sound like something you'd like to be a part of? We'd love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at [HIDDEN TEXT] or via mail at 101 Townsend St., San Francisco, CA 94107.

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

hyderabad, telangana, india

On-site

Requirements
- 6+ years of hands-on experience as a Full Stack (backend-heavy) engineer.
- Strong proficiency in Python (Flask, FastAPI, Django, etc.); for the frontend, React, JavaScript, TypeScript, HTML, and CSS.
- Solid experience with microservices architecture and containerized environments (Docker, Kubernetes, EKS).
- Proven expertise in REST API design, rate limiting, security, and performance optimization.
- Familiarity with NoSQL and SQL databases (MongoDB, PostgreSQL, DynamoDB, ClickHouse).
- Experience with cloud platforms (AWS, Azure, or GCP; AWS preferred).
- CI/CD and Infrastructure as Code (Jenkins, GitHub Actions, Terraform).
- Exposure to distributed systems, data processing, and event-based architectures (Kafka, SQS).
- Excellent written and verbal communication skills.
- Certifications in System Design or Cloud Architecture.
- Experience working in agile, distributed teams with a strong ownership mindset.
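As an illustration of the backend-heavy FastAPI work this posting asks for, here is a minimal sketch of a validated REST endpoint; the Item model and the in-memory store are illustrative assumptions, not part of the listing.

```python
# Minimal sketch: a validated REST endpoint in FastAPI. The Item model and
# the in-memory store are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


_items: dict[int, Item] = {}  # stand-in for a real database


@app.put("/items/{item_id}")
def upsert_item(item_id: int, item: Item) -> dict:
    _items[item_id] = item  # body is validated against Item before we get here
    return {"item_id": item_id, **item.model_dump()}  # model_dump is pydantic v2


@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="item not found")
    return _items[item_id].model_dump()
```

Run with `uvicorn app:app --reload` (assuming the file is saved as app.py); FastAPI generates request validation and OpenAPI docs from the type hints.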

Posted 1 week ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

hyderabad, pune

Work from Office

Hello Candidate,

Greetings from Hungry Bird IT Consulting Services Pvt Ltd. We are hiring a Lead Data Engineer for our client.

Job Title: Lead Data Engineer
Location: Hyderabad
Job Type: Full-Time
Experience: 5+ years (2+ years in a leadership role)

Role Overview:
We are looking for a highly skilled and experienced Lead Data Engineer to oversee the design, development, and management of scalable data infrastructure and pipelines. You will work closely with cross-functional teams, lead a team of engineers, and drive best practices in data engineering using modern cloud and big data technologies like Databricks, AWS, Airflow, and FastAPI.

Key Responsibilities:
- Lead the design, development, and optimization of robust ETL pipelines and data workflows (see the orchestration sketch after this posting).
- Architect scalable and secure data solutions using Databricks and AWS.
- Manage and optimize data storage using AWS S3, following data lake and warehouse best practices.
- Implement centralized data governance using Databricks Unity Catalog.
- Develop data APIs using FastAPI for internal and external integrations.
- Use Airflow for workflow orchestration and automation.
- Work with columnar databases like Rockset, ClickHouse, and CrateDB.
- Ensure data quality, security, and compliance across all systems.
- Collaborate with data scientists and analysts to meet analytical and ML needs.
- Mentor and guide junior data engineers, promoting code quality and reusability.
- Stay updated with new tools and technologies to enhance data infrastructure.
- Contribute to cross-functional projects with technical leadership.

Required Qualifications:
- Bachelor's/Master's in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering, with 2+ years in a lead role.
- Strong expertise in Python, PySpark, and SQL.
- Hands-on experience with Databricks and AWS (S3, IAM, Lambda, ECR).
- Experience with Airflow, FastAPI, and columnar databases.
- Deep understanding of ETL pipelines, data modeling, and data warehousing.
- Familiarity with Git, CI/CD, and Agile methodologies.
- Knowledge of data governance and compliance standards.

Preferred Qualifications:
- Experience with real-time data processing/streaming.
- Familiarity with MLOps and machine learning workflows.
- Certifications in Databricks and/or AWS.
- Exposure to data mesh or data fabric architectures.
- Understanding of metadata management and data lineage.

Tech Stack:
- Languages & Frameworks: Python, PySpark, SQL, FastAPI
- Cloud: AWS (S3, IAM, Lambda, ECR)
- Big Data: Databricks
- Orchestration: Airflow
- Databases: Rockset, ClickHouse, CrateDB
- Governance: Databricks Unity Catalog

(Interested candidates can share their CV with us or reach us at aradhana@hungrybird.in.)

PLEASE MENTION THE RELEVANT POSITION IN THE SUBJECT LINE OF THE EMAIL. Example: KRISHNA, HR MANAGER, 7 YEARS, 20 DAYS NOTICE.

Name:
Position applying for:
Total experience:
Notice period:
Current Salary:
Expected Salary:

Thanks and Regards,
Aradhana
+919959417171
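To make the Airflow orchestration above concrete, here is a minimal sketch of a DAG wiring extract, transform, and load tasks; the task bodies are stubs and the daily schedule is an illustrative assumption (the schedule= argument is Airflow 2.4+; older versions use schedule_interval=).

```python
# Minimal sketch: an Airflow DAG wiring extract -> transform -> load tasks.
# Task bodies are stubs; the daily schedule is an illustrative assumption.
# Note: schedule= is Airflow 2.4+; older versions use schedule_interval=.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")


def transform():
    print("clean and reshape the extracted data")


def load():
    print("write results to the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task  # declares task ordering
```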

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Location: Hyderabad

Job Requirements - Full Stack Engineer
We are seeking an experienced Full Stack Engineer with strong backend expertise and a proven ability to build scalable, secure, and high-performing applications. The ideal candidate should bring technical depth along with collaboration skills to thrive in agile environments.

Key Requirements:
- Minimum 6+ years of professional experience as a Full Stack Engineer, with emphasis on backend development.
- Strong proficiency in Python frameworks such as Flask, FastAPI, or Django.
- Frontend development experience using React, JavaScript, TypeScript, HTML, and CSS.
- Solid understanding of microservices and containerized environments (Docker, Kubernetes, EKS).
- Expertise in REST API design with a focus on performance, security, and scalability.
- Hands-on experience with SQL and NoSQL databases (PostgreSQL, MongoDB, DynamoDB, ClickHouse).
- Working knowledge of AWS; exposure to Azure or GCP is an advantage.
- Experience with CI/CD pipelines and Infrastructure as Code (Jenkins, GitHub Actions, Terraform).
- Familiarity with distributed systems, event-driven architecture, and tools like Kafka or SQS.
- Excellent communication, problem-solving, and teamwork skills.
- Cloud or System Design certifications will be considered a plus.

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

bengaluru, karnataka, india

Remote

About Sibros Technologies

Who We Are
Sibros is accelerating the future of SDV excellence with its Deep Connected Platform that orchestrates full vehicle software update management, vehicle analytics, and remote commands in one integrated system. Adaptable to any vehicle architecture, the Sibros platform meets stringent safety, security, and compliance standards, propelling OEMs to innovate new connected vehicle use cases across fleet management, predictive maintenance, data monetization, and beyond. Learn more at www.sibros.tech.

Our Mission
Our mission is to help our customers get the most value out of their connected devices.

Follow us on LinkedIn | Youtube | Instagram

About The Role
Job Title: Software Engineer II
Experience: 3 - 6 years

At Sibros, we are building the foundational data infrastructure that powers the software-defined future of mobility. One of our most impactful products, Deep Logger, enables rich, scalable, and intelligent data collection from connected vehicles, unlocking insights that were previously inaccessible. Our platform ingests high-frequency telemetry, diagnostic signals, user behavior, and system health data from vehicles across the globe. We transform this into actionable intelligence through real-time analytics, geofence-driven alerting, and predictive modeling for use cases like trip intelligence, fault detection, battery health, and driver safety.

We're looking for a Software Engineer II to help scale the backend systems that support Deep Logger's data pipeline, from ingestion and streaming analytics to long-term storage and ML model integration. You'll play a key role in designing high-throughput, low-latency systems that operate reliably in production, even as data volumes scale to billions of events per day.

What You'll Do
- Understand the limitations of various cloud-native/open-source streaming and data solutions, and leverage them to build data-driven applications such as trip processing, geofence alerting, and vehicle component health prediction.
- Develop and optimize streaming data pipelines using Apache Beam, Flink, and Google Cloud Dataflow (see the windowing sketch after this posting).
- Collaborate closely with firmware engineers, frontend engineers, and product owners to build highly scalable solutions that provide fleet insights.
- Wear multiple hats in a fast-paced startup environment, adapting to new challenges and responsibilities.
- Understand customer requirements and convert them into engineering ideas to build innovative real-time data applications.

What You Should Know
- Over 3 years of experience in software engineering.
- Excellent understanding of computer science fundamentals, data structures, and algorithms.
- Strong track record in designing and implementing large-scale distributed systems.
- Willingness to wear multiple hats and adapt to a fast-paced startup environment.
- Proficiency in writing production-grade code in GoLang or Java.
- Hands-on experience with Kubernetes, Lambda, and cloud-native services, preferably in Google Cloud or AWS environments.
- Experience with Apache Beam/Flink for building and deploying large-scale data processing applications.
- Passion for the vision and mission of the company, and interest in solving challenging problems in the automotive IoT domain.

Preferred Qualifications
- Experience designing and building systems for large-scale IoT deployments, including data collection, processing, and analysis.
- Experience with streaming and batch processing models using open-source tools such as Apache Kafka, Flink, and Beam.
- Expertise in building cloud-native solutions using Google Cloud, AWS, or Azure.
- Experience working with large-scale time-series databases such as Apache Druid or ClickHouse.
- Experience working with open-source stream processing tools such as Apache Kafka and Flink.

What We Offer
- Competitive compensation package with performance incentives.
- A dynamic work environment with a flat hierarchy and the opportunity for rapid career advancement.
- Collaboration with a dynamic team that's passionate about solving complex problems in the automotive IoT space.
- Access to continuous learning and development opportunities.
- Flexible working hours to accommodate different time zones.
- Comprehensive benefits package including health insurance and wellness programs.
- A culture that values innovation and promotes work-life balance.

Equal Opportunity Employer
We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, or disability status.
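As a rough illustration of the Beam pipelines this posting describes, here is a minimal windowed-aggregation sketch in the Apache Beam Python SDK; the event tuples and the 60-second window are illustrative assumptions, not Sibros code.

```python
# Minimal sketch: a windowed aggregation in the Apache Beam Python SDK.
# The event tuples and the 60-second window are illustrative assumptions.
import apache_beam as beam
from apache_beam.transforms import window

# (vehicle_id, speed_kmh, event_time_seconds) - toy batch stand-in for a stream
events = [
    ("veh-1", 62.0, 0.0),
    ("veh-1", 71.0, 30.0),
    ("veh-2", 48.0, 45.0),
    ("veh-1", 66.0, 90.0),
]

with beam.Pipeline() as p:
    (
        p
        | beam.Create(events)
        # Attach event-time timestamps so windowing has something to act on.
        | beam.Map(lambda e: beam.window.TimestampedValue((e[0], e[1]), e[2]))
        | beam.WindowInto(window.FixedWindows(60))  # 60-second tumbling windows
        | beam.combiners.Mean.PerKey()  # mean speed per vehicle per window
        | beam.Map(print)
    )
```

The same pipeline shape runs unchanged on Dataflow by swapping the runner, which is the main appeal of Beam's portability model.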

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

karnataka

On-site

As a skilled engineer, you will play a crucial role in designing robust and scalable data platforms and microservices. Your primary responsibility will be ensuring that architectural and coding best practices are adhered to in order to deliver secure, reliable, and maintainable systems. You will actively engage in debugging complex system issues and work collaboratively with cross-functional teams to define project scope, goals, and deliverables.

In this role, you will be tasked with managing priorities, effectively allocating resources, and ensuring the timely delivery of projects. Additionally, you will have the opportunity to lead, mentor, and cultivate a team of 10-15 engineers, fostering a collaborative and high-performance culture within the organization. Conducting regular one-on-one sessions, providing career development support, and managing performance evaluations will also be part of your responsibilities.

Driving innovation is a key aspect of this position, where you will be expected to identify new technologies and methodologies to enhance systems and processes. You will be responsible for defining and tracking Key Performance Indicators (KPIs) to measure engineering efficiency, system performance, and team productivity. Collaborating closely with Product Managers, Data Scientists, Customer Success engineers, and other stakeholders to align efforts with business goals will also be essential. In addition, you will partner with other engineering teams to deliver cross-cutting features, contributing to the overall success of the organization.

Requirements:
- 10-15 years of relevant experience.
- Excellent leadership, communication, and interpersonal skills to effectively manage a diverse and geographically distributed team.
- Hands-on experience in building and scaling data platforms and microservices-based products.
- Proficiency in programming languages commonly used for backend and data engineering, such as Java, Python, and Go.
- Operational experience with tools like Kafka/Kafka Streams, Spark, Databricks, Apache Iceberg, Apache Druid, and ClickHouse (see the consumer sketch after this posting).
- Familiarity with relational and NoSQL databases.

If you are looking for a challenging yet rewarding opportunity to contribute to the growth and success of Traceable AI, this position could be the perfect fit for you.
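For context on the Kafka experience this posting lists, here is a minimal consumer sketch using the confluent-kafka Python client; the broker address, topic, and group id are illustrative assumptions.

```python
# Minimal sketch: consuming a Kafka topic from Python with confluent-kafka.
# Broker address, topic, and group id are illustrative assumptions.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics-readers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()!r}")
finally:
    consumer.close()  # commits offsets and leaves the consumer group cleanly
```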

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

bengaluru

Work from Office

Job Description:
Job Title: QA Engineer, AVP
Location: Bangalore, India

Role Description
We're looking for a QA Engineer in Foreign Exchange Technology at Deutsche Bank. Global Foreign Exchange (GFX) is a vital part of the DB Investment Bank that provides our clients with many ways to manage their currency risk. Deutsche Bank has been ranked the Overall FX Market Leader by market share in the 2022 Euromoney FX Survey. Deutsche Bank ranked top in 50% of all categories in the survey, including No. 1 for overall market share, swaps (unadjusted), options, overall electronic FX, emerging market volume (unadjusted swaps), precious metals, banks, and non-financial corporates.

GFX heavily relies on its technology to stay ahead of the competition. Our products are used by clients, trading desks, sales, and operations. They provide connectivity with brokers, exchanges, and clearing houses. The development of our products gives engineers a unique ability to learn the business and work with big data and analytics. We use MongoDB, ClickHouse, Kafka, Redis, Ignite, gRPC, Spark, Tableau, and other technologies to build our platform. We build cloud-ready solutions and host them on-premises (GCP Anthos) and on public clouds (GCP GKE). For writing frontends we leverage the Autobahn Platform, which is used by thousands of external and internal users.

Your key responsibilities
- Set up QA processes and test automation approaches from scratch and improve the existing processes/approaches.
- Plan and execute a full set of test activities, coordinating complex end-to-end testing across multiple systems.
- Drive implementation of testable architecture for developed applications.
- Develop and extend in-house testing automation tools and test frameworks.
- Take initiative and lead efforts to optimize the length of the test cycle and therefore the time to market for new functionality.

Your skills and experience
- 8+ years of professional experience working in Quality Assurance; test analysis skills; ability to design test cases using different test techniques.
- Proven track record in building QA processes and test automation approaches at large scale.
- Experience in testing complex distributed systems, testing at different levels.
- Strong technical background; experience with SQL and Unix commands.
- Experience programming in at least one of the following languages: Java, Kotlin, Python, or JS/TS.

Soft Skills
- Problem solving; ability to take ownership of a task until its completion, rather than just coding or testing.
- Team player, open to collaboration, able to work in a distributed team.
- Good communication skills; ability to work with business and support teams; spoken/written English.
- Eager to learn about new technology and gain new skills as required.
- Attention to detail, discipline.
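As an illustration of the test-design skills this posting asks for, here is a minimal table-driven pytest sketch; the function under test is a hypothetical stub, not Deutsche Bank code.

```python
# Minimal sketch: table-driven test cases with pytest.mark.parametrize, one
# common test-design technique. The function under test is a hypothetical stub.
import pytest


def normalize_currency_pair(pair: str) -> str:
    """Canonicalize an FX pair like 'eur/usd' to 'EURUSD'."""
    cleaned = pair.replace("/", "").replace("-", "").strip().upper()
    if len(cleaned) != 6:
        raise ValueError(f"not a currency pair: {pair!r}")
    return cleaned


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("eur/usd", "EURUSD"),
        ("GBP-JPY", "GBPJPY"),
        (" usdchf ", "USDCHF"),
    ],
)
def test_normalize_valid_pairs(raw: str, expected: str) -> None:
    assert normalize_currency_pair(raw) == expected


def test_normalize_rejects_garbage() -> None:
    with pytest.raises(ValueError):
        normalize_currency_pair("not-a-pair")
```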

Posted 1 week ago

Apply

4.0 - 6.0 years

30 - 35 Lacs

bengaluru

Work from Office

At LeadSquared, we like being up to date with the latest technology and utilizing trending tech stacks to build our product. By joining the engineering team, you get to work first-hand with the latest web and mobile technologies and solve the challenges of scale, performance, security, and cost optimization. Our goal is to build the best SaaS platform for sales execution in the industry, and what better place than LeadSquared for an exciting career?

The Role:
We are looking for developers who have experience building high-performance microservices using Golang, Redis, and other AWS services. The role involves understanding business requirements and developing a solution that is secure, scalable, high-performing, and testable.

Must have:
- 4-6 years of experience in building high-performance APIs and services, preferably with Golang.
- Working experience with data streams (Kafka or AWS Kinesis).
- Experience working on large-scale enterprise applications following best practices.
- Strong debugging and troubleshooting skills, with a clear understanding of how to design and develop reusable, maintainable, and debuggable applications.
- Git experience is a prerequisite.

Good to have:
- Working experience with Kubernetes and microservices.
- Experience with an OLAP DB/DW like ClickHouse or Redshift.
- Working experience building and deploying applications on the AWS platform.

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

karnataka

On-site

As a Data Platform Engineering Manager at Traceable AI, you will be responsible for designing robust and scalable data platforms and microservices. It will be your duty to ensure that architectural and coding best practices are followed to deliver secure, reliable, and maintainable systems. You will actively participate in debugging complex system issues and work with cross-functional teams to define project scope, goals, and deliverables. Managing priorities, allocating resources effectively, and ensuring timely project delivery will be crucial aspects of your role.

In this position, you will lead, mentor, and grow a team of 10-15 engineers, fostering a collaborative and high-performance culture. Conducting regular one-on-ones, providing career development support, and managing performance evaluations will be part of your responsibilities. You will drive innovation by identifying new technologies and methodologies to improve systems and processes. Defining and tracking KPIs to measure engineering efficiency, system performance, and team productivity will also fall under your purview.

Collaboration with Product Managers, Data Scientists, Customer Success engineers, and other stakeholders to define requirements and align efforts with business goals will be essential. Additionally, partnering with fellow engineering teams to deliver cross-cutting features will be a key aspect of this role.

To be successful in this role, you should have:
- 10-15 years of experience and excellent leadership, communication, and interpersonal skills, with the ability to manage a diverse, geographically distributed team.
- Hands-on experience building and scaling data platforms and microservices-based products.
- Proficiency in programming languages commonly used for backend and data engineering, such as Java, Python, and Go.
- Operational experience with technologies like Kafka, Spark, Databricks, Apache Iceberg, Apache Druid, and ClickHouse, as well as relational and NoSQL databases.

Join Traceable AI and be part of a dynamic team that is at the forefront of innovation in data platform engineering.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

In this role, you will have a significant impact on the design, development, and optimization of scalable data products in the Telecom Analytics domain. Collaborating with diverse teams, you will implement AI-driven analytics, autonomous operations, and programmable data solutions. This position provides an exciting opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering skills, and contribute to the advancement of Nokia's data-driven telecom strategies. If you have a passion for creating innovative data solutions, mastering cloud and big data platforms, and thriving in a fast-paced, collaborative environment, then this role is tailored for you!

You will play a crucial role in various aspects, including but not limited to:
- Managing source data within the Metadata Hub and Data Catalog for effective Data Governance.
- Developing and executing data processing graphs using Express It and the Co-Operating System for ETL Development.
- Debugging and optimizing data processing graphs using the Graphical Development Environment (GDE) for ETL Optimization.
- Leveraging Ab Initio APIs for metadata and graph artifact management for API Integration.
- Implementing and maintaining CI/CD pipelines for metadata and graph deployments for CI/CD Implementation.
- Mentoring team members and promoting best practices in Ab Initio development and deployment for Team Leadership & Mentorship.

You should possess:
- A Bachelor's or Master's degree in computer science, Data Engineering, or a related field, with at least 8 years of experience in data engineering focusing on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modeling using BigQuery, ClickHouse, and SQL.
- Experience with data persistence technologies such as S3, HDFS, and Iceberg.
- Proficiency in Python and scripting languages.

Additional experience in the following areas would be beneficial:
- Data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools like Kafka.
- Background in building data products for the telecom domain and understanding AI and machine learning pipeline integration.

At Nokia, we are dedicated to driving innovation and technology leadership across mobile, fixed, and cloud networks. Join us to make a positive impact on people's lives and contribute to building a more productive, sustainable, and inclusive world. We foster an inclusive working environment where new ideas are welcomed, risks are encouraged, and authenticity is celebrated. Nokia offers continuous learning opportunities, well-being programs, support through employee resource groups, mentoring programs, and a diverse team with an inclusive culture where individuals can thrive and feel empowered. We are committed to inclusion and are proud to be an equal opportunity employer.

Join our team at Nokia, the growth engine leading the transition to cloud-native software and as-a-service delivery models for communication service providers and enterprise customers. Be part of a collaborative team of dreamers, doers, and disruptors who push boundaries from the impossible to the possible.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

pune

Work from Office

Role Description
We're looking for a QA Engineer in Foreign Exchange Technology at Deutsche Bank. Global Foreign Exchange (GFX) is a vital part of the DB Investment Bank that provides our clients with many ways to manage their currency risk. Deutsche Bank has been ranked the Overall FX Market Leader by market share in the 2022 Euromoney FX Survey. Deutsche Bank ranked top in 50% of all categories in the survey, including No. 1 for overall market share, swaps (unadjusted), options, overall electronic FX, emerging market volume (unadjusted swaps), precious metals, banks, and non-financial corporates.

GFX heavily relies on its technology to stay ahead of the competition. Our products are used by clients, trading desks, sales, and operations. They provide connectivity with brokers, exchanges, and clearing houses. The development of our products gives engineers a unique ability to learn the business and work with big data and analytics. We use MongoDB, ClickHouse, Kafka, Redis, Ignite, gRPC, Spark, Tableau, and other technologies to build our platform. We build cloud-ready solutions and host them on-premises (GCP Anthos) and on public clouds (GCP GKE). For writing frontends we leverage the Autobahn Platform, which is used by thousands of external and internal users.

Your key responsibilities
- Set up QA processes and test automation approaches from scratch and improve the existing processes/approaches.
- Plan and execute a full set of test activities, coordinating complex end-to-end testing across multiple systems.
- Drive implementation of testable architecture for developed applications.
- Develop and extend in-house testing automation tools and test frameworks.
- Take initiative and lead efforts to optimize the length of the test cycle and therefore the time to market for new functionality.

Your skills and experience
- 8+ years of professional experience working in Quality Assurance; test analysis skills; ability to design test cases using different test techniques.
- Proven track record in building QA processes and test automation approaches at large scale.
- Experience in testing complex distributed systems, testing at different levels.
- Strong technical background; experience with SQL and Unix commands.
- Experience programming in at least one of the following languages: Java, Kotlin, Python, or JS/TS.

Soft Skills
- Problem solving; ability to take ownership of a task until its completion, rather than just coding or testing.
- Team player, open to collaboration, able to work in a distributed team.
- Good communication skills; ability to work with business and support teams; spoken/written English.
- Eager to learn about new technology and gain new skills as required.
- Attention to detail, discipline.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

35 - 40 Lacs

mumbai

Work from Office

Role Description: Risk and Portfolio Management (RPM) is looking for extremely bright candidates with a finance/risk and coding background to work in a new first-line-of-defense Distressed Asset Management team. The role is categorized as Risk & Portfolio Manager and would suit a well-organized and collaborative individual looking to further develop their credit risk and portfolio management skills in a challenging, fast-paced environment, where both the team and the individual can make a significant contribution to the global Corporate Bank - Trade Finance and Lending (TF&L) business.

Your key responsibilities:

- Develop strategic analytical tools, data-driven applications and reporting in direct alignment with requirements from various revenue-generating desks within TF&L.
- Research & modelling: Lead quantitative research into the portfolio to steer proactive portfolio management of the book.
- Automation & innovation: Translate ideas into machine-assisted solutions and discover automation potential that improves efficiency and control within the bank's internal environment.
- Create a centralized database for distressed assets, among others, that consolidates fragmented data sources into a reliable database with automated data pipelines.
- Facilitate effective use of capital and resources via the establishment of new working tools (platforms) across TF&L portfolios.
- Monitor key financial and regulatory metrics such as Total Capital Demand, RWA, CRD4, Return on Equity and SVA to ensure alignment with defined targets and strategic objectives for TF&L.
- Increase transparency and real-time capital impact simulation at sector, country, client or even transaction level for non-performing and sub-performing exposures.
- Ensure compliance with relevant and applicable local and global regulatory and policy requirements.

Your skills and experience:

Technical skills:

- Advanced degree (or equivalent experience) in a quantitative field - finance, math, physics, computer science, econometrics, statistics or engineering.
- Strong programming skills with experience in Python and SQL, demonstrably gained in the financial services industry.
- Solid grasp of statistics/econometrics concepts including machine learning (standard regression, classification models and time-series techniques).
- Proficiency in code management via Git, standard IDEs/editors and modern development best practices.
- Data engineering skills and willingness to interact with APIs and large databases (SQL/Clickhouse).
- Web & visualisation: Experience building lightweight UIs and analytics dashboards using frameworks such as Flask/FastAPI and Plotly Dash, or comparable packages (e.g. ReportLab); experience with HTML is a plus.

Behavioral skills:

- Team spirit and willingness to work in a dynamic environment.
- Openness to adopting new technologies and finding ways to add value as automation levels progress.
- Ability to handle multiple, often competing tasks under tight deadlines with a focus on detail.
- Ability to explain complex ideas clearly to both technical and non-technical stakeholders.
- Able to think and work independently while supporting team goals and objectives.
- Decisive and performance-oriented.
- Demonstrated flexibility and willingness to work with a team based in Frankfurt.
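The "lightweight UIs and analytics dashboards" requirement above typically amounts to something like the following minimal Plotly Dash sketch. The exposure figures and column names are invented for illustration; in practice the DataFrame would be populated from the centralized database (e.g. a SQL/ClickHouse query).

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical distressed-exposure snapshot for demonstration purposes.
df = pd.DataFrame({
    "sector": ["Shipping", "Energy", "Retail", "Aviation"],
    "exposure_eur_m": [120.5, 310.0, 85.2, 190.7],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Distressed exposure by sector (illustrative)"),
    dcc.Graph(figure=px.bar(df, x="sector", y="exposure_eur_m")),
])

if __name__ == "__main__":
    app.run(debug=True)  # older Dash versions use app.run_server(...)
```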

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

You are an experienced Python developer focused on the stock market domain, with a minimum of 4 years of relevant experience. As a Python developer, you will be responsible for a range of data engineering tasks, drawing on your expertise in Python programming.

Your qualifications include a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, and at least 4 years of hands-on experience in Python programming for data engineering tasks. Proficiency in web server frameworks such as Flask, FastAPI, and Django is essential, as is expertise in creating and managing workflows with Apache Airflow. Strong knowledge of SQL is required, along with experience in query optimization and handling large datasets. You should be well versed in data warehousing concepts, including schema design and star/snowflake schemas. Practical experience with cloud platforms such as GCP, AWS, or Azure is preferred, and familiarity with modern database systems like ClickHouse, BigQuery, Snowflake, or Redshift is advantageous.

Your responsibilities will include data modeling, pipeline automation, and monitoring. Experience with version control tools like Git is necessary, and excellent problem-solving and communication skills are essential.

Preferred qualifications include experience with distributed data processing frameworks like Spark, knowledge of NoSQL databases such as MongoDB or Cassandra, and familiarity with containerization and orchestration tools like Docker and Kubernetes. Prior experience with BI tools such as Superset, Tableau, or Power BI is a plus, as is knowledge of machine learning pipelines and their integration.

Please note that the interview will be conducted in person at our Fort office in Mumbai; candidates should bring their laptops. This is a full-time, permanent position with benefits including Provident Fund. The schedule is a day shift, Monday to Friday, with in-person work.
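For a sense of the Airflow work described above, a daily end-of-day price pipeline might be wired up as in the sketch below. The DAG id, task names, and function bodies are placeholders; a real pipeline would call a market-data API and load into the warehouse.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def fetch_eod_prices(**context):
    # Placeholder: pull end-of-day prices and stage them for loading.
    print("fetching prices for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: bulk-load the staged data into ClickHouse/BigQuery.
    print("loading prices for", context["ds"])


with DAG(
    dag_id="eod_price_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # `schedule=` on Airflow 2.4+
    catchup=False,
) as dag:
    fetch = PythonOperator(task_id="fetch_eod_prices", python_callable=fetch_eod_prices)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    fetch >> load
```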

Posted 2 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

hyderabad, telangana

On-site

As a Senior Data Engineer at our company in Hyderabad, India, you will lead the design, development, and maintenance of data pipelines and ETL processes. Your role will involve architecting and implementing scalable data solutions using Databricks and AWS, optimizing data storage and retrieval systems using Rockset, Clickhouse, and CrateDB, and developing data APIs using FastAPI. You will orchestrate and automate data workflows using Airflow, collaborate with data scientists and analysts, and ensure data quality, security, and compliance across all data systems. Mentoring junior data engineers, evaluating and implementing new data technologies, and participating in cross-functional projects are also part of the role, as are managing and optimizing data storage solutions using AWS S3 and implementing Databricks Unity Catalog for centralized data governance and access control.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with a minimum of 5 years of experience in data engineering, including 2-3 years in a lead role. Proficiency in Python, PySpark, SQL, Databricks, AWS cloud services, Airflow, FastAPI, and columnar databases like Rockset, Clickhouse, and CrateDB is essential. You should also have a solid understanding of data modeling, data warehousing, ETL processes, version control systems (e.g., Git), and CI/CD pipelines. Strong problem-solving skills, the ability to work in a fast-paced environment, excellent communication skills, and knowledge of data governance, security, and compliance best practices are also required. Experience designing and implementing data lake architectures using AWS S3 and familiarity with Databricks Unity Catalog or similar data governance tools are preferred.

In terms of skills and experience, you should be proficient in the following tech stack: Databricks, Python, PySpark, SQL, Airflow, FastAPI, AWS services such as S3, IAM, ECR, and Lambda, and Rockset, Clickhouse, and CrateDB.

Working with us comes with various benefits, including the opportunity to work on business challenges from a global clientele, self-development opportunities, sponsored certifications, tech talks, industry events, retirement benefits, flexible work hours, and a supportive work environment that encourages exploring passions beyond work. This role offers an exciting opportunity to contribute to cutting-edge solutions and advance your career in a dynamic and collaborative environment.
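As a small illustration of the pipeline work in this role, the PySpark sketch below rolls raw events stored in S3 up into daily counts. The bucket paths and column names are placeholders invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read raw events from the data lake (path is a placeholder).
events = spark.read.parquet("s3://example-data-lake/raw/events/")

# Aggregate to one row per (day, event type).
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"))
)

daily_counts.write.mode("overwrite").parquet(
    "s3://example-data-lake/curated/daily_event_counts/"
)
```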

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a skilled Full Stack Developer at our company, you will be an integral part of the Product Engineering team. Your main responsibilities will include designing and developing robust applications and working closely with teams across different locations to deliver high-quality solutions within specified timelines. Staying current with the latest technologies and design principles is essential to excel in this role.

Your day-to-day tasks will involve designing and developing technical solutions based on specific requirements, building and maintaining enterprise-grade SaaS software using Agile methodologies, contributing to performance tuning and optimization efforts, creating and executing unit tests for product components, participating in peer code reviews, and ensuring high quality, scalability, and timely project completion. You will primarily work with technologies such as Golang/Core Java, J2EE, Struts, Spring, client-side scripting, Hibernate, and various databases to build scalable core Java applications, web applications, and web services.

To be successful in this role, you should have a Bachelor's degree in Engineering, Computer Science, or equivalent experience, a solid understanding of data structures, algorithms, and their applications, hands-on experience with Looker APIs, dashboards, and LookML, strong problem-solving skills, and analytical reasoning. You must also have experience building microservices with Golang/Spring Boot (Spring Cloud, Spring Data), developing and consuming REST APIs, profiling applications, and working with front-end frameworks like Angular or Vue, and be proficient in the Software Development Life Cycle (SDLC). Familiarity with basic SQL queries and experience with Java Spring Boot, Kafka, SQL, Linux, Apache, and Redis are required. Experience with AWS cloud technologies (Go, Python, MongoDB, Postgres, ClickHouse) will be considered a plus. Excellent written and verbal communication skills are also essential.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

indore, madhya pradesh

On-site

As a Senior Full-Stack Developer (AI & SaaS) at our company in Indore, you will play a crucial role in leading the development of Nebverse, a next-gen AI-powered ERP system. Your primary responsibility will be to architect, develop, and scale our SaaS-based ERP system, which integrates DeepSeek AI to automate HR, sales, and financial operations. To excel in this role, you must bring expertise in AI, cloud architecture, and scalable SaaS solutions, and you will lead a team of interns and junior developers building a cutting-edge AI-driven ERP system from the ground up.

Your key responsibilities:

- Lead the end-to-end development of the Nebverse SaaS ERP.
- Architect and implement scalable APIs using Go Fiber/FastAPI.
- Work on AI-driven automation across HR, sales, and financial functions.
- Integrate DeepSeek AI for text, voice, and video interactions.
- Implement facial recognition and emotion detection for smart AI interactions.
- Optimize databases such as PostgreSQL, Redis, and ClickHouse for high performance.
- Deploy and manage applications on cloud platforms like AWS, Azure, or GCP.
- Ensure security, multi-tenancy, and cloud scaling for Nebverse.
- Train and mentor interns on development best practices.

In terms of technical skills, you should be proficient in backend technologies (Go Fiber/FastAPI), frontend technologies (React/Next.js), AI & NLP tools (DeepSeek AI, OpenAI), databases (PostgreSQL, Redis, ClickHouse), DevOps and cloud tooling (AWS/Azure/GCP, Docker, Kubernetes), security and authentication mechanisms (OAuth, JWT, role-based access control), and version control and CI/CD tools (GitHub, GitLab, Jenkins). Soft skills such as strong leadership, mentoring ability, problem solving, analytical thinking, effective communication, documentation skills, and the ability to manage and collaborate with interns and junior developers are essential for success in this role.

If you are excited about working on cutting-edge AI & SaaS technologies, being part of a revolutionary ERP system, and leading and mentoring future tech talent, we encourage you to apply by sending your resume and portfolio to hr@qualitywebs.in with the subject line: Senior Developer - Nebverse. We offer a competitive salary and growth opportunities.
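To make the security requirements above concrete, here is a minimal FastAPI sketch of JWT-based role checking, one plausible shape for the OAuth/JWT/RBAC stack named in the posting. The secret, claim names ("roles", "tenant_id"), and route are all invented for illustration, and the sketch assumes the PyJWT library.

```python
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
SECRET_KEY = "change-me"  # placeholder; load from a secret store in practice


def require_role(role: str):
    # Dependency factory: decode the bearer JWT and check a "roles" claim.
    def checker(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
        try:
            claims = jwt.decode(creds.credentials, SECRET_KEY, algorithms=["HS256"])
        except jwt.PyJWTError:
            raise HTTPException(status_code=401, detail="Invalid token")
        if role not in claims.get("roles", []):
            raise HTTPException(status_code=403, detail="Insufficient role")
        return claims

    return checker


@app.get("/admin/tenants")
def list_tenants(claims: dict = Depends(require_role("admin"))):
    # Placeholder handler; a real multi-tenant ERP would scope data by tenant.
    return {"tenant": claims.get("tenant_id"), "ok": True}
```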

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world's largest networks, powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have their web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine's Top Company Cultures list and ranked among the World's Most Innovative Companies by Fast Company.

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us!

Available Locations: Bengaluru

About The Role

We are looking for a talented Distributed Systems Engineer to join the Data Localization team. The team is building technologies that allow our customers to harness Cloudflare's massive and performant network to meet their federal data sovereignty standards as well as their internal regionalization policies. You will work on a range of microservices written mainly in Rust and Go. Technologies we use include Go, PostgreSQL, Docker, Kubernetes, Clickhouse and the usual Unix/Linux tools and workflows. We strive to build reliable, fault-tolerant systems that can operate at Cloudflare's scale.

Role Requirements (Must-Have Skills):

- 3+ years of relevant professional experience with a technology company
- Experience writing high-quality code in one or more systems programming languages (Go, C/C++)
- Experience building and operating distributed systems
- Experience with modern Unix/Linux development and runtime environments
- Excellent debugging skills and attention to detail
- Good understanding of TCP/IP and networking in general

Examples of desirable skills, knowledge and experience:

- Experience working with Go and/or Rust
- Working knowledge of SQL and relational databases such as PostgreSQL
- Designing and building APIs

What Makes Cloudflare Special

We're not just a highly ambitious, large-scale technology company. We're a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: Since 2014, we've equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work - technology already used by Cloudflare's enterprise customers - at no cost.

Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we've provided services to more than 425 local government election websites in 33 states.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here's the deal: we don't store client IP addresses - never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you'd like to be a part of? We'd love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at [HIDDEN TEXT] or via mail at 101 Townsend St. San Francisco, CA 94107.

Posted 2 weeks ago

Apply