4.0 - 8.0 years
0 Lacs
Karnataka
On-site
You will be joining a Bangalore/San Francisco based networking startup focused on enhancing network observability and co-pilot systems to increase network reliability and decrease response time for customers. The founding team has a combined 45 years of experience in the networking industry. In this role as a Web Backend Engineer - SDE-2, you will be instrumental in the design, development, and maintenance of the back-end systems and APIs that drive our network observability and co-pilot platform. Your responsibilities will include creating scalable, secure, and high-performance web services that meet the demanding needs of enterprise clients. Your key responsibilities will involve designing and developing robust, scalable back-end APIs with low-latency response times using appropriate technologies. You will also implement enterprise-grade authentication and authorization mechanisms to ensure platform security and integration with enterprise clients. Additionally, you will integrate all APIs with an API Gateway to enforce security policies, manage traffic, monitor performance, and maintain fine-grained control. Furthermore, you will be responsible for ensuring compliance with third-party audits (SOC 2, ISO 27001), implementing security best practices, and designing back-end systems suitable for deployment through CI/CD pipelines to facilitate smooth updates and feature rollouts. Using Application Performance Monitoring (APM), you will analyze performance insights, identify bottlenecks, and implement optimizations proactively. You will also design and implement access controls and data protection mechanisms to safeguard customer data and ensure regulatory compliance. Moreover, you will mentor and guide junior engineers, conduct code reviews, and contribute to the growth of the team. To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science or a related field and possess 4 to 7 years of experience building scalable back-end web services. You should have a strong command of at least one major back-end programming language (such as Python, Java, Go, or Rust) and one or more web frameworks. Experience with RESTful or GraphQL APIs, gRPC, enterprise-grade authentication and authorization mechanisms, API Gateways, security protocols, CI/CD tools, monitoring systems, and database systems is essential. Additionally, knowledge of architectural design patterns, domain-driven design, and microservices, along with excellent problem-solving and analytical skills, will be beneficial in this role.
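As an illustration of the enterprise authentication work this role describes, here is a minimal sketch of JWT bearer-token verification as a FastAPI dependency. This is not the startup's actual code: the signing secret, audience, and scope names are hypothetical placeholders.

```python
# Minimal sketch: JWT bearer-token verification as a FastAPI dependency.
# The signing secret, audience, and scope names are hypothetical.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
SECRET = "change-me"  # placeholder; use a KMS/JWKS-backed key in production

def current_claims(
    creds: HTTPAuthorizationCredentials = Depends(bearer),
) -> dict:
    try:
        # Verify signature, expiry, and audience before trusting any claim.
        return jwt.decode(
            creds.credentials,
            SECRET,
            algorithms=["HS256"],
            audience="observability-api",
        )
    except jwt.PyJWTError as exc:
        raise HTTPException(status_code=401, detail="invalid token") from exc

@app.get("/v1/devices")
def list_devices(claims: dict = Depends(current_claims)):
    # Fine-grained authorization: require a scope before serving data.
    if "devices:read" not in claims.get("scope", "").split():
        raise HTTPException(status_code=403, detail="missing scope")
    return {"devices": [], "tenant": claims.get("tenant_id")}
```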
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Backend Engineer, you will be an integral part of our dynamic team at the forefront of cutting-edge technology. Your deep-rooted expertise in computer science fundamentals will play a crucial role in developing innovative solutions. You will be responsible for architecting, refining, and scaling complex backend systems using Python. Your focus will be on efficiency, durability, and scale, ensuring peak performance and unwavering reliability of applications. Your key responsibilities will include elevating application performance by optimizing for speed, scalability, and resource allocation. You will forge robust methodologies to manage high concurrency and vast data volumes, setting new industry benchmarks. Collaborating closely with engineering and product peers, you will translate requirements into resilient, scalable architectures. Your proficiency with advanced storage solutions and databases such as Redis, PostgreSQL, and ClickHouse will be crucial in enhancing system integrity. You will champion coding excellence, testing rigor, and deployment precision, driving best practices across the development lifecycle. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A minimum of 6 years of experience in backend development with Python in a production environment is required. Proven experience in scaling compute- and I/O-intensive applications, a strong foundation in computer science, and a deep understanding of data structures, algorithms, and system design principles are essential. Experience in handling concurrent requests at scale and optimizing large-scale systems for performance and reliability is a must. Familiarity with database technologies such as Redis, PostgreSQL, and ClickHouse is also expected. Any experience in the financial sector, particularly in developing fintech applications or systems, will be considered a plus. A solid understanding of the software development life cycle and continuous integration and continuous delivery (CI/CD) practices is required. Excellent problem-solving abilities and strong communication skills will be essential for success in this role.
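One concrete pattern behind the "high concurrency and vast data volumes" requirement is bounding fan-out with a semaphore so the service never overwhelms a downstream store. A minimal sketch using asyncio; the function and variable names are hypothetical, and the sleep stands in for a Redis/PostgreSQL/HTTP call.

```python
# Minimal sketch: bounding concurrency when fanning out I/O-bound work.
import asyncio

async def fetch_record(record_id: int) -> dict:
    await asyncio.sleep(0.01)  # stand-in for a Redis/PostgreSQL/HTTP call
    return {"id": record_id}

async def fetch_all(ids: list[int], max_concurrency: int = 100) -> list[dict]:
    sem = asyncio.Semaphore(max_concurrency)  # cap in-flight requests

    async def bounded(record_id: int) -> dict:
        async with sem:
            return await fetch_record(record_id)

    return await asyncio.gather(*(bounded(i) for i in ids))

if __name__ == "__main__":
    results = asyncio.run(fetch_all(list(range(1000))))
    print(len(results))  # 1000
```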
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us: We are a fast-growing Direct-to-Consumer (D2C) company revolutionizing how customers interact with our products. Our data-driven approach is at the core of our business strategy, enabling us to make informed decisions that enhance customer experience and drive business growth. We're looking for a talented Senior Data Engineer to join our team and help shape our data infrastructure for the future.

Role Overview: As a Senior Data Engineer, you will architect, build, and maintain the data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Key Responsibilities:
- Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources
- Build and support data pipelines for reporting, analytics, and machine learning applications
- Implement and manage streaming data solutions using Kafka and other technologies
- Design and optimize database schemas and data models in ClickHouse and other databases
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools (see the sketch below)
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks
- Collaborate with data scientists to implement ML infrastructure for model training and deployment
- Ensure data quality, reliability, and security across all data platforms
- Monitor data pipelines and implement proactive alerting systems
- Troubleshoot and resolve data infrastructure issues
- Document data flows, architectures, and processes
- Mentor junior data engineers and contribute to establishing best practices
- Stay current with industry trends and emerging technologies in data engineering

Qualifications

Required:
- Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred)
- 5+ years of experience in data engineering roles
- Strong expertise in AWS and/or GCP cloud platforms and services
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks
- Experience with stream processing technologies such as Kafka
- Hands-on experience with ClickHouse or similar analytical databases
- Strong programming skills in Python and experience with PySpark
- Experience with workflow orchestration tools like Apache Airflow
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling
- Knowledge of SQL and NoSQL databases
- Strong problem-solving skills and attention to detail
- Excellent communication skills and ability to work in cross-functional teams

Preferred:
- Experience in D2C, e-commerce, or retail industries
- Knowledge of data visualization tools (Tableau, Looker, Power BI)
- Experience with real-time analytics solutions
- Familiarity with CI/CD practices for data pipelines
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of data governance and compliance requirements
- Experience with MLOps or ML engineering

Technologies:
- Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc)
- Data Processing: Apache Spark, PySpark, Python, SQL
- Streaming: Apache Kafka, Kinesis
- Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB
- Orchestration: Apache Airflow
- Version Control: Git
- Containerization: Docker, Kubernetes (optional)

What We Offer:
- Competitive salary and comprehensive benefits package
- Opportunity to work with cutting-edge data technologies
- Professional development and learning opportunities
- Modern office in Mumbai with great amenities
- Collaborative and innovation-driven culture
- Opportunity to make a significant impact on company growth
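As a concrete illustration of the orchestration item above, here is a minimal sketch of an Airflow 2.x DAG wiring an extract, transform, and load step. The DAG id, schedule, and task bodies are hypothetical placeholders, not this company's actual pipeline.

```python
# Minimal sketch of an Airflow DAG wiring extract -> transform -> load.
# Task bodies are placeholders for real PySpark/Kafka/ClickHouse work.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw events from S3/Kafka")

def transform(**_):
    print("clean and aggregate with PySpark")

def load(**_):
    print("write results to ClickHouse/BigQuery")

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```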
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
You will play a crucial role as a Web Backend Engineer - SDE-2 in our Bangalore/San Francisco based networking startup. Your main responsibility will be designing, developing, and maintaining the back-end systems and APIs that power our network observability and co-pilot platform. You will need to ensure that the web services you build are scalable, secure, and high-performance to meet the needs of enterprise customers. Your key responsibilities will include designing and implementing robust and scalable back-end APIs with low-latency response times using appropriate technologies. You will also be in charge of implementing enterprise-grade authentication and authorization mechanisms to ensure platform security and seamless adoption by enterprise clients. Additionally, you will need to integrate all APIs with an API Gateway to enforce security policies, manage traffic, monitor performance, and ensure fine-grained control. Another important aspect of your role will be ensuring compliance with third-party audits (SOC 2, ISO 27001) and implementing security best practices (OWASP Top 10). You will design and implement a back-end system that can be deployed using CI/CD pipelines to enable seamless updates and deployment of new features with minimal disruption. Using Application Performance Monitoring (APM), you will analyze performance insights, identify bottlenecks, and implement necessary optimizations proactively. Moreover, you will design and implement proper access controls and data protection mechanisms to safeguard customer data and ensure compliance with relevant regulations. As a senior member of the team, you will also mentor and guide junior engineers and conduct code reviews. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science or a related field with 4 to 7 years of experience in building scalable back-end web services. You must possess strong proficiency in at least one major back-end programming language (e.g., Python, Java, Go, Rust) and one or more web frameworks. Experience with building and consuming RESTful or GraphQL APIs and gRPC, and implementing enterprise-grade authentication and authorization mechanisms, is required. Hands-on experience with API Gateways, a strong grasp of security protocols, CI/CD tools, and monitoring systems, as well as knowledge of database systems and data modeling, are also essential. A solid understanding of architectural design patterns, domain-driven design, and microservices, along with excellent problem-solving and analytical skills, will be beneficial in this role.
Posted 3 days ago
0.0 - 3.0 years
0 Lacs
Chandigarh
On-site
As a Python Backend Developer at Lookfinity, you will be part of the Backend Engineering team focused on building scalable, data-driven, and cloud-native applications to solve real-world business problems. We are dedicated to maintaining clean architecture, enhancing performance, and designing elegant APIs. Join our dynamic team that is enthusiastic about backend craftsmanship and modern infrastructure. You will be working with a tech stack that includes languages and frameworks such as Python, FastAPI, and GraphQL (Ariadne); databases like PostgreSQL, MongoDB, and ClickHouse; messaging and task queues such as RabbitMQ and Celery; cloud services like AWS (EC2, S3, Lambda), Docker, and Kubernetes; data processing tools like Pandas and SQL; and monitoring and logging tools like Prometheus and Grafana. Additionally, you will use version control systems like Git, GitHub/GitLab, and CI/CD tools. Your responsibilities will include developing and maintaining scalable RESTful and GraphQL APIs using Python, designing and integrating microservices with databases, writing clean and efficient code following best practices, working with Celery and RabbitMQ for async processing (see the sketch below), containerizing services using Docker, collaborating with cross-functional teams, monitoring and optimizing application performance, participating in code reviews, and contributing to team knowledge-sharing. We are looking for candidates with 6 months to 1 year of hands-on experience in backend Python development, a good understanding of FastAPI or willingness to learn, basic knowledge of SQL and familiarity with databases like PostgreSQL and/or MongoDB, exposure to messaging systems like RabbitMQ, familiarity with cloud platforms like AWS, an understanding of Docker and containerization, curiosity about learning new technologies, clear communication skills, team spirit, and an appreciation for clean code. Additional experience with GraphQL APIs, Kubernetes, data pipelines, CI/CD processes, and observability tools is considered a bonus. In this role, you will have the opportunity to work on modern backend systems, receive mentorship, and follow technical growth plans tailored to your career goals. This is a full-time position with a day shift, located in Panchkula. Join us at Lookfinity and be a part of our innovative team dedicated to backend development.
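A minimal sketch of the Celery-plus-RabbitMQ pattern the role mentions: a FastAPI endpoint enqueues a slow job and returns immediately. The broker URL, task body, and names are hypothetical.

```python
# Minimal sketch: a FastAPI endpoint handing slow work to a Celery worker
# over RabbitMQ, so the API process stays responsive.
from celery import Celery
from fastapi import FastAPI

app = FastAPI()
worker = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

@worker.task
def generate_report(account_id: int) -> None:
    # Heavy aggregation runs in a Celery worker, not the API process.
    ...

@app.post("/reports/{account_id}")
def enqueue_report(account_id: int):
    result = generate_report.delay(account_id)  # returns immediately
    return {"task_id": result.id, "status": "queued"}
```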
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box - we're looking for candidates who are particularly strong in a few areas and have some interest and capabilities in others. You will design, develop, and maintain microservices that power Kong Konnect, the Service Connectivity Platform. Working closely with Product Management and teams across Engineering, you will develop software that has a direct impact on our customers' business and Kong's success. This opportunity is hybrid (Bangalore based) with 3 days in the office and 2 days work from home. You will implement and maintain services that power high-bandwidth logging and tracing for our cloud platform, such as indexing and searching logs and traces of API requests powered by Kong Gateway and Kuma Service Mesh (see the sketch below). You will implement efficient solutions at scale using distributed and multi-tenant cloud storage and streaming systems, build cloud systems that are resilient to regional and zonal outages, participate in an on-call rotation to support services in production, ensuring high performance and reliability, write and maintain automated tests to ensure code integrity and prevent regressions, mentor other team members, and undertake additional tasks as assigned by the manager. Requirements: 5+ years working in a team to develop, deliver, and maintain complex software solutions; experience in log ingestion, indexing, and search at scale; excellent verbal and written communication skills; proficiency with OpenSearch/Elasticsearch and other full-text search engines; experience with streaming platforms such as Kafka, AWS Kinesis, etc.; operational experience in running large-scale, high-performance internet services, including on-call responsibilities; experience with the JVM and languages such as Java and Scala; experience with AWS and cloud platforms for SaaS teams; experience designing, prototyping, building, monitoring, and debugging microservices architectures and distributed systems; understanding of cloud-native systems like Kubernetes, GitOps, and Terraform; and a Bachelor's or Master's degree in Computer Science. Bonus points if you have experience with columnar stores like Druid/ClickHouse/Pinot, working on new products/startups, contributing to Open Source Software projects, or working on or developing L4/L7 proxies such as Nginx, HAProxy, Envoy, etc. Kong is THE cloud native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). Loved by developers and trusted with enterprises' most critical traffic volumes, Kong helps startups and Fortune 500 companies build with confidence, allowing them to bring solutions to market faster with API and service connectivity that scales easily and securely. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud. Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.
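For flavor, here is a minimal sketch of indexing and searching API-request logs with the opensearch-py client. Python is used only to keep this document's examples in one language (the role itself centers on JVM languages), and the host, index, and field names are hypothetical.

```python
# Minimal sketch: index one API-request log document, then search for
# slow 5xx responses. Host, index, and document shape are hypothetical.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Index a single request log.
client.index(
    index="api-logs-2024.01.01",
    body={"service": "kong-gateway", "status": 502, "latency_ms": 1240},
)

# Search for slow errors in the day's index.
hits = client.search(
    index="api-logs-2024.01.01",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"status": 502}},
                    {"range": {"latency_ms": {"gte": 1000}}},
                ]
            }
        }
    },
)
print(hits["hits"]["total"])
```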
Posted 3 days ago
2.0 - 5.0 years
3 - 8 Lacs
Chennai
Work from Office
Role & responsibilities
- Extract, clean, and analyze large data sets using SQL
- Design and build dashboards and reports using Power BI
- Knowledge of Python, dbt, Git, ClickHouse, Postgres & Airflow is optional
- Translate business problems into data questions and analytics models
- Partner with business stakeholders to identify KPIs and track performance metrics
- Develop predictive and prescriptive models to support decision-making
- Present findings and actionable insights to senior leadership
- Ensure data quality, consistency, and governance standards are maintained
- Mentor junior analysts or interns as needed

Preferred candidate profile
- Bachelor's or Master's degree in Statistics, Data Science, Computer Science, Mathematics, Economics, or a related field
- 2+ years of experience in data analytics, business intelligence, or data science roles
- Strong proficiency in SQL, Excel, and data visualization tools (e.g., Power BI)
- Good understanding of databases, ETL pipelines, and cloud data environments (e.g., AWS, GCP, Azure)
- AI & ML knowledge will be an added advantage
- Excellent communication and storytelling skills
- Strong problem-solving skills and attention to detail

Interested parties, please forward your full resume to career.india@emiratesline.com
Posted 4 days ago
4.0 - 6.0 years
8 - 13 Lacs
Pune, Ahmedabad, Surat
Work from Office
Role & responsibilities
- Design, develop, and maintain REST APIs using .NET Core, OpenAPI, and Node.js
- Apply SOLID principles, design patterns, and best practices across application design
- Participate in architectural discussions involving SOA, Microservices, Event-driven, and Serverless patterns
- Integrate secure authentication/authorization via Azure B2C, IdentityServer, Keycloak, or equivalent
- Implement message-driven systems using RabbitMQ, Azure Service Bus, or Kafka
- Implement robust micro-frontend solutions using React.js, AngularJS, Next.js, TypeScript, Webpack, and Storybook
- Develop secure backend services including Windows Services, JWT-based authentication, and entity/data access layers using Entity Framework and LINQ
- Architect and maintain real-time features using SignalR, Socket.io, and Twilio
- Work with MongoDB, MS SQL, and Redis for scalable data storage solutions
- Translate UI/UX designs using HTML, CSS, Bootstrap, Tailwind, MUI, and Ant Design
- Manage codebases using Git, Azure Git, and GitHub integrated with Azure DevOps/Jenkins CI/CD pipelines
- Leverage Azure services such as Functions, Blob Storage, API Management, Web Apps, and Azure AI Services

Preferred candidate profile
- Strong analytical and reasoning abilities
- Effective communication with technical and non-technical stakeholders
- Ownership and accountability for deliverables
- Good documentation and debugging discipline
- Team-first, mentor mindset

Nice To Have
- Experience in distributed systems or enterprise-scale software
- Exposure to AI/ML APIs in Azure or OpenAI
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Delhi
On-site
You will be responsible for helping build production-grade systems based on ClickHouse. This includes providing advice on designing schemas, planning clusters, and working on environments ranging from single-node setups to clusters with hundreds of nodes. You will also work on cloud-managed ClickHouse services and infrastructure projects related to ClickHouse. Additionally, you will be involved in improving ClickHouse itself by fixing bugs, enhancing documentation, creating test cases, and studying new usage patterns, ClickHouse functions, and integration with other products. Your tasks will include installing multi-node clusters, configuration, backup and recovery, and maintenance of ClickHouse databases (see the sketch below). Monitoring and optimizing database performance to ensure high availability and responsiveness will be crucial. You will troubleshoot database issues, identify and resolve performance bottlenecks, design and implement database backup and recovery strategies, and develop database security policies and procedures. Collaboration with development teams to optimize database schema design and queries will be essential. You are expected to provide technical guidance and support to development and operations teams. Experience with big data stack components such as Hadoop, Spark, Kafka, and NiFi, as well as data science and data analysis, will be beneficial. Knowledge of SRE/DevOps stacks, monitoring/system management tools like Prometheus, Ansible, and ELK, and version control using Git is required. Handling support calls from customers using ClickHouse, which includes diagnosing problems, designing applications, deploying/upgrading ClickHouse, and operations, will also be part of your responsibilities.
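A minimal sketch of two routine tasks the posting describes, using the clickhouse-connect client: creating a ReplicatedMergeTree table and checking replica health. The cluster name, macros, and schema are hypothetical and assume ZooKeeper/Keeper replication is already configured.

```python
# Minimal sketch: replicated schema DDL plus a replica health check.
# Cluster name, macros, and table are hypothetical placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

client.command("""
    CREATE TABLE IF NOT EXISTS events ON CLUSTER my_cluster
    (
        ts   DateTime,
        user UInt64,
        url  String
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
    PARTITION BY toYYYYMM(ts)
    ORDER BY (user, ts)
""")

# Routine health check: replicas that are read-only or lagging.
rows = client.query(
    "SELECT database, table, is_readonly, absolute_delay FROM system.replicas"
).result_rows
for row in rows:
    print(row)
```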
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
At LeadSquared, we are committed to staying current with the latest technology trends and leveraging cutting-edge tech stacks to enhance our product. As a member of our engineering team, you will have the opportunity to work closely with the newest web and mobile technologies, tackling challenges related to scalability, performance, security, and cost optimization. Our primary objective is to create the industry's premier SaaS platform for sales execution, making LeadSquared an ideal place to embark on an exciting career. The role we are offering is tailored for developers with a proven track record in developing high-performance microservices using Golang, Redis, and various AWS services. Your responsibilities will include deciphering business requirements and crafting solutions that are not only secure and scalable but also high-performing and easily testable.

Key Requirements:
- A minimum of 5 years of experience in constructing high-performance APIs and services, with a preference for Golang.
- Proficiency in working with data streams such as Kafka or AWS Kinesis (see the sketch below).
- Hands-on experience with large-scale enterprise applications while adhering to best practices.
- Strong troubleshooting and debugging skills, coupled with the ability to design and create reusable, maintainable, and easily debuggable applications.
- Proficiency in Git is essential.

Preferred Skills:
- Familiarity with Kubernetes and microservices.
- Experience with OLAP databases/data warehouses like ClickHouse or Redshift.
- Experience in developing and deploying applications on the AWS platform.

If you are passionate about cutting-edge technologies, eager to tackle challenging projects, and keen on building innovative solutions, then this role at LeadSquared is the perfect opportunity for you to excel and grow in your career.
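A minimal sketch of consuming a data stream with kafka-python. The role prefers Golang; Python is used here only to keep this document's examples in one language, and the topic and group names are hypothetical.

```python
# Minimal sketch: a consumer-group member reading JSON events from Kafka.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "lead-activity",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="scoring-service",           # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real service would update Redis-backed counters and hand the
    # event to downstream processors here.
    print(message.topic, message.partition, message.offset, event)
```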
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Faridabad, Haryana
On-site
We are looking for a seasoned Full Stack Engineer with 3-4 years of experience who can excel in a startup environment and contribute significantly to our scaling efforts. As a Full Stack Engineer, you will work across the stack, from creating intuitive user interfaces to designing robust backend systems and integrating advanced data solutions. Your influence on key architectural decisions, optimization of performance, and use of AI-driven approaches to solve complex problems will be pivotal. If you thrive in a fast-paced setting and are passionate about building scalable products, we are interested in hearing from you.

Success in this role will be determined by your ability to deliver high-quality, maintainable, and scalable products capable of handling rapid growth. You will play a key role in ensuring seamless user experiences, solid backend performance, and secure data management. By proactively tackling technical challenges, enhancing code quality, and mentoring junior engineers, you will have a direct impact on both our product offering and the overall team's efficiency.

Collaboration is essential as a Full Stack Engineer, working closely with product managers, designers, DevOps engineers, and data analysts to develop features that address real customer needs. Your work will directly influence product evolution, positioning us for long-term success as we expand into new markets, scale existing solutions, and incorporate cutting-edge AI into our applications.

**Responsibilities:**

**Frontend:**
- Develop responsive, intuitive interfaces using HTML, CSS (SASS), React, and Vanilla JS.
- Implement real-time features using sockets for dynamic, interactive user experiences (see the WebSocket sketch after this listing).
- Collaborate with designers to ensure consistent UI/UX patterns and deliver visually compelling products.

**Backend:**
- Design, implement, and maintain APIs using Python (FastAPI).
- Integrate AI-driven features to enhance user experience and streamline processes.
- Ensure code adherence to best practices in performance, scalability, and security.
- Troubleshoot and resolve production issues to minimize downtime and enhance reliability.

**Database & Data Management:**
- Work with PostgreSQL for relational data, focusing on optimal queries and indexing.
- Utilize ClickHouse or MongoDB for specific data workloads and analytics needs.
- Contribute to the development of dashboards and tools for analytics and reporting.
- Apply AI/ML concepts to derive insights from data and improve system performance.

**General:**
- Utilize Git for version control; conduct code reviews, ensure clean commit history, and maintain robust documentation.
- Collaborate with cross-functional teams to deliver features aligned with business goals.
- Stay updated with industry trends, especially in AI and emerging frameworks, to enhance the platform.
- Mentor junior engineers and contribute to continuous improvement in team processes and code quality.

**Qualifications:**

**Required:**
- 3-4 years of full-stack development experience in a startup or scaling environment.
- Proficiency in frontend technologies: HTML, CSS (SASS), React, Vanilla JS.
- Strong backend experience with Python (FastAPI).
- Solid understanding of relational databases (PostgreSQL) and performance optimization.
- Experience with sockets for real-time applications.
- Familiarity with integrating AI or ML-powered features.
- Strong problem-solving abilities, attention to detail, and effective communication skills.

**Ideal:**
- Exposure to Webpack, Handlebars, and GCP services.
- Experience in building dashboards and analytics tools.
- Knowledge of ClickHouse and MongoDB for specialized workloads.
- Prior experience with video calls, AI chatbots, or widgets.
- Understanding of cloud environments, deployment strategies, and CI/CD pipelines.
- Ability to leverage AI/ML frameworks and tools (e.g., TensorFlow, PyTorch) to improve product features.

**Preferred but Not Mandatory:**
- Advanced experience in AI-driven optimizations like personalized user experiences, predictive analytics, and automated decision-making.
- Familiarity with analytics and monitoring tools for performance tracking.
- Prior exposure to a high-growth startup environment, meeting rapid iteration and scaling demands.

**Our Process:**
- Upon shortlisting, you will receive a project assignment with a one-week deadline.
- Successful projects will proceed with two rounds of interviews.
- If not shortlisted, feedback will be provided to ensure transparency and respect for your time.

**Why Join Us?**
- Work with cutting-edge technologies, including AI-driven solutions, in a rapidly scaling environment.
- Be part of a collaborative and inclusive team valuing impact, ownership, and growth.
- Continuous learning and professional development opportunities.
- Competitive compensation and benefits aligned with your experience and contributions.

If you are passionate about technology, enjoy solving complex problems, and are eager to contribute to the next phase of a scaling product, apply now and be part of our journey!
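A minimal sketch of the "sockets for real-time features" requirement referenced above: a FastAPI WebSocket endpoint that fans incoming messages out to every connected client. The route and variable names are hypothetical.

```python
# Minimal sketch: a WebSocket broadcast endpoint in FastAPI.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
connections: list[WebSocket] = []  # in-memory registry; one process only

@app.websocket("/ws/updates")
async def updates(ws: WebSocket):
    await ws.accept()
    connections.append(ws)
    try:
        while True:
            text = await ws.receive_text()
            # Fan the message out to every connected client.
            for peer in connections:
                await peer.send_text(text)
    except WebSocketDisconnect:
        connections.remove(ws)
```

In production this registry would live in Redis or a pub/sub layer so broadcasts work across multiple worker processes.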
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Kochi
Work from Office
Role Overview: As a Java Developer, you will be responsible for designing, developing, and maintaining our high-quality Java-based backend applications. You will work closely with our cross-functional development team to ensure the efficient and reliable delivery of our products and services.

Responsibilities
- Design, develop, and maintain robust and scalable Java backend applications.
- Collaborate with the development team to analyze requirements and translate them into technical specifications.
- Write clean, efficient, and well-documented code.
- Conduct unit testing and integration testing to ensure code quality.
- Optimize application performance and scalability.
- Troubleshoot and resolve technical issues.
- Stay up-to-date with the latest Java technologies and industry trends.

Required education: Bachelor's Degree

Required technical and professional expertise
- At least 3 years of hands-on experience with Java backend development.
- Strong understanding of object-oriented programming principles and design patterns.
- Experience working with Cloud Native environments and platforms (e.g., AWS, GCP, Azure).
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Experience with data processing frameworks (e.g., Kafka, ClickHouse) is a plus.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Preferred technical and professional experience
- Experience with high-volume data processing and distributed systems.
- Knowledge of microservices architecture.
- Familiarity with DevOps practices and tools (e.g., CI/CD pipelines, version control).
- Hands-on experience with distributed tracing and application performance monitoring.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a member of SolarWinds, you will be part of a people-first company dedicated to enriching the lives of employees, customers, shareholders, partners, and communities. Our mission is to assist customers in accelerating business transformation through simple, powerful, and secure solutions. We are seeking an ideal candidate who thrives in an innovative, fast-paced environment and embodies qualities such as collaboration, accountability, readiness, and empathy. We value individuals who believe in achieving more as a team and fostering sustainable growth. At SolarWinds, we prioritize attitude, competency, and commitment in our hiring process. Join our team of Solarians who are ready to drive our world-class solutions forward and embrace the opportunity to lead with purpose. If you are looking to advance your career with an exceptional team, SolarWinds is the perfect place for you to grow professionally. Join the Observability Platform team at SolarWinds, where we focus on developing core services and APIs for our next-generation observability products. Specifically, the Telemetry & Data APIs team concentrates on creating scalable APIs and backend systems that enable internal teams and customers to access, query, and analyze large amounts of telemetry data in real time. We are currently looking for a Senior Software Engineer to join our Telemetry and Data APIs team. In this role, you will be responsible for building scalable APIs and services that drive customer-facing telemetry features within our platform. This position is ideal for individuals who are passionate about working with data-heavy systems, API design, and optimizing data queries rather than traditional ETL or pure data science tasks. Your responsibilities will include designing and maintaining systems that ingest, process, and present telemetry data (metrics, logs, traces) through well-crafted APIs, empowering customers to efficiently interpret and act upon their data. Key Responsibilities: - Design, develop, and maintain REST and GraphQL APIs to provide customers access to telemetry data. - Write and optimize high-performance telemetry data retrieval queries in Clickhouse. - Develop scalable backend services using Java or Kotlin with Spring Boot. - Collaborate with product and front-end teams to deliver user-friendly telemetry features. - Ensure that systems are observable, reliable, secure, and easy to operate in production. - Participate in code reviews and design discussions, providing mentorship where necessary. Qualifications: - 5+ years of software engineering experience focusing on building scalable backend services. - Proficiency in Java or Kotlin with experience in Spring/Spring Boot frameworks. - Hands-on experience in designing and building RESTful and/or GraphQL APIs. - Ability to write and optimize SQL queries, with Clickhouse experience being a plus. - Familiarity with TypeScript/JavaScript and capability to navigate front-end code if required. - Understanding of cloud environments (AWS, Azure, GCP) and container orchestration (Kubernetes). - Strong grasp of system design, data structures, and algorithms. Preferred Qualifications: - Experience with time-series data, telemetry systems, or observability platforms. - Exposure to GraphQL server implementation and schema design. - Previous involvement in SaaS environments with high-scale data workloads. - Familiarity with modern CI/CD practices and DevOps tooling. 
All applications will be handled in compliance with the SolarWinds Privacy Notice available at: https://www.solarwinds.com/applicant-privacy-notice
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You will be working on a low-latency trading middleware for Solana, focusing on a high-performance transaction broadcasting engine used by top trading firms and validators. The current system handles 2M+ transactions per day, but only 0.01% of these transactions are successful, presenting a challenge that you will be tasked with solving. Your main mission will be to understand and optimize the transaction pipeline. You will investigate why transactions fail or are slow to process, identify patterns that contribute to success or failure across geographies or clients, and explore ways to enhance the speed of our infrastructure down to the microsecond level. You will be working with various technologies, including ClickHouse for raw request logs, Postgres for transaction outcomes and metadata, Grafana and server logs for validator and router performance, and our geo-distributed validator and router stack. Your responsibilities will include analyzing and improving user-flow landing rates, optimizing performance across endpoints, network paths, and validators, reducing infrastructure costs by improving orchestration and utilization, reverse-engineering competitors and creating benchmarks, and suggesting innovative strategies to monetize traffic, bundles, and client insights.
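A minimal sketch of the kind of exploratory analysis this mission implies: computing landing rates per region from request logs in ClickHouse. The table and column names are invented for illustration, not the team's actual schema.

```python
# Minimal sketch: landing rate by region from hypothetical ClickHouse logs.
import clickhouse_connect

ch = clickhouse_connect.get_client(host="localhost")

result = ch.query("""
    SELECT
        region,
        count() AS sent,
        countIf(landed = 1) AS landed_count,
        round(landed_count / sent * 100, 4) AS landing_rate_pct
    FROM tx_requests
    GROUP BY region
    ORDER BY landing_rate_pct ASC
""")

# Worst-performing regions first: candidates for routing changes.
for region, sent, landed_count, rate in result.result_rows:
    print(f"{region}: {landed_count}/{sent} landed ({rate}%)")
```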
Posted 2 weeks ago
7.0 - 12.0 years
25 - 30 Lacs
Bengaluru
Remote
As a Database Administrator - Senior, you will be responsible for:
- Supporting day-to-day operations with large-scale, highly available, reliable, high-performing databases hosted in distributed datacenters
- Analyzing and improving the existing database architecture
- Being a liaison for database-related problems between operations, architecture, development, and QA groups
- Maintaining high availability of DB infrastructure
- Managing orchestration and configuration of DB infrastructure
- Solving issues with Linux systems and cloud infrastructure housing our databases, with SA help
- Creating and maintaining documentation that is relevant to your role and duties

Nice to have:
- Familiarity with ClickHouse in the areas of database administration
- Willingness to learn and become proficient with ClickHouse and all DBA duties associated with it

Do what you love. To be successful in this role you will:
- Have 8 years of relevant experience and a Bachelor's degree in Computer Science or equivalent
- Have substantial DBA experience with deployment, configuration, management, and support of highly available systems
- Show proficiency in all aspects of database administration including install, setup, troubleshooting, backup and recovery, replication, and proactive monitoring
- Have working experience with SQL and NoSQL databases like Cassandra, MongoDB, MySQL, and Redis, as well as best practices and data modeling
- Have experience with installation, configuration, capacity management, and administration of various NoSQL and RDBMS databases
- Have experience with Linux system administration as needed for DBA work
- Be familiar with scripting languages such as Python or shell
- Be familiar with CI/CD, Docker, and version control systems such as Git and Perforce
- Be a good team player, able to collaborate with people from across the business
- Be willing to expand your knowledge of other database technologies, with a passion for pushing the limits
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
At SolarWinds, the company prioritizes the well-being of its people. The purpose of SolarWinds is to enhance the lives of all individuals it serves, including employees, customers, shareholders, partners, and communities. By joining SolarWinds, you become part of a mission to assist customers in accelerating business transformation through simple, robust, and secure solutions. The ideal candidate for this position excels in an innovative and fast-paced setting, demonstrating traits of collaboration, accountability, readiness, and empathy. SolarWinds values individuals who believe in achieving more as a team, fostering growth for themselves and others. The company's hiring criteria are based on attitude, competency, and commitment. As a Solarian, you will contribute to advancing world-class solutions in a dynamic environment and embrace the opportunity to lead with purpose. If you are seeking to develop your career within an exceptional team, SolarWinds is the perfect place for you to thrive and evolve. The Observability Platform team at SolarWinds is responsible for developing the fundamental services and APIs that drive the company's next-generation observability products. Specifically, the Telemetry & Data APIs team focuses on constructing scalable APIs and backend systems that enable internal teams and customers to access, query, and analyze large amounts of telemetry data in real-time. We are currently seeking a Senior Software Engineer to join our Telemetry and Data APIs team. In this role, you will be tasked with building scalable APIs and services that support customer-facing telemetry features within our platform. This position is ideal for individuals who enjoy working with data-intensive systems, designing APIs, and optimizing data queries, rather than those seeking traditional ETL or pure data science roles. Your responsibilities will include designing and maintaining systems that ingest, process, and expose telemetry data (such as metrics, logs, and traces) through well-crafted APIs, empowering customers to efficiently comprehend and act on their data. **What You'll Do:** - Design, develop, and maintain REST and GraphQL APIs to provide customers access to telemetry data. - Write and optimize Clickhouse queries to facilitate high-performance telemetry data retrieval. - Create scalable backend services using Java or Kotlin with Spring Boot. - Collaborate with product and front-end teams to deliver user-friendly telemetry features. - Ensure that systems are observable, reliable, secure, and easily operable in production. - Engage in code reviews and design discussions, offering mentorship where appropriate. **What We're Looking For:** - Minimum of 5 years of software engineering experience in building scalable backend services. - Proficiency in Java or Kotlin, with experience in Spring/Spring Boot frameworks. - Hands-on experience in designing and developing RESTful and/or GraphQL APIs. - Comfortable with writing and optimizing SQL queries (experience with Clickhouse is advantageous). - Familiarity with TypeScript/JavaScript and the ability to navigate front-end code if required. - Understanding of cloud environments (AWS, Azure, GCP) and container orchestration (Kubernetes). - Strong grasp of system design, data structures, and algorithms. **Nice to Have:** - Background in time-series data, telemetry systems, or observability platforms. - Exposure to GraphQL server implementation and schema design. - Experience working in SaaS environments with high-scale data workloads. 
- Familiarity with modern CI/CD practices and DevOps tooling. All applications will be handled in compliance with the SolarWinds Privacy Notice, which can be found at: [SolarWinds Privacy Notice](https://www.solarwinds.com/applicant-privacy-notice)
Posted 2 weeks ago
3.0 - 7.0 years
12 - 22 Lacs
Bengaluru
Work from Office
We are seeking a skilled and motivated Full Stack Engineer with 2.6+ years of hands-on experience in Node.js, React.js, and Next.js. You will be working on high-impact, scalable web applications as part of a fast-paced product development team.

Required Candidate profile
- Experience with any GDS is a plus
- Node.js, Express.js, React.js, Next.js, and MySQL
- Solid understanding of RESTful APIs and backend integrations
- Strong knowledge of databases such as MySQL
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Faridabad, Haryana
On-site
As a seasoned Full Stack Engineer with 3-4 years of experience, you will excel in a startup environment and play a significant role in our scaling efforts. Working across the stack, you will be involved in crafting intuitive user interfaces, designing robust backend systems, and integrating advanced data solutions. Your responsibilities will include influencing key architectural decisions, optimizing performance, and leveraging AI-driven approaches to solve complex problems. If you thrive in a fast-paced setting and are passionate about building scalable products, we are excited to hear from you.

Your success will be measured by your ability to deliver high-quality, maintainable, and scalable products capable of handling rapid growth. You will play a crucial role in ensuring seamless user experiences, solid backend performance, and secure data management. By proactively addressing technical challenges, improving code quality, and mentoring junior engineers, you will have a direct impact on both our product offering and the efficiency of the broader team.

Collaborating closely with product managers, designers, DevOps engineers, and data analysts, you will create features that address real customer needs. Your work will directly influence product evolution and position us for long-term success as we enter new markets, scale existing solutions, and incorporate cutting-edge AI into our applications.

**Responsibilities**

**Frontend:**
- Develop responsive, intuitive interfaces using HTML, CSS (SASS), React, and Vanilla JS.
- Implement real-time features using sockets for dynamic, interactive user experiences.
- Collaborate with designers to ensure consistent UI/UX patterns and deliver visually compelling products.

**Backend:**
- Design, implement, and maintain APIs using Python (FastAPI).
- Integrate AI-driven features to enhance user experience and streamline processes.
- Ensure the code adheres to best practices in performance, scalability, and security.
- Troubleshoot and resolve production issues, minimizing downtime and improving reliability.

**Database & Data Management:**
- Work with PostgreSQL for relational data, ensuring optimal queries and indexing.
- Utilize ClickHouse or MongoDB where appropriate to handle specific data workloads and analytics needs.
- Contribute to building dashboards and tools for analytics and reporting.
- Leverage AI/ML concepts to derive insights from data and improve system performance.

**General:**
- Use Git for version control; conduct code reviews, ensure clean commit history, and maintain robust documentation.
- Collaborate with cross-functional teams to deliver features that align with business goals.
- Stay updated with industry trends, particularly in AI and emerging frameworks, and apply them to enhance our platform.
- Mentor junior engineers and contribute to continuous improvement in team processes and code quality.

**Qualifications**

**Required:**
- 3-4 years of full-stack development experience in a startup or scaling environment.
- Proficiency in frontend technologies: HTML, CSS (SASS), React, Vanilla JS.
- Strong backend experience with Python (FastAPI).
- Solid understanding of relational databases (PostgreSQL) and performance optimization.
- Experience with sockets for real-time applications.
- Familiarity with integrating AI or ML-powered features.
- Strong problem-solving abilities, attention to detail, and effective communication skills.

**Ideal:**
- Exposure to Webpack, Handlebars, and GCP services.
- Experience in building dashboards and analytics tools.
- Knowledge of ClickHouse and MongoDB for specialized workloads.
- Prior experience with video calls, AI chatbots, or widgets.
- Understanding of cloud environments, deployment strategies, and CI/CD pipelines.
- Ability to leverage AI/ML frameworks and tools (e.g., TensorFlow, PyTorch) to improve product features.

**Preferred but Not Mandatory:**
- Advanced experience in AI-driven optimizations, such as personalized user experiences, predictive analytics, and automated decision-making.
- Familiarity with analytics and monitoring tools for performance tracking.
- Prior exposure to a high-growth startup environment, meeting rapid iteration and scaling demands.

We follow a structured process: upon shortlisting, you will receive a project assignment with a one-week deadline. Successful candidates will proceed to two rounds of interviews, while those not shortlisted will receive feedback to ensure transparency and respect for your time.

Join us to work with cutting-edge technologies, including AI-driven solutions, in a rapidly scaling environment. Be part of a collaborative and inclusive team that values impact, ownership, and growth. Enjoy continuous learning and professional development opportunities, along with competitive compensation and benefits aligned with your experience and contributions. If you are passionate about technology, enjoy solving complex problems, and are eager to shape the next phase of a scaling product, apply now and join our journey!
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
Enlog is a leading provider of electricity management solutions for energy-conscious smart buildings. Our mission is to create a greener earth by promoting the smart and efficient use of electricity. Since our inception in 2017, we have focused on transforming homes and businesses into energy-conscious smart spaces, enhancing quality of life and comfort through our innovative energy management technologies. Join our dynamic team and be part of our journey to make energy-conscious living a reality.

As a member of our team, you will be responsible for:
- Possessing in-depth knowledge of object-relational mapping, server-side logic, and REST APIs.
- Demonstrating expertise in databases such as MySQL, PostgreSQL, and other relational and non-relational databases.
- Utilizing AWS Lightsail, Celery, Celery Beat, Redis, and Docker.
- Conducting testing of REST APIs using Postman.
- Managing code and projects on Git to ensure synchronization with other team members and managers.
- Coordinating with front-end developers to ensure seamless integration.

Why You Should Work Here: At Enlog, we value innovation, collaboration, and continuous improvement. By joining our team, you will have the opportunity to work on cutting-edge technologies and contribute to projects that have a significant impact on energy management and sustainability. Our dynamic and supportive work environment fosters personal and professional growth. We are committed to maintaining a diverse and inclusive workplace that enables everyone to thrive. Joining Enlog means becoming part of a team dedicated to promoting smart and efficient energy use, and making a positive impact on the environment and society.

Technologies We Use:
- AWS EC2
- AWS Lightsail
- Docker
- Kafka
- PostgreSQL
- Golang
- Django REST Framework
- MQTT protocols
- Kubernetes
- PgBouncer
- ClickHouse
- ScyllaDB
- Dragonfly

About Enlog: Founded in 2017, Enlog provides electricity management solutions for energy-conscious smart buildings. Our innovations in energy management enhance quality of life and comfort while promoting responsible electricity use. Our flagship product, Smi-Fi, is an energy-assistant IoT device that encourages energy conservation in residential and commercial spaces. With over 3,000 active installations and a growing customer base, we are at the forefront of energy management solutions in India.
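Given the MQTT-based stack listed above, here is a minimal sketch of subscribing to meter readings with paho-mqtt. The 1.x callback API is assumed, and the broker address and topic layout are hypothetical.

```python
# Minimal sketch: subscribe to energy-meter readings over MQTT
# (paho-mqtt 1.x callback API assumed; broker and topic are hypothetical).
import json

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    # '+' wildcards match any building id and any meter id.
    client.subscribe("buildings/+/meters/+/reading")

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # A real handler would persist to PostgreSQL/ClickHouse and raise
    # alerts when consumption crosses a threshold.
    print(msg.topic, reading)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.example.com", 1883, keepalive=60)
client.loop_forever()
```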
Posted 3 weeks ago
3.0 - 7.0 years
18 - 34 Lacs
Hyderabad
Work from Office
Hiring Data Engineers (4+ yrs) in Bengaluru/Hyderabad. Design ETL pipelines, build data lakes/warehouses, and ensure data quality. Skills: Python, SQL, Airflow, Kafka, BigQuery, Spark, AWS/GCP. Work with analysts, PMs, and ML teams. Benefits: health insurance, provident fund.
Posted 3 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Tiruchirapalli
Work from Office
Role Overview: We are seeking a Technical Product Manager to lead and manage the entire software product development lifecycle from concept to delivery. This role is hands-on and requires a strong engineering background in backend development and modern data technologies, with demonstrated experience in building and delivering complex software products. You will work closely with internal stakeholders, developers, QA, and DevOps teams to ensure each product is planned, developed, tested, and released with precision.

Key Responsibilities:

Project & Product Lifecycle Management
- Lead and manage the full product development lifecycle: planning, requirement gathering, validation, estimation, development, testing, and release.
- Collaborate with stakeholders to define product scope, technical feasibility, and delivery timelines.
- Conduct technical validation of requirements, helping guide architecture and technology decisions.
- Own project budgeting, resource allocation, and delivery tracking.
- Establish and manage sprint plans and task assignments, and ensure timely execution across development teams.

Engineering Oversight & Technical Leadership
- Provide technical leadership to the software development team in: Node.js, Express.js, React.js, MongoDB, Redis; time-series databases (e.g., OpenSearch, ClickHouse, or Cassandra; experience with any one is required); RESTful API development and WebSocket-based communication.
- Maintain a basic understanding of AI/ML concepts and how they integrate into modern applications.
- Assist in code reviews, technical issue resolution, and performance optimization.
- Ensure architectural alignment with business and scalability goals.

Process Governance & Delivery Assurance
- Manage task tracking, sprint velocity, QA cycles, and release planning.
- Implement robust bug tracking, test coverage reviews, and UAT readiness.
- Oversee the successful delivery of software builds, ensuring they meet quality and timeline expectations.
- Prepare and maintain project documentation and release notes.

Stakeholder Communication & Reporting
- Serve as the single point of contact between engineering and leadership for project progress, blockers, and releases.
- Provide weekly progress reports, metrics, and risk escalations.
- Facilitate cross-functional communication with QA, DevOps, design, and support teams.

Required Qualifications (Must-Have Skills)
- 6-10 years of experience in software product development, including 3+ years in a product/project management or technical lead role.
- Strong hands-on experience in Node.js, Express.js, React.js, and MongoDB.
- Experience with at least one time-series database (OpenSearch, ClickHouse, or Cassandra).
- Solid understanding of RESTful APIs, WebSocket protocols, and microservice development.
- Familiarity with core AI/ML concepts and integration patterns in modern applications.
- Proven success in delivering at least two software products end-to-end to enterprise or mid-market clients.
- Strong understanding of Agile/Scrum, sprint planning, backlog grooming, and release cycles.

Preferred Skills
- Experience in building SaaS-based platforms, monitoring tools, or infrastructure management products.
- Familiarity with cloud hosting environments (AWS, GCP, Azure) and DevOps practices (CI/CD pipelines, Docker/K8s).
- Exposure to observability stacks, log monitoring, or AI/MLOps products.
- Working knowledge of QA automation and performance testing tools.

Key Attributes
- Strong ownership and execution mindset.
- Ability to balance technical depth with product vision.
- Excellent communication, task management, and stakeholder coordination skills.
- Comfortable working in fast-paced, evolving product environments.
Posted 3 weeks ago
6.0 - 11.0 years
14 - 24 Lacs
Bengaluru
Work from Office
Automation NoSQL Data Engineer

This role has been designed as Onsite with an expectation that you will primarily work from an HPE partner/customer office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description: HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers' outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today's fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what's next for you.

What you will do:
- Think through complex data engineering problems in a fast-paced environment and drive solutions to reality.
- Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools.
- Provide engineering-level support for data tools and systems deployed in customer environments.
- Respond quickly and professionally to customer emails/requests for assistance.

What you need to bring:
- Bachelor's degree in Computer Science, Information Systems, or equivalent.
- 7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems.
- Strong experience in automated deployment, troubleshooting, and fine-tuning of technologies such as Apache Cassandra, ClickHouse, MongoDB, Apache Spark, Apache Flink, and Apache Airflow.

Technical Skills:
- Strong knowledge of NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including their installation, configuration, and performance tuning in production environments.
- Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow.
- Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs.
- Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution.
- Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs (see the sketch below).
- Strong experience with container orchestration platforms (like Kubernetes) to deploy and manage Spark/Flink operators and data pipelines.
- Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows and handle retries, task dependencies, and scheduling.
- Solid experience in troubleshooting and optimizing performance in distributed data systems.
- Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies.
- Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar.
- Strong knowledge of scripting languages such as Python, Bash, or Go for automation, Platform-as-a-Service provisioning, and workflow orchestration.

Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity, Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building {+ 5 more}

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE.

#india #operations
Job: Services
Job Level: TCP_03

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 4 weeks ago
9.0 - 11.0 years
12 - 17 Lacs
Thiruvananthapuram
Work from Office
Educational Qualifications: Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, Bachelor of Computer Applications, Master of Technology, Master of Engineering, Master of Science, or Master of Computer Applications.

Service Line: Engineering Services

Responsibilities:
- Collect, clean, and organize large datasets from various sources.
- Perform data analysis using statistical methods, machine learning techniques, and data visualization tools.
- Identify patterns, trends, and anomalies within datasets to uncover insights (see the sketch after this list).
- Develop and maintain data models to represent the organization's business operations.
- Create interactive dashboards and reports to communicate data findings to stakeholders.
- Document data analysis procedures and findings to ensure knowledge transfer.

Additional Responsibilities:
- High analytical skills, with a high degree of initiative and flexibility.
- Strong customer orientation and quality awareness.
- Excellent verbal and written communication skills.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Knowledge of two or three industry domains.
- Understanding of the financial processes for various types of projects and the various pricing models available.
- Client interfacing skills.
- Knowledge of SDLC and agile methodologies.
- Project and team management.

Technical and Professional Requirements:
- 5+ years of experience as a Data Analyst or in a similar role.
- Proven track record of collecting, cleaning, analyzing, and interpreting large datasets.
- Expertise in pipeline design and validation.
- Expertise in statistical methods, machine learning techniques, and data mining techniques.
- Proficiency in SQL, Python, PySpark, Looker, Prometheus, Carbon, ClickHouse, Kafka, HDFS, and the ELK stack (Elasticsearch, Logstash, and Kibana).
- Experience with data visualization tools such as Grafana and Looker.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills to extract meaningful insights from data.
- Strong business acumen to understand the implications of data findings.

Preferred Skills:
- Technology-Analytics - Packages-Python - Big Data
- Technology-Reporting Analytics & Visualization-Pentaho Reporting
- Technology-Cloud Platform-Google Big Data
- Technology-Cloud Platform-GCP Container services-Google Container Registry(GCR)

Generic Skills:
- Technology-Machine Learning-Python
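As an illustration of the anomaly-spotting work listed above, here is a small sketch assuming the clickhouse-driver Python package; the events table, its columns, and the 3-sigma threshold are hypothetical choices, not requirements from the posting.

```python
# Sketch of a trend/anomaly check against ClickHouse, assuming the
# clickhouse-driver package; the `events` table and columns are
# hypothetical placeholders.
from clickhouse_driver import Client

client = Client(host="localhost")

# Daily event counts for the last 30 days, a typical input for a
# Grafana or Looker dashboard panel.
rows = client.execute(
    """
    SELECT toDate(event_time) AS day, count() AS events
    FROM events
    WHERE event_time >= now() - INTERVAL 30 DAY
    GROUP BY day
    ORDER BY day
    """
)

# Flag days deviating more than 3 sigma from the 30-day mean.
if rows:
    counts = [c for _, c in rows]
    mean = sum(counts) / len(counts)
    std = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
    anomalies = [(d, c) for d, c in rows if abs(c - mean) > 3 * std]
    print(anomalies)
```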
Posted 1 month ago
3.0 - 4.0 years
2 - 12 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities:
- Design and develop scalable data pipelines to migrate user knowledge objects from Splunk to ClickHouse and Grafana (a minimal pipeline sketch follows after this list).
- Implement data ingestion, transformation, and validation processes to ensure data integrity and performance.
- Collaborate with cross-functional teams to automate and optimize data migration workflows.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Work closely with observability engineers and analysts to understand data requirements and deliver solutions.
- Contribute to the continuous improvement of the observability stack and migration automation tools.

Required Skills and Qualifications:
- Proven experience as a Big Data Developer or Engineer working with large-scale data platforms.
- Strong expertise with ClickHouse or other columnar databases, including query optimization and schema design.
- Hands-on experience with Splunk data structures, dashboards, and reports.
- Proficiency in data pipeline development using technologies such as Apache Spark, Kafka, or similar frameworks.
- Strong programming skills in Python, Java, or Scala.
- Experience with data migration automation and scripting.
- Familiarity with Grafana for data visualization and monitoring.
- Understanding of observability concepts and monitoring systems.

Would be a plus:
- Experience with Bosun or other alerting platforms.
- Knowledge of cloud-based big data services and infrastructure as code.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience working in agile POD-based teams.
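To make the ingest-transform-validate step concrete, here is a minimal sketch of one migration stage, assuming the clickhouse-driver package; the export file, table name, columns, and the assumption that Splunk's _time field is an epoch timestamp are all hypothetical.

```python
# Sketch of one migration step: load a Splunk search export (JSON lines)
# and bulk-insert it into ClickHouse, then validate the row count.
# Assumes the clickhouse-driver package; the file path, table, and
# columns are hypothetical, and real exports may format _time differently.
import json
from datetime import datetime

from clickhouse_driver import Client

client = Client(host="localhost")

rows = []
with open("splunk_export.jsonl") as fh:  # hypothetical Splunk export
    for line in fh:
        event = json.loads(line)
        rows.append((
            datetime.fromtimestamp(float(event["_time"])),  # assumes epoch _time
            event["host"],
            event["_raw"],
        ))

# Bulk insert; clickhouse-driver accepts a list of tuples after VALUES.
client.execute(
    "INSERT INTO splunk_events (event_time, host, raw) VALUES", rows
)

# Basic integrity check: row count in ClickHouse must match the export.
(count,), = client.execute("SELECT count() FROM splunk_events")
assert count == len(rows), f"expected {len(rows)} rows, found {count}"
```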
Posted 1 month ago
2.0 - 4.0 years
3 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Requirements:
- Proven experience as a Data Analyst, preferably with exposure to observability or monitoring data.
- Strong proficiency in SQL, especially with ClickHouse or similar columnar databases.
- Experience with data visualization tools such as Grafana or equivalent.
- Familiarity with Splunk data structures, dashboards, and reports is a plus.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to work collaboratively in a POD-based agile team environment.
- Good communication skills to present data insights effectively.

Key Responsibilities:
- Analyze and validate data during the migration of user knowledge objects from Splunk to ClickHouse and Grafana (a reconciliation sketch follows after this list).
- Collaborate with engineering teams to ensure data integrity and consistency post-migration.
- Create and maintain comprehensive reports and dashboards to monitor migration progress and outcomes.
- Identify discrepancies or data quality issues and work with technical teams to resolve them.
- Support automation efforts by providing data insights and requirements.
- Translate complex data findings into clear, actionable recommendations for stakeholders.

Team and Work Environment:
- Current team size: [Insert number]
- Team locations: [Insert locations]
- The team is growing to support this critical migration, offering opportunities for professional growth and learning.

Qualifications:
- Data Analyst with experience in Splunk, ClickHouse, and Grafana.

Nice to Have:
- Experience with alerting systems like Bosun.
- Knowledge of data migration processes and automation tools.
- Basic scripting skills (Python, Bash) for data manipulation.
- Understanding of observability concepts and monitoring frameworks.
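One common way to approach the validation responsibility above is a per-day count reconciliation between the legacy Splunk figures and the migrated ClickHouse table; a minimal sketch, assuming the clickhouse-driver package and hypothetical file, table, and column names:

```python
# Sketch of a post-migration reconciliation check: compare per-day event
# counts exported from Splunk (as a day,count CSV) against the migrated
# ClickHouse table. All names here are hypothetical placeholders.
import csv

from clickhouse_driver import Client

client = Client(host="localhost")

# Per-day counts from the old Splunk dashboard, exported beforehand.
splunk_counts = {}
with open("splunk_daily_counts.csv") as fh:  # hypothetical two-column export
    for day, count in csv.reader(fh):
        splunk_counts[day] = int(count)

ch_counts = dict(
    client.execute(
        "SELECT toString(toDate(event_time)) AS day, count() "
        "FROM splunk_events GROUP BY day"
    )
)

# Report any day where the migrated counts diverge from the source.
for day, expected in sorted(splunk_counts.items()):
    actual = ch_counts.get(day, 0)
    if actual != expected:
        print(f"{day}: Splunk={expected} ClickHouse={actual}")
```

Discrepancies surfaced this way feed directly into the migration-progress reports and dashboards the role is responsible for.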
Posted 1 month ago