5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As an Infrastructure Technical Architect at Salesforce Professional Services, you will play a crucial role in enabling customers to leverage MuleSoft platforms while guiding and mentoring a dynamic team. Your expertise and leadership will help establish you as a subject-matter expert in a company dedicated to innovation.

You will bring experience with container technology such as Docker and Kubernetes, as well as proficiency in configuring IaaS services on major cloud providers like AWS, Azure, or GCP. Strong infrastructure-automation skills, including familiarity with tools like Terraform and AWS CloudFormation, will be key to success in this role, and your knowledge of networking, Linux, systems programming, distributed systems, databases, and cloud computing will be valuable assets. You will work with high-level compiled languages such as Java or C++, along with dynamic languages like Ruby or Python, and your experience in production-level environments will be essential in providing innovative solutions to complex challenges.

Preferred qualifications for this role include certifications in Cloud Architecture or Solution Architecture (AWS, Azure, GCP), as well as expertise in Kubernetes and MuleSoft platforms. Experience with DevSecOps, Gravity/Gravitational, Red Hat OpenShift, and operators will be advantageous. A track record of architecting and implementing highly available, scalable, and secure infrastructure will set you apart as a top candidate, as will effective troubleshooting and hands-on experience in performance testing and tuning. Strong communication skills and customer-facing experience will be essential in managing expectations and fostering positive relationships with clients.
If you are passionate about driving innovation and making a positive impact through technology, this role offers a unique opportunity to grow your career and contribute to transformative projects. Join us at Salesforce, where we empower you to be a Trailblazer and shape the future of business.
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As a Software Engineer at Picarro, you will be an integral part of our team in Bangalore, working full-time to contribute to the development of next-generation integrated solutions. These solutions incorporate advanced analytical instrumentation, including cutting-edge laser-based gas sensors, tailored to various industries. By leveraging our technology, you will empower end users with reliable and actionable data, enabling them to make informed operational decisions based on the information we provide. Your role will involve conceptualizing, designing, developing, documenting, and maintaining software solutions. You will be responsible for ensuring the delivery of high-quality and sustainable software, actively engaging in all phases of the development lifecycle from inception to deployment. We are looking for individuals who are adept at creating commercial-grade applications that adhere to established coding standards, design patterns, and technical specifications. Moreover, we foster a culture that values innovation, providing you with opportunities to collaborate with industry experts and explore creative solutions. 
Minimum Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a related field
- Enthusiastic about crafting high-quality software within a collaborative team setting
- Over 4 years of experience with web standards such as HTML, CSS, and JavaScript
- At least 2 years of hands-on experience with NodeJS and related technologies
- Proficiency in programming languages like TypeScript, JavaScript, Python, and C#
- Familiarity with SQL databases like SQL Server, PostgreSQL, SQLite, and InfluxDB
- Extensive experience in API development and integration
- Ability to translate designs and wireframes into robust code
- Proficient in software engineering tools and practices, including test-driven development

Preferred Qualifications:
- Understanding of UX and UI principles with a focus on best practices
- Experience in designing and developing applications on public clouds like AWS or Azure
- Familiarity with Docker-based containers and Kubernetes-based orchestration systems

About Picarro: Picarro, Inc. is a global leader in the production of greenhouse gas and optical stable isotope instruments, utilized across diverse scientific and industrial domains. Our instruments play a crucial role in applications ranging from atmospheric science and air quality monitoring to food safety and ecology. Headquartered in Santa Clara, California, our products are manufactured and exported worldwide, underpinned by patented cavity ring-down spectroscopy (CRDS) technology. Picarro's solutions are distinguished by their precision, user-friendliness, portability, and reliability, setting new standards in the industry.
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Software Engineer at UBS, you will have the opportunity to design and build next-generation developer platforms on the cloud using a variety of technologies. Your role will involve iteratively refining user requirements and writing code to deliver sustainable solutions. You will be part of the DevCloud team, providing developers with the necessary tooling and compute resources to engineer solutions effectively.

Key Responsibilities:
- Build solutions for developers to ensure they have the right tooling and compute resources at their disposal.
- Design, write, and deliver sustainable solutions using modern programming languages such as TypeScript and Go.
- Provide technology solutions and automation to solve business application problems and enhance our digital leadership in financial services.
- Write and create applications and platform delivery requirements for the entire IT organization.
- Conduct code reviews, test your code as needed, and participate in application architecture, design, and other phases of the SDLC.
- Implement proper operational controls and procedures to facilitate the transition from testing to production.

Your Expertise:
- Strong programming experience in Golang and TypeScript.
- Proficiency in front-end technologies like React, API building, and server-side work.
- Experience with Linux containers, Kubernetes, TCP/IP, and networking concepts.
- Knowledge of Azure, AWS, and/or Google Cloud.
- Understanding of microservice architecture and experience in building RESTful services.
- Bias towards automation and hands-on experience with Terraform.
- Familiarity with metrics, alerting, and modern monitoring tools such as InfluxDB, Prometheus, Datadog, Grafana, etc.
- Knowledge of Continuous Integration and Continuous Deployment, with experience in building pipelines in GitLab or ADO.

About UBS: UBS is the world's largest and the only truly global wealth manager, operating through four business divisions in over 50 countries.
We offer flexible working arrangements and embrace a purpose-led culture that fosters collaboration and agile ways of working to meet business needs. Join #teamUBS to make a meaningful impact and grow professionally within a diverse and inclusive environment. Please note that as part of the application process, you may be required to complete one or more assessments to showcase your skills and expertise.
Posted 4 days ago
2.0 - 12.0 years
0 Lacs
Punjab
On-site
The SecOps Engineer Lead position requires 5 to 12 years of experience in the field. The candidate should have strong Linux and patching skills, along with an understanding of the Change Management process. Familiarity with tools like Telegraf, InfluxDB, Chronograf, Kapacitor, Grafana, Indeni, and Mandiant would be beneficial, and the ability to work different shifts is necessary. Knowledge of MITRE and cyber-security threats is a plus, as is at least 2 years of experience in Linux patching.

Responsibilities include proactively planning and remediating vulnerabilities and technical security requirements. The candidate is expected to communicate and report progress on patching activities to stakeholders, and to monitor and track the progress of other team members in different engineering towers. Additionally, the SecOps Engineer Lead will help coordinate team members through onboarding and offboarding, build relationships with relevant stakeholders, monitor security controls against various threats, and discuss weaknesses with the relevant teams. The candidate should also be able to carry out other technical responsibilities and provide consultations to relevant stakeholders.
Posted 1 week ago
2.0 - 5.0 years
7 - 14 Lacs
Mumbai
Work from Office
Job Title: Software Developer, Trading Systems / FinTech Infrastructure
Company: Qode Advisors LLP
Location: Mumbai (On-site)
Experience: 2–5 years
CTC: Competitive + Performance Bonuses
Industry: Financial Services / Quant Trading / FinTech

About Us: We are a proprietary trading and investment management firm building the next generation of quant infrastructure. Our work spans high-frequency execution, systematic strategy research, risk systems, and automation at scale. We believe in deep work, strong ownership, and engineering that is both elegant and reliable. If you're excited to build real systems that move real money, this is your playground.

Role Overview: We're looking for a Software Developer to join our core engineering team. You'll work closely with traders, quants, and risk teams to design, develop, and scale systems that power trading, research, and portfolio management. This is a high-impact role with strong learning opportunities and real ownership.

Responsibilities:
- Build and maintain robust, low-latency systems for trading, execution, data ingestion, or portfolio reporting
- Work with quants and researchers to productionize research code
- Design modular, scalable APIs and backend systems
- Optimize existing code for performance and reliability
- Handle large datasets and time-series data pipelines
- Own features end-to-end, from architecture to deployment
- Follow clean coding practices; maintain code hygiene and documentation

Tech Stack We Use:
- Languages: Python, C++, Java (you should be proficient in at least one)
- Tools: Git, Docker, REST APIs, WebSockets
- Databases: PostgreSQL, Redis, InfluxDB, TimescaleDB
- Frameworks: Flask / FastAPI / Vue.js (for full-stack roles)
- Others: Pandas, NumPy, Linux, Shell scripting
- Bonus: Knowledge of market data systems (e.g. Kite API, LSEG, FIX), or experience in trading/quantitative finance

What We're Looking For:
- 2–5 years of software development experience
- Strong in data structures, design patterns, and clean coding
- Passionate about building systems that work under real-time pressure
- Ability to collaborate across teams and take ownership
- Willingness to learn financial concepts if not already known
- (Optional but preferred): Experience in a trading or financial services firm

What We Offer:
- A fast-moving, intellectually intense environment
- Real ownership and exposure to live systems from Day 1
- Competitive compensation with performance-linked bonuses
- A founder-led team that values autonomy and deep thinking
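The time-series pipeline work listed above can be pictured with a small sketch: bucketing raw ticks into one-minute OHLC bars, a staple transformation in trading data pipelines. The tick values and bar shape below are purely illustrative, not Qode's actual schema.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical tick stream: (timestamp, price) pairs.
ticks = [
    (datetime(2024, 1, 1, 9, 15, 5), 101.0),
    (datetime(2024, 1, 1, 9, 15, 40), 102.5),
    (datetime(2024, 1, 1, 9, 16, 10), 101.8),
    (datetime(2024, 1, 1, 9, 16, 55), 103.2),
]

def to_minute_bars(ticks):
    """Bucket ticks into 1-minute OHLC (open/high/low/close) bars."""
    buckets = defaultdict(list)
    for ts, price in ticks:
        # Floor the timestamp to the start of its minute.
        buckets[ts.replace(second=0, microsecond=0)].append(price)
    bars = {}
    for minute, prices in sorted(buckets.items()):
        bars[minute] = {
            "open": prices[0],
            "high": max(prices),
            "low": min(prices),
            "close": prices[-1],
        }
    return bars

bars = to_minute_bars(ticks)
for minute, bar in bars.items():
    print(minute.strftime("%H:%M"), bar)
```

In production such aggregation would typically run inside the time-series store itself (e.g. continuous aggregates), but the bucketing logic is the same.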
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Educational Requirements: Bachelor of Engineering
Service Line: Infosys Quality Engineering

Responsibilities: As our Senior Quality Assurance Engineer, you embrace the following responsibilities:
- Take ownership and responsibility for the design and development of all aspects of testing.
- Work on acceptance criteria and test scenarios with the Product Owner and development team.
- Design, execute, and maintain test scenarios and automation capabilities for all test levels and types (e.g., automated, regression, exploratory, etc.).
- Create and optimize test frameworks and integrate them into deployment pipelines.
- Participate in the code review process for both production and test code to ensure all critical cases are covered.
- Monitor test runs, application errors, and performance.
- Keep information flowing and the team informed, and be a stakeholder in releases and defect tracking.
- Promote and coach the team towards a quality-focused mindset.
- Influence and lead the team towards continuous improvement and best testing practices.
- Be the reference for the QA Center of Practice, promoting its practices, influencing its strategy, and bringing your team experience into its plan.

Additional Responsibilities: These are some of the technologies/frameworks/practices we use:
- NodeJS with TypeScript
- React and NextJS
- Contentful CMS
- Optimizely experimentation platform
- Micro-services, event streams, and file exchange
- CI/CD with Jenkins pipelines
- AWS and Terraform
- InfluxDB, Grafana, Sensu, ELK stack
- Infrastructure as code, one-click deployment
- Docker, Kubernetes
- Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or the AWS CLI
- Git, Scrum, Pair Programming, Peer Reviewing

Technical and Professional Requirements: As a Senior Quality Assurance Engineer, you must be able to provide among these:
- Ability to work in an autonomous, self-responsible, and self-organised way.
- 6+ years of experience in software testing, manual and automated.
- Strong experience working with modern test automation frameworks and tools (Cypress, Playwright, Jest, React testing libraries).
- Strong experience in different testing practices (from unit to load to endurance to cross-platform), specifically integrated within CI/CD.
- Experience in continuous testing practices in production by leveraging bots and virtual users.
- Experience working with CI/CD pipelines and monitoring tools (e.g., Jenkins, TeamCity, Kibana, Grafana, etc.).
- Knowledge of API testing (Postman), the REST protocol, microservice architecture concepts, and AWS.
- Able to communicate effectively in English.
- Comfortable developing test automation frameworks from scratch and maintaining existing frameworks.
- Knowledge of software testing theory.

Preferred Skills: Technology-Automated Testing-Automated Testing - ALL
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Nashik, Maharashtra
On-site
As an End-to-End Architecture Designer, you will be responsible for architecting distributed, event-driven, and microservices-based systems similar to public cloud platforms. Your expertise will be crucial in leveraging containers, Kubernetes, and hybrid virtualization environments (Hypervisor, OpenStack) to design and implement robust architectures. Your role will also involve evaluating and recommending the appropriate mix of tools, frameworks, and design patterns for each project, balancing performance, cost, scalability, and maintainability while making these strategic technology decisions.

In alignment with Infrastructure & DevOps practices, you will define infrastructure-as-code (IaC) strategies and integrate DevOps, AIOps, and MLOps practices into system delivery pipelines. This includes working with tools like GitLab, Jira, and cloud-native CI/CD workflows to streamline the development process.

Your expertise in Data & Integration Architecture will be utilized to design secure, high-performance system and database architectures. Technologies such as PgSQL, MongoDB, Redis, InfluxDB, Kafka, and ESB patterns will be employed to support real-time, analytical, and transactional workloads effectively.

Scalability & Resilience are key aspects of your role: you will leverage knowledge of distributed computing, SDN/SDS, and container orchestration to build robust systems capable of handling high throughput with minimal latency and effective failure-recovery mechanisms. You will also develop and maintain comprehensive UML models, architectural diagrams, and technical documentation; these artifacts are essential for communicating design intent across technical and non-technical stakeholders.

Mentorship & Governance will play a significant role in your position, where you will provide architectural oversight, code-level guidance, and mentorship to development teams. Ensuring adherence to architectural standards, KPIs, and KRAs through reviews and active collaboration will be crucial for project success.

Continuous Innovation is a core aspect of your role: you will stay updated with emerging technologies and best practices, and propose architectural improvements that leverage advancements in AI/ML, cloud-native development, and intelligent automation to drive continuous innovation within the organization.

Your expertise in cloud platform engineering (AWS, Azure, or GCP), along with a strong understanding of architectural patterns and design principles, will be instrumental in fulfilling the requirements of this role. Proficiency in architecture diagramming tools, documentation, data structures, algorithms, and programming languages like Python and Go will be highly beneficial. Excellent leadership and communication skills are essential for effective collaboration with cross-functional teams, and a strategic mindset with the ability to assess the long-term implications of architectural decisions will be critical in driving the organization towards its goals.
Posted 1 week ago
5.0 - 8.0 years
5 - 8 Lacs
Gurugram
Work from Office
We are looking for a talented and experienced Distributed Database (DDB) Programmer, Design Architect, or Specialist skilled in programming, designing, and managing large-scale distributed database systems that can scale out by adding servers as needed. This role requires hands-on experience in optimizing database structures for efficient processing of large datasets and in advanced query-optimization techniques.

Main Skills Required:
- Distributed Database (DDB): Expert knowledge in managing and designing distributed databases.
- SQL: Strong proficiency in SQL for database management and query optimization.
- Database Technologies: Knowledge of KDB+, ClickHouse, TimescaleDB, InfluxDB, MS SQL Server, PostgreSQL with Citus, etc.
- Programming Languages: Knowledge of C#/C++.
- Professional Experience: Proven experience in designing and managing large-scale distributed databases; strong programming skills.

Key Responsibilities:
- Design and Implement Large-Scale Data: Architect and manage distributed database systems to handle large-scale data. Develop strategies for data distribution and replication to ensure high availability and reliability.
- Develop and Optimize Complex Queries: Create and refine complex queries for efficient data retrieval and manipulation. Implement query-optimization techniques to enhance performance.
- Database Structure Optimization: Optimize database structures to improve processing speed and storage efficiency. Continuously monitor and tune database performance.
- Manage Team: Lead and manage a team of database administrators and developers. Provide guidance, mentorship, and support to team members to ensure successful project execution.
- C/C++ Applications: Build high-performance data-processing applications in C/C++. Develop efficient algorithms and data structures to handle large-scale data-processing tasks.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand their data requirements and deliver appropriate data solutions to meet business needs.
- Troubleshooting and Issue Resolution: Identify and resolve issues related to large-scale data processing. Ensure the integrity and security of the database systems.
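As a small illustration of the query-optimization work described above, the sketch below uses SQLite (standard library only, standing in for a full distributed database) to show how adding a composite index changes a query plan from a full scan to an index search. Table and column names are hypothetical.

```python
import sqlite3

# Toy illustration of index-driven query optimization; the readings
# table and its columns are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id INTEGER, ts INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(i % 100, i, float(i)) for i in range(10_000)],
)

def plan(sql):
    """Return the 'detail' column of EXPLAIN QUERY PLAN output."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM readings WHERE sensor_id = 7 AND ts > 5000"

before = plan(query)  # without an index: full table scan
conn.execute("CREATE INDEX idx_sensor_ts ON readings (sensor_id, ts)")
after = plan(query)   # with the composite index: index search

print("before:", before)
print("after: ", after)
```

The same principle, matching indexes to the query's filter columns, carries over to the distributed engines named in the posting, where avoiding scans also avoids cross-shard data movement.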
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Tech Lead Quantitative Trading position in Chennai, India requires a candidate with over 7 years of experience to undertake various key responsibilities. You will be responsible for designing and optimizing scalable backend systems using Python and C++, overseeing the deployment of real-time trading algorithms, managing cloud infrastructure, CI/CD pipelines, and API integrations, as well as leading and mentoring a high-performing engineering team. Additionally, you will play a crucial role in laying the foundation for AI-driven trading innovations. Your role demands strong leadership, software architecture expertise, and hands-on problem-solving skills to ensure the seamless execution and scalability of trading systems. As part of your responsibilities, you will lead the end-to-end development of the trading platform, ensuring scalability, security, and high availability. You will also architect and optimize backend infrastructure for real-time algorithmic trading and large-scale data processing, design and implement deployment pipelines and CI/CD workflows for efficient code integration, and introduce best practices for performance tuning, system reliability, and security. In the realm of backend and data engineering, you will own the Python-based backend, work on low-latency system design to support algorithmic trading strategies, optimize storage solutions for handling large-scale financial data, and implement API-driven architectures leveraging WebSocket API and RESTful API knowledge to integrate with brokers, third-party data sources, and trading systems. 
Furthermore, you will:
- Monitor and troubleshoot live trading systems to minimize downtime, handle broker communication during execution issues and API failures, and set up automated monitoring, logging, and alerting for production stability.
- Lead, mentor, and scale a distributed engineering team; define tasks, set deadlines, and manage workflow using Zoho Projects; align team objectives with OKRs; drive execution; and foster a strong engineering culture that ensures high performance and technical excellence.
- Manage cloud infrastructure to ensure high availability, oversee GitLab repositories, enforce best practices for version control, and implement robust CI/CD pipelines to accelerate deployment cycles.

Preferred qualifications include 7+ years of hands-on experience in backend development with expertise in Python, proven experience leading engineering teams and delivering complex projects, strong knowledge of distributed systems, real-time data processing, and cloud computing, experience with DevOps, CI/CD, and containerized environments, familiarity with GitLab, AWS, and Linux-based cloud infrastructure, and bonus knowledge of quantitative trading, financial markets, or algorithmic trading.

The ideal candidate for this position is a backend expert with a passion for building scalable, high-performance systems, enjoys leading teams, mentoring engineers, and fostering a strong engineering culture, can balance hands-on coding with high-level architecture and leadership, thrives in a fast-paced, data-driven environment, and loves solving complex technical challenges.
Posted 1 week ago
7.0 - 12.0 years
7 - 11 Lacs
Mumbai, Bengaluru
Work from Office
Location: PAN India, as per the company's designated LTIM locations
Shift Type: Rotational shifts, including night shifts and weekend availability
Experience: 7+ years

Job Summary: We are looking for a skilled and adaptable Site Reliability Engineer (SRE) / Observability Engineer to join our dynamic project team. The ideal candidate will play a critical role in ensuring system reliability, scalability, observability, and performance while collaborating closely with development and operations teams. This position requires strong technical expertise, problem-solving abilities, and a commitment to 24x7 operational excellence.

Key Responsibilities:

Site Reliability Engineering:
- Design, build, and maintain scalable and reliable infrastructure.
- Automate system provisioning and configuration using tools like Terraform, Ansible, Chef, or Puppet.
- Develop tools and scripts in Python, Go, Java, or Bash for automation and monitoring.
- Administer and optimize Linux/Unix systems with a strong understanding of TCP/IP, DNS, load balancers, and firewalls.
- Implement and manage cloud infrastructure across AWS or Kubernetes.
- Maintain and enhance CI/CD pipelines using tools like Jenkins and ArgoCD.
- Monitor systems using Prometheus, Grafana, Nagios, or Datadog, and respond to incidents efficiently.
- Conduct postmortems and define SLAs/SLOs for system reliability and performance.
- Plan for capacity and performance using benchmarking tools, and implement autoscaling and failover systems.

Observability Engineering:
- Instrument services with relevant metrics, logs, and traces using OpenTelemetry, Prometheus, Jaeger, Zipkin, etc.
- Build and manage observability pipelines using Grafana, the ELK Stack, Splunk, Datadog, or Honeycomb.
- Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation platforms.
- Design actionable alerts and dashboards to improve system observability and reduce alert fatigue.
- Partner with developers to promote observability best practices and define key performance indicators (KPIs).

Required Skills & Qualifications:
- Proven experience as an SRE or Observability Engineer in complex production environments.
- Hands-on expertise in Linux/Unix systems and cloud infrastructure (AWS/Kubernetes).
- Strong programming and scripting skills in Python, Go, Bash, or Java.
- Deep understanding of monitoring, logging, and alerting systems.
- Experience with modern Infrastructure as Code and CI/CD practices.
- Ability to analyze and troubleshoot production issues in real time.
- Excellent communication skills to collaborate with cross-functional teams and stakeholders.
- Flexibility to work in rotational shifts, including night shifts and weekends, as required by project demands.
- A proactive mindset with a focus on continuous improvement and reliability.
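The SLA/SLO responsibility mentioned above can be made concrete with a minimal error-budget calculation; the 99.9% target and request counts below are illustrative, not from the posting.

```python
# Minimal error-budget arithmetic for an availability SLO.

def error_budget(slo: float, total_requests: int, failed_requests: int):
    """Return (allowed_failures, consumed_fraction) for the window."""
    allowed = total_requests * (1.0 - slo)
    consumed = failed_requests / allowed if allowed else float("inf")
    return allowed, consumed

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
# 250 observed failures consume a quarter of the budget.
allowed, consumed = error_budget(0.999, 1_000_000, 250)
print(f"budget: {allowed:.0f} failures, consumed: {consumed:.1%}")
```

Teams typically alert on the budget's burn rate rather than on raw failure counts, which is what ties the SLO definition to the alerting work described in the posting.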
Posted 1 week ago
4.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Remote
Job Title: Database Administrator (DBA), IoT Data & Performance Optimization
Location: Bengaluru / Remote
Job Type: Full-time / Contract
Experience Level: Mid-Level
Department: Data & Infrastructure

Job Summary: We are seeking an experienced and performance-driven Database Administrator (DBA) to join our team. The ideal candidate will be responsible for ensuring optimal database utilization and writing complex, high-performance queries on large-scale IoT data. You will play a key role in maintaining database health, optimizing performance, and supporting advanced analytics use cases.

Key Responsibilities:
- Database Performance Optimization: Monitor and tune database performance, resource usage, and query execution plans. Analyze and improve indexing strategies, partitioning, and caching mechanisms. Identify and address performance bottlenecks in real-time data pipelines.
- Complex Query Development: Design, write, and optimize complex SQL queries for high-volume IoT datasets. Collaborate with data engineers and analysts to support real-time and batch data processing. Develop procedures and scripts for automation, data transformations, and reporting.
- Database Management & Maintenance: Maintain the availability, integrity, and security of databases (e.g., BigQuery, PostgreSQL, MySQL, TimescaleDB, InfluxDB, etc.). Plan and implement backup, restore, and disaster recovery strategies. Support schema design and data modeling for large-scale time-series data.
- Collaboration & Documentation: Work closely with development, DevOps, and data teams to support business needs. Maintain clear documentation of database configurations, structures, and queries. Contribute to best practices and guidelines for database usage across the organization.

Required Skills & Qualifications:
- 3+ years of experience as a DBA or in a similar role, preferably with large-scale IoT or time-series data.
- Strong expertise in SQL and query optimization.
- Hands-on experience with performance tuning and monitoring tools.
- Familiarity with time-series databases (e.g., InfluxDB, TimescaleDB) is a strong plus.
- Experience with BigQuery and Python is a strong plus.
- Solid understanding of indexing, sharding, replication, and partitioning.
- Experience with database security and backup strategies.
- Familiarity with cloud database solutions (AWS RDS, Azure SQL, GCP) is a plus.
- Strong problem-solving skills and attention to detail.

Nice to Have:
- Experience working with streaming platforms (Kafka, MQTT, etc.).
- Exposure to big data ecosystems (e.g., Hadoop, Spark).
- Scripting skills (e.g., Python, Bash) for automation tasks.
- Knowledge of DevOps practices and infrastructure-as-code (e.g., Terraform).

Note: Interested professionals can share their resume at raghav.b@aciinfotech.com
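The time-series query work described in this posting often involves downsampling raw IoT readings into fixed buckets. The standard-library sketch below mimics what a SQL `time_bucket`-style aggregation does in engines like TimescaleDB; the data and bucket size are purely illustrative.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical IoT readings: one (timestamp, value) sample per minute.
readings = [
    (datetime(2024, 1, 1, 0, 0) + timedelta(minutes=m), 20.0 + m % 3)
    for m in range(15)
]

def downsample(readings, bucket_minutes=5):
    """Average readings into fixed time buckets."""
    buckets = {}
    for ts, value in readings:
        # Floor the timestamp to the start of its bucket.
        floored = ts - timedelta(
            minutes=ts.minute % bucket_minutes,
            seconds=ts.second,
            microseconds=ts.microsecond,
        )
        buckets.setdefault(floored, []).append(value)
    return {bucket: mean(values) for bucket, values in sorted(buckets.items())}

result = downsample(readings)
for bucket, avg in result.items():
    print(bucket.strftime("%H:%M"), round(avg, 2))
```

Pre-aggregating like this is one of the standard ways to keep dashboard queries fast as raw IoT data volumes grow.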
Posted 2 weeks ago
3.0 - 8.0 years
3 - 6 Lacs
Pimpri-Chinchwad
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website.

What You'll Do:
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python, and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline

You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM (Loki, Grafana, Tempo, Mimir) or similar stack, Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline
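The instrumentation work above can be pictured with a minimal standard-library sketch of the span concept that tracing systems such as OpenTelemetry formalize. This is not the real OpenTelemetry API; the field names and the in-memory exporter are invented to illustrate timed, correlated spans.

```python
import time
import uuid
from contextlib import contextmanager

# Finished spans collected here, standing in for a real span exporter.
SPANS = []

@contextmanager
def span(name, trace_id=None):
    """Record a named, timed span; share trace_id to correlate spans."""
    record = {
        "name": name,
        "trace_id": trace_id or uuid.uuid4().hex,
        "start": time.monotonic(),
    }
    try:
        yield record
    finally:
        record["duration_s"] = time.monotonic() - record["start"]
        SPANS.append(record)

# A child span inherits the parent's trace_id, so both phases of the
# request can later be stitched together in a trace viewer.
with span("handle_request") as parent:
    with span("db_query", trace_id=parent["trace_id"]):
        time.sleep(0.01)  # stand-in for real work

for s in SPANS:
    print(s["name"], s["trace_id"][:8], f"{s['duration_s'] * 1000:.1f}ms")
```

In the real stack, OpenTelemetry SDKs manage trace context propagation automatically and export spans to backends like Tempo or Jaeger rather than a Python list.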
Posted 2 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
Pune
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 7 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe.

What You'll Do:
- Configure and manage observability agents across AWS, Azure & GCP.
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack.
- Work with different language stacks such as Java, Ruby, Python, and Go.
- Instrument services using OpenTelemetry and integrate telemetry pipelines.
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs.
- Create dashboards, set up alerts, and track SLIs/SLOs.
- Enable RCA and incident response using observability data.
- Secure the observability pipeline.

You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering.
- Strong skills in reading and interpreting logs, metrics, and traces.
- Proficiency with the LGTM (Loki, Grafana, Tempo, Mimir) or similar stack, Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices.
- Clear documentation of observability processes, logging standards & instrumentation guidelines.
- Ability to proactively identify, debug, and resolve issues using observability data.
- Focus on maintaining data quality and integrity across the observability pipeline.
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte-scale data, helping teams enhance reliability, detect anomalies early, and drive operational excellence.
What You'll Do:
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline
You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar tools such as Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
chennai, tamil nadu
On-site
The Tech Lead Quantitative Trading role in Chennai, India demands a seasoned professional with over 7 years of experience to take charge of designing and optimizing scalable backend systems using Python and C++. Your primary responsibilities will include overseeing the deployment of real-time trading algorithms, managing cloud infrastructure, CI/CD pipelines, and API integrations, as well as leading and mentoring a high-performing engineering team. Furthermore, you will be instrumental in laying the groundwork for AI-driven trading innovations. As a technical leader, you will be responsible for driving the end-to-end development of the trading platform, ensuring scalability, security, and high availability. You will design and optimize backend infrastructure for real-time algorithmic trading and large-scale data processing, implement deployment pipelines and CI/CD workflows, and introduce best practices for performance tuning, system reliability, and security. In the realm of backend and data engineering, you will own the Python-based backend to ensure efficient real-time data processing, work on low-latency system design to support algorithmic trading strategies, optimize storage solutions for handling large-scale financial data, and implement API-driven architectures leveraging WebSocket API & RESTful API knowledge. During live trading and incident management, you will monitor and troubleshoot live trading systems to minimize downtime, handle broker communication during execution issues and API failures, and set up automated monitoring, logging, and alerting for production stability. Your role will also entail team and project management, where you will lead, mentor, and scale a distributed engineering team, define tasks, set deadlines, and manage workflow using Zoho Projects, align team objectives with OKRs, and foster a strong engineering culture to ensure high performance and technical excellence. 
Additionally, you will be responsible for DevOps and cloud deployment: managing cloud infrastructure, setting up monitoring, logging, and automated alerting for production stability, overseeing GitLab repositories, and implementing robust CI/CD pipelines to accelerate deployment cycles. Preferred qualifications for this position include 7+ years of hands-on experience in backend development with expertise in Python, proven experience leading engineering teams and delivering complex projects, strong knowledge of distributed systems, real-time data processing, and cloud computing, experience with DevOps, CI/CD, and containerized environments, and familiarity with GitLab, AWS, and Linux-based cloud infrastructure; knowledge of quantitative trading, financial markets, or algorithmic trading is a bonus. If you are a backend expert with a passion for building scalable, high-performance systems, enjoy leading teams and fostering a strong engineering culture, can balance hands-on coding with high-level architecture and leadership, thrive in a fast-paced, data-driven environment, and love solving complex technical challenges, then this role is for you. Join us and enjoy our perks!
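Given the posting's emphasis on low-latency trading systems and production monitoring, one common pattern is wrapping hot-path calls in a timing decorator and deriving percentiles from the samples. A minimal Python sketch (the `submit_order` function is a hypothetical stand-in, not the platform's actual broker API):

```python
import functools
import statistics
import time

def timed(samples):
    """Decorator that records each call's latency (in seconds) into `samples`."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                samples.append(time.perf_counter() - start)
        return inner
    return wrap

latencies = []

@timed(latencies)
def submit_order(symbol, qty):
    # Hypothetical stand-in for a real order-submission call.
    return {"symbol": symbol, "qty": qty, "status": "accepted"}

for _ in range(100):
    submit_order("INFY", 10)

# 99th-percentile latency over the collected samples.
p99_seconds = statistics.quantiles(latencies, n=100)[98]
```

In a live system the samples would feed an exporter or alerting pipeline rather than an in-process list, but the measurement pattern is the same.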
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
As an online travel booking platform, Agoda is committed to connecting travelers with a vast network of accommodations, flights, and more. With cutting-edge technology and a global presence, Agoda strives to enhance the travel experience for customers worldwide. As part of Booking Holdings and headquartered in Asia, Agoda boasts a diverse team of over 7,100 employees from 95+ nationalities across 27 markets. The work environment at Agoda is characterized by diversity, creativity, and collaboration, fostering innovation through a culture of experimentation and ownership. The core purpose of Agoda is to bridge the world through travel, believing that travel enriches lives, facilitates learning, and brings people and cultures closer together. By enabling individuals to explore and experience the world, Agoda aims to promote empathy, understanding, and happiness. As a member of the Observability Platform team at Agoda, you will be involved in building and maintaining the company's time series database and log aggregation system. This critical infrastructure processes a massive volume of data daily, supporting various monitoring tools and dashboards. The team faces challenges in scaling data collection efficiently while minimizing costs. 
In this role, you will have the opportunity to:
- Develop fault-tolerant, scalable solutions in multi-tenant environments
- Tackle complex problems in distributed and highly concurrent settings
- Enhance observability tools for all developers at Agoda
To succeed in this role, you will need:
- A minimum of 8 years of experience writing performant code in JVM languages (Java/Scala/Kotlin), Rust, or C++
- Hands-on experience with observability products like Prometheus, InfluxDB, Victoria Metrics, Elasticsearch, and Grafana Loki
- Proficiency in working with messaging queues such as Kafka
- A deep understanding of concurrency and multithreading, with an emphasis on code simplicity and performance
- Strong communication and collaboration skills
It would be great if you also have:
- Expertise in database internals, indexes, and data formats (Avro, Protobuf)
- Familiarity with observability data types like logs and metrics, and proficiency in using profilers, debuggers, and tracers in a Linux environment
- Previous experience building large-scale time-series data stores and monitoring solutions
- Knowledge of open-source components like S3 (Ceph), Elasticsearch, and Grafana
- The ability to work at a low level when required
Agoda is an Equal Opportunity Employer and maintains a policy of considering all applications for future positions. For more information about our privacy policy, please refer to our website. Please note that Agoda does not accept third-party resumes and is not responsible for any fees associated with unsolicited resumes.
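A core technique behind time-series stores like the one this team builds is downsampling: rolling raw samples into fixed-width buckets so long-range queries stay cheap. A language-agnostic sketch in Python (the bucket width and mean aggregation are illustrative choices; production stores also keep min/max/count and compress on disk):

```python
from collections import defaultdict

def downsample(points, bucket_seconds=60):
    """Collapse raw (timestamp, value) samples into per-bucket averages.

    Returns a sorted list of (bucket_start, mean_value) pairs.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each sample to the start of its bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    return sorted((start, sum(vals) / len(vals)) for start, vals in buckets.items())
```

For example, `downsample([(0, 1.0), (30, 3.0), (60, 10.0)])` merges the first two samples into the 0-60s bucket, yielding `[(0, 2.0), (60, 10.0)]`.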
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a senior data engineer, you will be responsible for working on complex data pipelines dealing with petabytes of data. The Balbix platform serves as a critical security tool for CIOs, CISOs, and sec-ops teams of small, medium, and large enterprises globally, including Fortune 10 companies. Your role will involve solving challenges related to massive cybersecurity and IT data sets by collaborating closely with data scientists, threat researchers, and network experts to address real-world cybersecurity issues. To excel in this role, you must possess excellent algorithm, programming, and testing skills gained from experience in large-scale data engineering projects. Your primary responsibilities will include designing and implementing features, along with taking ownership of modules for ingesting, storing, and manipulating large data sets to cater to various cybersecurity use-cases. You will also be tasked with writing code to provide backend support for data-driven UI widgets, web dashboards, workflows, search functionalities, and API connectors. Additionally, designing and implementing web services, REST APIs, and microservices will be part of your routine tasks. Your aim should be to build high-quality solutions that strike a balance between complexity and meeting functional requirements" acceptance criteria. Collaboration with multiple teams, including ML, UI, backend, and data engineering, will also be essential for success in this role. To thrive in this position, you should be driven to seek new experiences, learn about design and architecture, and be open to taking on progressive roles within the organization. Your ability to collaborate effectively across teams, such as data engineering, front end, product management, and DevOps, will be crucial. Being responsible and willing to take ownership of challenging problems is a key trait expected from you. 
Strong communication skills, encompassing good documentation practices and the ability to articulate thought processes in a team setting, will be essential. Moreover, you should feel comfortable working in an agile environment and exhibit curiosity about technology and the industry, demonstrating a willingness to continuously learn and grow. Qualifications for this role include an MS/BS degree in Computer Science or a related field with a minimum of three years of experience. You should possess expert programming skills in Python, Java, or Scala, along with a good working knowledge of SQL databases like Postgres and NoSQL databases such as MongoDB, Cassandra, and Redis. Experience with search-engine databases like Elasticsearch is preferred, as is familiarity with time-series databases like InfluxDB, Druid, and Prometheus. Strong fundamentals in computer science, including data structures, algorithms, and distributed systems, will be advantageous in fulfilling the requirements of this role.
Posted 3 weeks ago
6.0 - 11.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hiring: Python Developer with Grafana Expertise - Immediate Joiners Preferred
Location: Pan India
Start: July joiners only
Experience: 6+ years
What We're Looking For:
We're urgently hiring a Python Developer who also brings strong hands-on experience in Grafana dashboard development and data integration. You'll be part of a team building monitoring and observability solutions for applications developed in Python, GoLang, and Flutter, and visualizing key metrics using Grafana.
Must-Have Skills:
- Strong experience in Python development (backend)
- Deep hands-on experience with Grafana (creating dashboards, data visualization)
- Working knowledge of data sources such as Prometheus, InfluxDB, and Elasticsearch
- Experience integrating APIs or logs into dashboards
- Exposure to real-time monitoring and alerting systems
Nice-to-Have Skills:
- Basic understanding of Core Java
- Experience with GoLang or Flutter apps (optional)
- Familiarity with CI/CD pipelines
Perks:
- No client interview
- Quick onboarding
- Great project exposure in the monitoring/observability space
Posted 4 weeks ago
5.0 - 7.0 years
8 - 15 Lacs
Sholinganallur
Work from Office
We are seeking a highly skilled and experienced Senior Full Stack Developer to join our dynamic team. The ideal candidate will be proficient in both front-end and back-end technologies and capable of leading the design, development, and maintenance of scalable web applications. Key Responsibilities: Design and develop robust and scalable web applications using modern frameworks and technologies. Lead the full software development lifecycle from requirements gathering to deployment and maintenance. Collaborate with cross-functional teams including product managers, designers, and QA engineers. Optimize applications for maximum speed and scalability. Ensure code quality through test-driven development and code reviews. Stay current with emerging technologies and best practices. Key Skills & Technologies: Frontend Development: React.js, JavaScript, Next.js Backend Development: Node.js, NestJS Databases: MongoDB Other Technologies: Redis, Kafka, InfluxDB, WebSocket Additional Skills: Typescript, Architectural Design Preferred (but not required) Skills: Experience with Blockchain and Cryptocurrency technologies Knowledge of Artificial Intelligence concepts and applications Requirements: 5+ years of experience in full stack development. Strong problem-solving skills and attention to detail. Proven experience with RESTful APIs and modern application architectures. Excellent communication skills and the ability to mentor junior developers.
Posted 4 weeks ago
6.0 - 11.0 years
16 - 31 Lacs
Pune
Hybrid
Mandatory Skills:
- Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, Python, etc.
- Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems
Good-to-Have Skills:
- Database Knowledge: Strong understanding of Elasticsearch and other databases
- Core Java Knowledge: Basic knowledge of Core Java is a plus
- CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial
Posted 4 weeks ago
6.0 - 11.0 years
12 - 19 Lacs
Pune
Work from Office
Job Description:
Preferred Qualifications:
- Experience: 6-8 years of experience in software development
- Real-Time Monitoring: Familiarity with real-time monitoring solutions
- Team Collaboration: Ability to work effectively as part of a cross-functional team
Mandatory Skills:
- Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, Python, etc.
- Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems
Good-to-Have Skills:
- Database Knowledge: Strong understanding of Elasticsearch and other databases
- Core Java Knowledge: Basic knowledge of Core Java is a plus
- CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial
Detailed JD
Position Overview: We are seeking a skilled Grafana Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining Grafana dashboards to visualize operational and business data. This role requires a deep understanding of data integration, performance optimization, and user-centric design.
Key Responsibilities:
- Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, and Python
- Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems
- Performance Optimization: Optimize dashboards for performance, scalability, and real-time insights
- Stakeholder Collaboration: Work closely with stakeholders to understand their data visualization requirements and ensure dashboards meet their needs
- User-Friendly Design: Ensure dashboards are user-friendly, intuitive, and aligned with organizational goals
Required Skills:
- Grafana Expertise: Proven experience with Grafana and other data visualization tools
- Data Integration: Proficiency in integrating various data sources into Grafana
- Database Knowledge: Strong understanding of Elasticsearch and other databases
- Core Java Knowledge: Basic knowledge of Core Java is a plus
- CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial
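Grafana dashboards are ultimately JSON documents, so candidates for roles like this often generate and version-control them programmatically. A minimal Python sketch of building a two-panel dashboard with Prometheus queries (field names follow the Grafana dashboard JSON model, but this is a hedged illustration: a real export carries many more fields, such as gridPos, fieldConfig, and datasource references, which Grafana fills with defaults):

```python
import json

def panel(panel_id, title, promql):
    # A minimal panel definition with a single Prometheus target.
    return {
        "id": panel_id,
        "type": "timeseries",
        "title": title,
        "targets": [{"expr": promql, "refId": "A"}],
    }

dashboard = {
    "title": "Service Overview",
    "panels": [
        panel(1, "Request rate", "sum(rate(http_requests_total[5m]))"),
        panel(2, "p95 latency",
              "histogram_quantile(0.95, sum(rate("
              "http_request_duration_seconds_bucket[5m])) by (le))"),
    ],
}

# The serialized form is what gets committed to git or pushed to the
# Grafana HTTP API / provisioning directory.
print(json.dumps(dashboard, indent=2))
```

Keeping dashboards as generated JSON makes them reviewable in pull requests, which pairs naturally with the CI/CD experience these postings ask for.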
Posted 1 month ago
6.0 - 11.0 years
9 - 17 Lacs
Pune
Hybrid
Mandatory Skills:
- Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, Python, etc.
- Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems
Good-to-Have Skills:
- Database Knowledge: Strong understanding of Elasticsearch and other databases
- Core Java Knowledge: Basic knowledge of Core Java is a plus
- CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial
Posted 1 month ago
6.0 - 9.0 years
32 - 35 Lacs
Noida, Kolkata, Chennai
Work from Office
Dear Candidate,
We are hiring a Rust Developer to build safe, concurrent, and high-performance applications for system-level or blockchain development.
Key Responsibilities:
- Develop applications using Rust and its ecosystem (Cargo, Crates)
- Write memory-safe and zero-cost abstractions for systems or backends
- Build RESTful APIs, CLI tools, or blockchain smart contracts
- Optimize performance using async/await and the ownership model
- Ensure safety through unit tests, benchmarks, and fuzzing
Required Skills & Qualifications:
- Proficiency in Rust, lifetimes, and borrowing
- Experience with Tokio, Actix, or Rocket frameworks
- Familiarity with WebAssembly, blockchain (e.g. Substrate), or embedded Rust
- Bonus: Background in C/C++, systems programming, or cryptography
Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa Reddy, Delivery Manager, Integra Technologies
Posted 1 month ago
5.0 - 8.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are looking for people who believe in challenging the status quo and are ready to be a part of this change. If you are looking to take a leap of faith and work on the technology of the future, and if you obsess over customer satisfaction and experience, then we are looking for you.
What we do:
- We implement high-throughput data pipelines using Kafka and Java.
- We build the world's prettiest and most intuitive user interfaces using React, Angular, TypeScript, and other OSS libraries.
- We use a variety of other open-source technologies including MySQL, Redis, RocksDB, InfluxDB, and more.
- We write reusable, efficient, and highly concurrent code.
- We are proud of the technology we build, but we are not dogmatic about our techniques. We frequently re-evaluate our decisions and proactively make improvements to avoid last-minute chaos.
What you'll be doing:
- Work on highly maintainable and scalable components/systems/infra.
- Develop good, effective tools and scripts to optimize or eliminate manual processes; improve overall system reliability.
- Participate in on-call rotation and debugging during outages.
- Actively work on your own learning and development, on the tech as well as the product side.
- Exhibit ownership and accountability when it comes to timelines, system uptime, and production SLAs.
- Be data-driven: collect and build metrics for the system, infra, platform, and business.
- Mentor and guide team members.
- Exhibit the ownership and leadership skills required to become an indispensable part of the engineering team and culture.
Key Requirements:
- 5-8 years of experience in building scalable, highly critical distributed systems.
- B.Tech in Computer Science or equivalent from a reputed college.
- Excellent programming skills in Python, Go, Ruby, or any other popular language; shell scripting is expected.
- Encourages and builds automated processes wherever possible.
- Strong in networking (triaging, packet loss, routing, protocols, TCP/IP stack), OS, and Docker/containerization.
- Experience working on distributed systems with deep knowledge of fundamental principles (architectures, micro-services, high availability, elections).
- Thorough understanding of the cloud service delivery (DevOps) infrastructure ecosystem, operational processes, and orchestration models, specifically AWS.
- Hands-on experience building large, scalable CI/CD systems.
- Excellent skills in investigating and troubleshooting complicated systems/platforms and identifying key points of failure.
- Monitoring and logging best practices.
- Experience with configuration/infra provisioning management systems, specifically Ansible and Terraform.
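For the "collect & build metrics" responsibility above, a sliding-window error rate is one of the most common building blocks for alerting. A self-contained Python sketch (the class name, window size, and monotonic-clock choice are illustrative assumptions):

```python
import time
from collections import deque

class RollingErrorRate:
    """Error rate over a sliding time window -- a common alerting primitive."""

    def __init__(self, window_seconds=300.0):
        self.window = window_seconds
        self.events = deque()  # (timestamp, is_error) pairs, oldest first

    def record(self, is_error, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, is_error))
        self._evict(now)

    def rate(self, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        if not self.events:
            return 0.0
        errors = sum(1 for _, is_error in self.events if is_error)
        return errors / len(self.events)

    def _evict(self, now):
        # Drop events that have aged out of the window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()
```

Passing `now` explicitly keeps the class testable; in production the default monotonic clock is used and the computed rate is compared against an alert threshold.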
Posted 1 month ago
5.0 - 10.0 years
9 - 12 Lacs
Hyderabad
Work from Office
Greetings from IDESLABS. We are looking for a Cloud Database Engineer for contract and FTE roles.
Job details:
- Skill: Cloud Database Engineer
- Experience: 5+ years
- Location: Pan India
- Primary skills: HANA, Hadoop, Redis, Kafka, InfluxDB, Postgres, Cassandra, and MySQL; Terraform and Ansible; Python
Please share profiles with us.
Primary Responsibilities:
The Cloud Database Engineer provides 24x7 operational support and designs and creates automation for the HANA, Hadoop, Redis, Kafka, InfluxDB, Postgres, Cassandra, and MySQL instances supporting the SAP Procurement and Business Network cloud platform in public cloud, SAP Business Technology Platform, and SAP Managed Data Centers.
- Data Dynamo: Design, deploy, and maintain highly available, scalable, and secure cloud databases to support real-time data processing and fuel data-driven insights.
- Support Superhero: Provide 24x7 support, monitoring, and proactive maintenance for our critical databases, ensuring uptime and availability around the clock.
- Automation Alchemist: Leverage Terraform and Ansible to automate database provisioning, configuration, and scaling, reducing manual effort and boosting efficiency.
- Python Wizardry: Harness the power of Python to develop custom scripts, automation tools, and data pipelines that streamline database operations and enhance performance.
- Git Guardian: Collaborate with development teams to version-control database changes effectively using Git, ensuring seamless integration and traceability.
- Performance Maestro: Dive deep into data performance analysis, fine-tuning queries, optimizing database configurations, and collaborating with developers to achieve peak efficiency.
- Security Sentinel: Implement robust access controls, encryption, and monitoring to safeguard our data assets, maintaining the highest levels of data security.
- Collaboration Virtuoso: Work hand-in-hand with cross-functional teams, offering database expertise and driving successful integration with applications and analytics platforms.
Qualifications:
- Database Mastery: Extensive experience managing databases, including SAP HANA, Hadoop, Redis, Cassandra, MySQL, and InfluxDB, with a strong understanding of relational and NoSQL databases.
- Cloud Expertise: Proven proficiency in cloud database management on major platforms (AWS, Azure, or GCP), with the ability to optimize database performance in a cloud environment.
- Automation Wizardry: Hands-on experience with infrastructure automation tools like Terraform and configuration management tools like Ansible, coupled with Python proficiency for automation and Git for source code management.
- Innovation Mindset: A track record of innovation, staying abreast of industry trends, and a passion for exploring and integrating new technologies.
- Problem-Solving Ninja: Demonstrated ability to identify complex technical issues, devise creative solutions, and implement effective problem-solving strategies.
- Collaborative Spirit: Excellent communication skills and the ability to work collaboratively with cross-functional teams in an agile environment.
- A Bachelor's degree in Computer Science, Engineering, or a related field; advanced degrees are a plus.
- 3+ years of experience in database administration.
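A recurring task in 24x7 database operations like those described above is a cheap fleet-wide liveness check before deeper diagnostics. A minimal Python sketch using only the standard library (the hostnames and the services-to-ports mapping are hypothetical; real inventories would come from service discovery or CMDB, and a TCP probe is only a first-line check, not a substitute for query-level health checks):

```python
import socket

# Hypothetical endpoints for illustration only.
DB_ENDPOINTS = {
    "postgres": ("db01.example.internal", 5432),
    "redis": ("cache01.example.internal", 6379),
    "kafka": ("broker01.example.internal", 9092),
}

def tcp_reachable(host, port, timeout=2.0):
    """Cheap liveness probe: can a TCP connection be opened to the port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusal, timeout, and DNS failure
        return False

def fleet_status(endpoints):
    """Map each service name to a reachable/unreachable flag."""
    return {name: tcp_reachable(host, port)
            for name, (host, port) in endpoints.items()}

print(fleet_status({"local-loopback": ("127.0.0.1", 22)}))
```

In practice a script like this would run on a schedule and feed its results into the monitoring and alerting pipeline rather than printing them.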
Posted 1 month ago