
1154 Prometheus Jobs - Page 6

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a Software Development Engineer (SDE) at our company, you will be part of a highly competent engineering team that impacts millions of lives. You will collaborate with the entire engineering team to solve problems and enhance the back-end architecture of our growing product, working on existing codebases as well as new projects. Our environment is engineering-driven and your contributions will have an immediate impact. We use a variety of tools and technologies for back-end development, including Golang, Java, Postgres, Aerospike, Redis, and Kafka.

As an SDE, your responsibilities will include developing highly concurrent and distributed systems, optimizing performance, ensuring high availability, designing and testing new features, supporting releases, and estimating development efforts. You will also help define coding standards and processes while remaining open to learning and adapting to different technologies.

The ideal candidate has hands-on experience in Golang with a focus on production-grade systems. Experience with highly concurrent, distributed architectures, strong knowledge of Data Structures & Algorithms, and familiarity with building HTTP- and gRPC-based services are essential. You should be comfortable working in a *nix environment, adept at problem solving, and able to communicate effectively with team members. Debugging skills, clear documentation, unit testing, and integration testing are crucial aspects of this role. Proficiency in SQL and NoSQL databases such as MySQL, Postgres, Redis, Elasticsearch, and MongoDB, along with REST, is required. Knowledge of code versioning tools like Git, deployment on cloud platforms (AWS, GCP) using tools like Jenkins, Ansible, Consul, and NATS, and familiarity with StatsD, OpenTracing, and Prometheus are considered advantageous. If you are passionate about building scalable systems, enjoy working in a collaborative environment, and want to make a real impact, we encourage you to apply for this exciting opportunity as an SDE.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Principal Engineer / Architect at our organization, you will combine deep technical expertise with strategic thinking to design and implement scalable, secure, and modern digital systems. This senior technical leadership role requires hands-on architecture experience, a strong command of cloud-native development, and a successful track record of leading teams through complex solution delivery. You will collaborate with cross-functional teams including engineering, product, DevOps, and business stakeholders to define technical roadmaps, ensure alignment with enterprise architecture principles, and guide platform evolution.

Key Responsibilities:

Architecture & Design:
- Lead the design of modular, microservices-based, and secure architecture for scalable digital platforms.
- Define and enforce cloud-native architectural best practices using Azure, AWS, or GCP.
- Prepare high-level design artefacts, interface contracts, data flow diagrams, and service blueprints.

Cloud Engineering & DevOps:
- Drive infrastructure design and automation using Terraform or CloudFormation.
- Support Kubernetes-based container orchestration and efficient CI/CD pipelines.
- Optimize for performance, availability, cost, and security using modern observability stacks and metrics.

Data & API Strategy:
- Architect systems that handle structured and unstructured data with performance and reliability.
- Design APIs with reusability, governance, and lifecycle management in mind.
- Guide caching, query optimization, and stream/batch data pipelines across the stack.

Technical Leadership:
- Act as a hands-on mentor to engineering teams, leading by example and resolving architectural blockers.
- Review technical designs, codebases, and DevOps pipelines to uphold engineering excellence.
- Translate strategic business goals into scalable technology solutions with pragmatic trade-offs.

Key Requirements:

Must Have:
- 5+ years in software architecture or principal engineering roles with real-world system ownership.
- Strong experience in cloud-native architecture with AWS, Azure, or GCP (certification preferred).
- Programming experience with Java, Python, or Node.js, and frameworks like Flask, FastAPI, and Celery.
- Proficiency with PostgreSQL, MongoDB, Redis, and scalable data design patterns.
- Expertise in Kubernetes, containerization, and GitOps-style CI/CD workflows.
- Strong foundation in Infrastructure as Code (Terraform, CloudFormation).
- Excellent verbal and written communication; proven ability to work across technical and business stakeholders.

Nice to Have:
- Experience with MLOps pipelines, observability stacks (ELK, Prometheus/Grafana), and tools like MLflow and Langfuse.
- Familiarity with Generative AI frameworks (LangChain, LlamaIndex) and vector databases (Milvus, ChromaDB).
- Understanding of event-driven, serverless, and agentic AI architecture models.
- Python libraries such as pandas, NumPy, and PySpark, and support for multi-component pipelines (MCP).

Preferred:
- Prior experience leading technical teams in regulated domains (finance, healthcare, govtech).
- Cloud security, cost optimization, and compliance-oriented architectural mindset.

What You'll Gain:
- Work on mission-critical projects using the latest cloud, data, and AI technologies.
- Collaborate with a world-class, cross-disciplinary team.
- Opportunities to contribute to open architecture, reusable frameworks, and technical IP.
- Career advancement via leadership, innovation labs, and enterprise architecture pathways.
- Competitive compensation, flexibility, and a culture that values innovation and impact.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DevOps Engineer, you will be responsible for designing, implementing, and managing CI/CD pipelines using GitHub and Azure CI/CD. You will utilize Docker for containerization and Azure Kubernetes Service (AKS) for container orchestration. Implementing Infrastructure as Code (IaC) using Terraform to manage and provision cloud resources will be a key part of your role. Additionally, you will manage secrets and sensitive data using Azure Key Vault and set up logging and monitoring using Prometheus, Grafana, and Azure Monitor. Ensuring high availability and reliability by configuring and managing load balancers will be crucial, as will implementing and managing Azure API Management to handle API traffic.

Collaboration with development teams to ensure applications are designed for continuous delivery and operational stability is essential. You will be expected to identify and resolve issues related to system performance, reliability, and scalability, and to participate in on-call rotations to provide 24/7 support for production systems. Continuously discovering, evaluating, and implementing new technologies to improve development efficiency and application performance is a key aspect of the role, as is documenting processes and best practices to facilitate knowledge sharing across the team.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience, along with proven experience as a DevOps Engineer or in a similar role. Strong proficiency in GitHub and Azure CI/CD pipelines is required, and experience with Docker and container orchestration using Azure Kubernetes Service (AKS) is a must-have. Expertise in Infrastructure as Code (IaC) using Terraform is highly desirable. Familiarity with Azure Key Vault for managing secrets and proficiency in monitoring and logging tools such as Prometheus, Grafana, and Azure Monitor are crucial, as is experience with load balancers and Azure API Management. Strong problem-solving skills, attention to detail, and excellent communication and teamwork skills are essential for success in this role.
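
For the monitoring responsibilities above, one common pattern is to expose custom application metrics that Prometheus can scrape and Grafana can chart. A minimal sketch in Python using the prometheus_client library; the metric names, port, and simulated workload are illustrative assumptions, not details from the posting:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; label sets kept small to limit cardinality.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint", "status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    # Time the "work" and record its outcome.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.2))  # simulated work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for Prometheus to scrape
    while True:
        handle_request("/api/orders")
```

A Prometheus scrape job pointed at port 8000 would then feed dashboards and alert rules in Grafana or Azure Monitor.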

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Cloud Performance QA Engineer at Tarana Wireless India, you will play a crucial role in evaluating the scalability, responsiveness, and resilience of the Tarana Cloud Suite, which encompasses cloud microservices, databases, and real-time communication with intelligent radio devices. Your responsibilities will include conducting performance, load, stress, and soak testing, as well as chaos testing and fault injection to ensure the robustness of the system under real-world and failure conditions. You will collaborate closely with development, DevOps, and SRE teams to proactively identify and address performance issues, analyze bottlenecks, and simulate production-like environments. Your work will involve a deep understanding of system internals, cloud infrastructure (AWS), and modern observability tools, and will directly impact the quality, reliability, and scalability of the next-gen wireless platform developed by Tarana.

Key Responsibilities:
- Understand the Tarana Cloud Suite architecture, including microservices, UI, data/control flows, databases, and the AWS-hosted runtime.
- Design and implement robust load, performance, scalability, and soak tests using tools like Locust, JMeter, or similar (see the sketch after this listing).
- Set up and manage scalable test environments on AWS to mimic production loads.
- Build and maintain performance dashboards using Grafana, Prometheus, or other observability tools.
- Analyze performance test results and infrastructure metrics to identify bottlenecks and optimization opportunities.
- Integrate performance testing into CI/CD pipelines for automated baselining and regression detection.
- Collaborate with cross-functional teams to define SLAs, set performance benchmarks, and resolve performance-related issues.
- Conduct resilience and chaos testing using fault injection tools to validate system behavior under stress and failures.
- Debug and root-cause performance degradations using logs, APM tools, and resource profiling.
- Tune infrastructure parameters for improved efficiency.

Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-8 years of experience in Performance Testing/Engineering.
- Hands-on expertise with Locust, JMeter, or equivalent load-testing tools.
- Strong experience with AWS services such as EC2, ALB/NLB, CloudWatch, EKS/ECS, S3, etc.
- Familiarity with Grafana, Prometheus, and APM tools like Datadog, New Relic, or similar.
- Proficiency in scripting and automation (Python preferred) for custom test scenarios and analysis.
- Experience with testing and profiling REST APIs, web services, and microservices-based architectures.
- Exposure to chaos engineering tools or fault injection practices.
- Experience with CI/CD tools and integrating performance tests into build pipelines.

Nice to Have:
- Experience with Kubernetes-based environments and container orchestration.
- Knowledge of infrastructure-as-code tools.
- Background in network performance testing and traffic simulation.
- Experience in capacity planning and infrastructure cost optimization.

Join Tarana Wireless India's QA team and contribute to the advancement of fast, affordable internet access globally through cutting-edge technology and innovative solutions.
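
The posting names Locust among the expected load-testing tools. As a minimal illustration of a Locust test file, with hypothetical endpoints and task weights (the real API paths are not given in the posting):

```python
from locust import HttpUser, task, between

class CloudSuiteUser(HttpUser):
    # Each simulated user waits 0.5-2 s between tasks.
    wait_time = between(0.5, 2)

    @task(3)  # weighted: list calls happen three times as often as detail calls
    def list_devices(self):
        self.client.get("/api/v1/devices", name="GET /devices")

    @task(1)
    def device_detail(self):
        self.client.get("/api/v1/devices/123", name="GET /device/:id")
```

Run it against a staging host, for example: `locust -f loadtest.py --host https://staging.example.com --users 200 --spawn-rate 20`.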

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Job Description: As a Database Administrator in our Tech-Support department, you will play a crucial role in setting up, configuring, administering, and maintaining various production and development environments. These environments may consist of both relational databases such as SQL Server, PostgreSQL, and MySQL, as well as NoSQL databases like MongoDB or others. You will be based at our Noida office and should have 1-2 years of relevant experience in this field.

Your primary responsibility will be to collaborate closely with the tech team to design, build, and operate the database infrastructure. You will provide support to the tech team in identifying optimal solutions for data-related issues such as data modeling, reporting, and data retrieval. Additionally, you will work alongside deployment staff to address any challenges related to the database infrastructure effectively.

Requirements:
- Ideally, you should hold a BE/B.Tech degree from a reputed institute.
- Proficiency in SQL Server/PostgreSQL database administration, maintenance, and tuning is essential.
- Experience with database clustering, high availability, replication, backup, auto-recovery, and pooling (such as pgpool2) is required.
- Strong familiarity with logging and monitoring tools like Nagios, Prometheus, pgBadger, POWA, Datadog, etc., is preferred.
- Expertise in analyzing complex execution plans and optimizing queries is a must (see the sketch after this listing).
- Good understanding of various database concepts including indices, views, partitioning, aggregation, window functions, and caching.
- Up-to-date knowledge of the latest features in PostgreSQL versions 11/12 and above.
- Experience working with AWS and its database-related services like RDS is a necessity.
- Previous exposure to other databases like Oracle and MySQL will be advantageous.
- Familiarity with NoSQL technologies is a plus.

If you meet these requirements and are interested in this role, please share your updated profile with us at hr@techybex.com.
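
Analyzing execution plans is often scripted rather than done by hand. A small, hedged sketch using psycopg2 against PostgreSQL; the connection string and query are placeholders, and note that EXPLAIN ANALYZE actually executes the statement:

```python
import psycopg2

# Connection details are placeholders, not values from the posting.
conn = psycopg2.connect("dbname=appdb user=dba host=localhost")

query = (
    "SELECT customer_id, sum(amount) "
    "FROM orders "
    "WHERE created_at > now() - interval '7 days' "
    "GROUP BY customer_id"
)

with conn, conn.cursor() as cur:
    # ANALYZE runs the query for real; BUFFERS adds shared/local buffer hit stats.
    cur.execute("EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) " + query)
    for (line,) in cur.fetchall():
        print(line)
```

Reading the resulting plan (sequential scans, misestimated row counts, spilled sorts) is what usually points to the missing index or rewrite.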

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

About Worx-AI:
Worx-AI is a boutique IT consultancy specializing in Generative AI, predictive analytics, and systems integration. We collaborate with Fortune 500 clients to provide intelligent, scalable solutions that drive enterprise transformation. Our team combines profound technical expertise with strategic insights to construct robust AI/ML systems, modern data platforms, and cloud-native infrastructure.

Role Overview:
We seek a proficient DataStax Engineer to work across our organization with Cloud Engineering, Cloud Operations, and cross-platform teams. This role is vital for ensuring the delivery of cloud resources in accordance with established standards, with a focus on both Azure and AWS platforms. The position offers the chance to implement DataStax (Cassandra, Astra DB, Luna) on impactful AI and data engineering projects for a Fortune 50 client.

Key Responsibilities:
- Design, deploy, and manage DataStax Hyper-Converged Database (HCD) to underpin high-performance, scalable data architectures for AI and analytics workloads.
- Fine-tune configurations to guarantee efficient data storage, retrieval, and processing, in alignment with organizational performance and scalability requirements.
- Monitor and uphold performance levels, applying tuning and troubleshooting methodologies to minimize latency and ensure high availability.
- Create and update documentation on setups, configurations, and best practices to facilitate knowledge sharing and operational uniformity.
- Coordinate with infrastructure and DevOps teams to roll out and maintain cloud-native environments on AWS and/or Azure.
- Work with data engineering and AI teams to integrate DataStax solutions with machine learning pipelines and real-time data processing frameworks.
- Execute data modeling, indexing, and query optimization to assure system performance and reliability (see the sketch after this listing).
- Enforce security, backup, monitoring, and cost optimization strategies across DataStax deployments.
- Integrate DataStax into contemporary data stacks utilizing tools like Kafka, Spark, Kubernetes, etc.
- Automate infrastructure and deployments using tools such as Terraform or other IaC frameworks.

Qualifications:
- 5+ years of experience in backend development or data engineering roles.
- 3+ years of experience with Apache Cassandra, DataStax Astra DB, or DataStax Enterprise.
- Hands-on experience with cloud-native environments on AWS and/or Azure.
- Proficient in scripting (Python, Bash, etc.) and infrastructure automation (Terraform, CloudFormation, etc.).
- Strong understanding of distributed systems and NoSQL data modeling.
- Experience working in hybrid cloud or containerized environments (Kubernetes preferred).
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Relevant certifications in DataStax, AWS, or Azure.
- Experience supporting AI/ML platforms and data science workflows.
- Familiarity with CI/CD tools, observability platforms (Grafana, Prometheus), and streaming data architectures.
- Previous involvement in consulting or direct engagement with enterprise clients is highly beneficial.

Why Worx-AI:
- Engage with cutting-edge technologies and prestigious clients.
- Become part of a collaborative, flexible team that prioritizes innovation and high-quality delivery.
- Hybrid work model offering the advantages of both remote and in-person collaboration.
- Competitive hourly rate of 3,000 and prospects for long-term commitments.

Location: Hyderabad, India (Hybrid - 50% Remote)
Employment Type: Contract
Rate: 3,000/hour
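
As a rough illustration of the data modeling and query work described, here is a hedged Python sketch using the DataStax/Apache Cassandra Python driver (cassandra-driver). The contact points, keyspace, and table are hypothetical, and the keyspace is assumed to already exist:

```python
import uuid

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2"])  # placeholder contact points
session = cluster.connect("telemetry")       # hypothetical keyspace

# Partition by device so reads for one device stay on a single partition,
# clustered by timestamp descending so "latest readings" queries are cheap.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        device_id uuid,
        ts timestamp,
        value double,
        PRIMARY KEY ((device_id), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

device_id = uuid.uuid4()  # normally an existing device id
stmt = SimpleStatement("SELECT ts, value FROM readings WHERE device_id = %s LIMIT 100")
for row in session.execute(stmt, (device_id,)):
    print(row.ts, row.value)

cluster.shutdown()
```

The key design choice is shaping tables around the read path (query-first modeling) rather than normalizing, which is the usual Cassandra/DataStax pattern.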

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Job Summary: We are looking for a Junior Site Reliability Engineer (SRE) with strong Java coding and debugging skills to help maintain the reliability, performance, and scalability of our critical systems. As a Junior SRE, you will work closely with senior engineers to monitor systems, automate processes, and enhance infrastructure reliability. This role is ideal for candidates passionate about Java, DevOps, cloud technologies, and automation in a fast-paced environment. Experience: 2-4 years.

Key Responsibilities:

System Reliability & Performance:
- Monitor and maintain the availability of key services and applications.
- Participate in defining and improving SLIs, SLOs, and SLAs for system reliability (see the sketch after this listing).
- Identify and resolve performance bottlenecks and system inefficiencies.

Incident Management & Monitoring:
- Assist in incident response, troubleshooting production issues, and conducting root cause analysis (RCA).
- Work on improving monitoring, logging, and alerting systems using tools like Prometheus, Grafana, and Elastic APM.
- Participate in on-call rotations and incident handling.

Java Coding & Debugging:
- Write and debug Java-based applications to enhance system reliability.
- Analyze logs, troubleshoot performance issues, and optimize Java services.
- Gain hands-on experience with JVM monitoring, thread dumps, and heap analysis.
- Work closely with developers to improve the reliability of Java applications.

Automation & Infrastructure:
- Work with infrastructure as code (IaC) using Helm or Ansible.
- Optimize system configurations for scalability and reliability.
- Automate operational tasks to improve system efficiency.

Collaboration & Learning:
- Work closely with software engineers and senior SREs to enhance system reliability.
- Continuously develop knowledge in cloud computing (AWS, Azure, GCP), Kubernetes, and DevOps practices.

Skills & Qualifications:

Required Skills:
- Strong Java programming and debugging skills (must-have).
- Experience with Linux systems, networking, and cloud platforms (AWS, Azure, or GCP).
- Familiarity with monitoring tools like Prometheus, Grafana, or New Relic.
- Experience troubleshooting and analyzing Java application performance.
- Strong problem-solving skills and ability to analyze system issues.

Preferred Skills (Nice to Have):
- Scripting ability in Python, Bash, or Go for automation.
- Exposure to Kubernetes and containerization concepts.
- Experience with infrastructure-as-code tools like Terraform or Ansible.

Why Join Us?
- Work with experienced SREs and gain hands-on experience with modern DevOps practices.
- Learn and grow in a collaborative and innovative environment.
- Gain exposure to cutting-edge cloud, Java, and automation technologies.
- Opportunity for career growth into senior SRE roles.
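
SLI/SLO work with Prometheus is often backed by small automation scripts. A hedged Python sketch that queries the Prometheus HTTP API to compute an availability SLI; the Prometheus address, metric names, and SLO target are assumptions for illustration:

```python
import requests

PROM_URL = "http://prometheus.internal:9090"  # placeholder address

# Availability SLI over 30 days: non-5xx requests / all requests (metric names assumed).
QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[30d])) '
    "/ sum(rate(http_requests_total[30d]))"
)
SLO = 0.999  # example target

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

# An instant vector query returns [timestamp, value-as-string] per series.
availability = float(result[0]["value"][1]) if result else 0.0
status = "OK" if availability >= SLO else "SLO BREACH"
print(f"Availability: {availability:.5f} (target {SLO}) -> {status}")
```

The same query, expressed as a Prometheus recording/alerting rule, is what typically drives the error-budget dashboards mentioned above.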

Posted 1 week ago

Apply

3.0 - 7.0 years

13 - 17 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Title: SDET Automation Test Engineer - Capital Markets
Location: Bangalore / Pune
Work Mode: Hybrid (3 days WFO - Tues, Wed, Thurs)
Shift Time: 2 PM - 11 PM IST

Capco is seeking an SDET Engineer for the Securities Finance technology group within the Equities Prime Services team. This is a high-impact role where you'll design, develop, and maintain automation frameworks for critical trading and finance applications. You'll collaborate with global teams across New York, London, and Hong Kong to deliver scalable, high-quality solutions in a fast-paced Agile environment.

Primary Responsibilities:
- Build and maintain robust test automation frameworks for UI, API, and backend systems.
- Develop and execute test plans, test cases, and test scripts, both manual and automated.
- Integrate automated tests into CI/CD pipelines using Jenkins, GitLab CI, or GitHub Actions.
- Perform regression, integration, and system testing across front-end and back-end components.
- Collaborate with developers and DevOps engineers to resolve defects and optimize performance.
- Analyze requirements and design logical, comprehensive test scenarios.
- Participate in Agile ceremonies: sprint planning, stand-ups, retrospectives, and backlog grooming.
- Maintain test environments and infrastructure using IaC tools like Terraform or Ansible.
- Contribute to performance and security testing initiatives.

Desired Experience / Skills:
- 4-6 years of hands-on experience in QA automation and DevOps.
- Strong experience with Java, Cucumber, Selenium, and REST Assured.
- Familiarity with Tosca, JMeter, and performance testing tools.
- Solid understanding of the SDLC, Agile methodologies, and test-driven development.
- Proficiency in SQL and database systems (e.g., MySQL, SQL Server).
- Experience with Jira, Bitbucket, and version control best practices.
- Exposure to financial services or trading systems is a strong plus.

Preferred:
- Creating test strategies and test plans in Agile environments.
- Experience with functional, regression, and system testing.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- ISTQB or relevant certifications.

If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.

Posted 1 week ago

Apply

3.0 - 5.0 years

14 - 19 Lacs

Bengaluru

Work from Office

Educational Requirements: Master of Science (Technology), Master of Computer Applications, Master of Engineering, Bachelor of Computer Applications, Bachelor of Science (Tech), Bachelor of Engineering, Bachelor of Technology (Integrated)

Service Line: Application Development and Maintenance

Responsibilities: A day in the life of an Infoscion - As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data; awareness of the latest technologies and trends; logical thinking and problem-solving skills along with an ability to collaborate; ability to assess current processes, identify improvement areas, and suggest technology solutions; knowledge of one or two industry domains.

Technical and Professional Requirements: Primary skills: Technology - DevOps - DevOps Architecture Consultancy

Preferred Skills: Technology - DevOps - DevOps Architecture Consultancy; Technology - DevOps - Continuous Integration - Others

Posted 1 week ago

Apply

4.0 - 8.0 years

7 - 12 Lacs

Pune

Work from Office

Responsible for IT infrastructure cross-platform technology areas, demonstrating design and build expertise. Responsible for developing, architecting, and building AWS cloud services with best practices, blueprints, patterns, high availability, and multi-region disaster recovery. Strong communication and collaboration skills are expected.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5 years of experience.
- Must have 3+ years of relevant experience in Python/Java, AWS, and Terraform/IaC.
- Experience in Kubernetes, Docker, and shell scripting.
- Strong Python scripting skills, well beyond writing small ad-hoc scripts.

Preferred technical and professional experience:
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube.
- Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic.
- Experience using monitoring solutions like CloudWatch, the ELK Stack, and Prometheus.
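
Given the emphasis on Python, AWS, and CloudWatch-style monitoring, a hedged sketch of the kind of scripting involved, publishing a custom metric and an alarm with boto3; the region, namespace, metric name, and threshold are illustrative assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

# Publish a custom metric, e.g. from a deployment or health-check script.
cloudwatch.put_metric_data(
    Namespace="MyApp/Deployments",  # hypothetical namespace
    MetricData=[{
        "MetricName": "DeploymentDurationSeconds",
        "Value": 42.0,
        "Unit": "Seconds",
        "Dimensions": [{"Name": "Environment", "Value": "staging"}],
    }],
)

# Alarm if the 5-minute average exceeds an example threshold.
cloudwatch.put_metric_alarm(
    AlarmName="staging-deployment-too-slow",
    Namespace="MyApp/Deployments",
    MetricName="DeploymentDurationSeconds",
    Dimensions=[{"Name": "Environment", "Value": "staging"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=120.0,
    ComparisonOperator="GreaterThanThreshold",
)
```

In practice the alarm itself would usually be declared in Terraform alongside the rest of the infrastructure; the script form is handy for ad-hoc instrumentation.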

Posted 1 week ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Mysuru

Work from Office

The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall 12+ years of experience required.
- Good exposure to operational aspects (monitoring, automation, remediation), including monitoring tools such as New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc.
- Troubleshooting, documenting root cause analysis, and automating incident handling.
- Understands the architecture, the SRE mindset, and the data model.
- Platform architecture and engineering: the ability to design and architect a cloud platform that meets client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design the cloud platform.

Preferred technical and professional experience:
- Effective communication with business and technical team members.
- Creative problem-solving skills and superb communication skills.
- Telecom domain experience is an added plus.

Posted 1 week ago

Apply

0.0 years

5 - 7 Lacs

Noida, New Delhi, Gurugram

Work from Office

Roles and Responsibilities:
- Design, implement, and maintain CI/CD pipelines utilizing Azure DevOps.
- Develop infrastructure as code (IaC) using Terraform for deployment and configuration management in Azure.
- Monitor and enhance application and infrastructure security within Azure environments.
- Enable automated testing using Azure DevOps and SonarQube for code quality management.
- Collaborate with development and operations teams to streamline and automate workflows.
- Troubleshoot and resolve issues in development, test, and production environments.
- Continuously evaluate and implement improvements to optimize performance, scalability, and efficiency.

Qualifications:
- Proven experience with Azure DevOps for CI/CD pipelines.
- Strong proficiency in Terraform for infrastructure provisioning and management in Azure.
- In-depth knowledge of Azure services (VMs, App Services, Storage, etc.).
- Experience integrating and configuring SonarQube for code quality assessment.
- Proficiency in scripting languages like PowerShell, YAML, Python, and shell scripting.
- Solid understanding of DevOps best practices and methodologies.
- Ability to troubleshoot complex issues and provide effective solutions.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.

Preferred candidate profile:
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with monitoring tools (e.g., Prometheus, Grafana).
- Knowledge of agile development methodologies.
- Certification in Azure (e.g., Azure Administrator Associate, Azure DevOps Engineer Expert) is optional.

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Mumbai

Work from Office

We are looking for an experienced DevOps Engineer to join our infrastructure and platform team. You will play a key role in designing, implementing, and maintaining our CI/CD pipelines, automating infrastructure, ensuring system reliability, and improving overall developer productivity. The ideal candidate is well-versed in on-prem and cloud platforms, infrastructure as code, and modern DevOps practices.

Role & responsibilities:
- Design, build, and maintain CI/CD pipelines using tools like Jenkins and GitLab CI.
- Automate infrastructure provisioning and configuration using Terraform, Ansible, or CloudFormation.
- Manage and monitor production and staging environments across on-prem and cloud platforms (AWS).
- Implement containerization and orchestration using Docker and Kubernetes.
- Ensure system availability, scalability, and performance via monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, Datadog).
- Maintain and improve infrastructure security, compliance, and cost optimization.
- Collaborate with development, QA, and security teams to streamline code deployment and feedback loops.
- Participate in on-call rotations and troubleshoot production incidents.
- Write clear and maintainable documentation for infrastructure, deployments, and processes.

Preferred candidate profile:
- 3-15 years of experience in DevOps, SRE, or infrastructure engineering.
- Proficiency in scripting languages like Bash, Python, or Go.
- Strong hands-on experience with cloud platforms (preferably AWS).
- Deep understanding of the Docker and Kubernetes ecosystem.
- Experience with infrastructure automation tools such as Ansible, Terraform, or Chef.
- Familiarity with source control (Git), branching strategies, and code review practices.
- Solid experience with Linux administration, system performance tuning, and troubleshooting.
- Knowledge of networking concepts, load balancers, VPNs, DNS, and firewalls.
- Experience with monitoring/logging tools like Prometheus, Grafana, ELK, Splunk, Datadog, or Nagios, and log shippers like Filebeat, Fluentd, and Fluent Bit.
- Familiarity with security tools like Vault, AWS IAM, or cloud workload protection.
- Experience in high-availability, multi-region architecture design.
- Strong understanding of creating RPM packages and Yum repositories.
- Strong understanding of JMeter scripting and test case writing.
- Strong understanding of artifact repository managers (JFrog Artifactory, Nexus, Maven, npm, nvm).
- Installation of open-source / enterprise tools from source files or RPM packages.
- Strong understanding of the tech stack (Redis, MySQL, Nginx, RabbitMQ, Tomcat, Apache, JBoss).
- Implementation of cloud-native solutions including load balancers, VPCs, IAM, Auto Scaling Groups, CDNs, S3, Route 53, etc.
- SAST tools like SonarQube, Checkmarx, and JFrog Xray.
- Expertise in configuring and upgrading API gateways, preferably Google Apigee, Kong, etc.

Posted 1 week ago

Apply

13.0 - 17.0 years

0 Lacs

Karnataka

On-site

As the Head of Quality Assurance at Commcise, located in Bangalore, you will play a crucial role in managing testing activities to ensure the best user product experience. With 13-15 years of relevant experience, you will need to have an Engineering or IT degree. Strong expertise in software testing concepts and methodologies, along with excellent communication skills and technical aptitude, especially in automation, will be essential for this role.

Your responsibilities will include having a deep understanding of capital markets, trading platforms, wealth management, and regulatory frameworks such as MiFID, SEC, SEBI, and FCA. Experience with financial instruments and post-trade processes will also be necessary. You will be required to define and implement comprehensive testing strategies covering functional and non-functional testing, as well as to develop test governance models and enforce QA best practices.

Your role will involve a strong grasp of programming concepts, coding standards, and test frameworks in languages like Java, Python, and JavaScript. Expertise in test automation frameworks such as Selenium and Appium, as well as API testing and knowledge of connectivity protocols, will be advantageous. Understanding AI and machine learning applications in test automation and driving AI-driven automation initiatives will be part of your responsibilities. Experience in continuous testing within CI/CD pipelines, knowledge of infrastructure as code and cloud platforms, and familiarity with observability tools for real-time monitoring will also be required. You should have expertise in performance testing tools and security testing methodologies, along with experience in resilience testing and chaos engineering.

Strong leadership skills, team development abilities, and stakeholder management across various teams will be crucial in this role, as will an Agile mindset, leading Agile testing transformations, and implementing BDD/TDD practices. Strong strategic planning and execution skills, along with a willingness to be hands-on when required, will be essential for driving collaborative test strategies. This role offers an opportunity to work in a dynamic environment and contribute significantly to ensuring the quality and reliability of products in the financial technology industry.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You should have a Bachelor's degree in Computer Science or a related field, or equivalent experience, with at least 3 years of experience in a similar role. You must be proficient in at least one backend programming language such as Java, Python, or Go, and have hands-on experience with cloud platforms like AWS, Azure, or GCP.

A strong understanding of DevOps principles and practices is essential for this role, along with experience in containerization technologies like Docker and Kubernetes. You should also be familiar with configuration management tools such as Ansible, Puppet, or Chef, and have worked with CI/CD tools like Jenkins or GitLab CI. Excellent problem-solving and troubleshooting skills are a must, along with strong communication and collaboration abilities. Previous experience with databases like PostgreSQL or MySQL, as well as monitoring and logging tools such as Prometheus, Grafana, and the ELK stack, is required. Knowledge of security best practices and serverless technologies will be beneficial for this position.

This job opportunity was posted by Ashok Kumar Samal from HDIP.

Posted 1 week ago

Apply

15.0 - 19.0 years

0 Lacs

Karnataka

On-site

The responsibilities of the role include partnering with and acting as a trusted advisor to partners in both Consulting Sales and Delivery to assist in defining and delivering high-quality, enterprise-capable solutions. You will work closely with team members to develop practical roadmaps for moving the enterprise towards the future state vision, considering business, technical, and delivery constraints. Analyzing partner requirements, current state architecture, and gaps to create a future state architecture vision for parts of the enterprise, with a focus on reduced complexity, cost efficiencies, reuse, convergence, reduced risk, and/or improved business capabilities, is a key aspect of the role.

Additionally, you will participate in defining and operating the architecture governance process to ensure change initiatives align with the vision and roadmaps. Working closely with Domain Architects across key initiatives and projects to apply architecture principles and standards, and to develop reference architectures and design patterns, is also part of the responsibilities. Communicating principles, standards, vision, and roadmaps to partners and proactively addressing any questions or concerns identified is essential. Providing thought leadership on architectural topics and developing a forward-looking view of current and emerging technologies and their impact on Enterprise Architecture are also important aspects of the role. Embedding Platform Thinking in all activities, owning and enhancing workflows and processes, promoting an environment of learning and development, and fostering the professional growth of team members are key responsibilities.

The ideal candidate will possess a Bachelor's Degree in Engineering, Computer Science, or equivalent, with a Master's degree in Business or Technology being an advantage. A formal architecture certification such as TOGAF or equivalent is required. Candidates should have at least 15 years of experience in the IT industry, preferably in large, complex enterprises, with at least 7 years of experience in Enterprise Architecture in a large, multi-location, multi-national environment. Deep experience in delivering enterprise-scale IT solutions in a heterogeneous technology environment is necessary. Demonstrated expertise in Application Architecture, including EAI, microservices, and cloud-native technologies, as well as experience in domain-driven and event-driven architecture and technologies such as Kafka and Spark, is preferred. Experience with architecting, designing, and developing large-scale retail and business banking solutions using open systems, messaging, dedicated DB solutions, log analysis, log-based monitoring, and metrics-driven monitoring is desired. Familiarity with standard process methodologies, formal architecture frameworks/methodologies, architecture governance frameworks, and heterogeneous technology platforms is expected. A solid understanding of all domains of Enterprise Architecture and practical experience in data modeling, object modeling, design patterns, and Enterprise Architecture tools are required. The candidate should have experience leading teams in the successful deployment of applications built on cloud or on-prem enterprise environments for large Tier-1 banks and financial institutions. Experience with migrating from legacy applications to new solutions while ensuring minimal downtime, reduced risk, and an excellent customer experience is beneficial. IT strategy consulting experience is an advantage.

Excellent verbal, written, and presentation skills are necessary for effectively communicating complex topics. The candidate should be able to think conceptually, identify patterns across different situations, drive consensus among partners with conflicting viewpoints, and manage people and teams effectively. Collaboration skills and the ability to motivate diverse teams are essential for success in this role.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

As a highly skilled Backend Developer, you will utilize your expertise in Kotlin and Java to design, develop, and deploy scalable backend services and microservices for modern cloud-native applications. Your key responsibilities will include building RESTful APIs, deploying applications on AWS, containerizing services using Docker and Kubernetes, implementing monitoring solutions, and optimizing performance and reliability.

You will be expected to work closely with frontend developers, DevOps engineers, and product managers to ensure seamless integration and functionality. Your strong programming experience in Kotlin and Java, along with knowledge of RESTful APIs, AWS services, Kubernetes, Docker, and CI/CD pipelines, will be essential in this role. Additionally, familiarity with databases, software engineering best practices, and design patterns is required. Preferred skills such as experience with reactive programming, Infrastructure as Code using Terraform or CloudFormation, event-driven architectures, and knowledge of secure coding practices and application monitoring tools are a plus. With 6-8 years of experience in Java development, including Core Java, Hibernate, J2EE, JSP, and Kotlin, you are well-equipped to excel in this position.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

You will be responsible for developing and maintaining high-performance server-side applications in Python following SOLID design principles. You will design, build, and optimize low-latency, scalable applications and integrate user-facing elements with server-side logic via RESTful APIs. Maintaining ETL and data pipelines, implementing secure data handling protocols, and managing authentication and authorization across systems will be crucial aspects of your role. Additionally, you will ensure security measures, set up efficient deployment practices using Docker and Kubernetes, and leverage caching solutions for enhanced performance and scalability.

To excel in this role, you should have strong experience in Python and proficiency in at least one Python web framework such as FastAPI or Flask. Familiarity with ORM libraries, asynchronous programming, event-driven architecture, and messaging tools like Apache Kafka or RabbitMQ is required. Experience with NoSQL and vector databases, Docker, Kubernetes, and caching tools like Redis will be beneficial. Additionally, you should possess strong unit testing and debugging skills and the ability to use monitoring and logging frameworks effectively. You should have a minimum of 1.5 years of professional experience in backend development roles with Python. Your expertise in setting up efficient deployment practices, handling data securely, and optimizing application performance will be essential for success in this position.
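
As a sketch of the caching pattern described above, here is a minimal FastAPI endpoint backed by Redis; the connection details, route, data shape, and TTL are assumptions rather than project specifics:

```python
import json

import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)  # placeholder connection

async def load_product_from_db(product_id: int) -> dict:
    # Stand-in for a real (async) database query.
    return {"id": product_id, "name": f"product-{product_id}"}

@app.get("/products/{product_id}")
async def get_product(product_id: int):
    key = f"product:{product_id}"
    cached = await cache.get(key)
    if cached:
        return json.loads(cached)          # cache hit: skip the database
    product = await load_product_from_db(product_id)
    await cache.set(key, json.dumps(product), ex=60)  # cache for 60 seconds
    return product
```

Run with `uvicorn app:app`; the cache-aside pattern with a short TTL keeps hot reads off the database while bounding staleness.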

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As an experienced Java Developer with over 5 years of expertise, you have a strong background in building scalable, distributed, and high-performance microservices utilizing Spring Boot and Apache Kafka. Your proficiency lies in designing and developing event-driven architectures and RESTful APIs and in integrating real-time data pipelines. You are well-versed in the full software development life cycle (SDLC), CI/CD practices, and Agile methodologies.

Your key skills include Java (8/11/17), Spring Boot, Spring Cloud, Apache Kafka (Producer, Consumer, Streams, Kafka Connect), microservices architecture, RESTful web services, Docker, Kubernetes (basic knowledge), CI/CD (Jenkins, Git, Maven), relational and NoSQL databases (MySQL, PostgreSQL, MongoDB), monitoring (ELK Stack, Prometheus, Grafana - basic), Agile/Scrum methodology, and unit and integration testing (JUnit, Mockito).

In your professional journey, you have developed and maintained multiple Kafka-based microservices handling real-time data ingestion and processing for high-volume applications. Your expertise extends to implementing Kafka consumers/producers with error handling, retries, and idempotency for robust message processing. Additionally, you have designed and deployed Spring Boot microservices integrated with Kafka, PostgreSQL, Redis, and external APIs, demonstrating leadership in performance tuning and optimization to ensure low-latency and fault-tolerant behavior.

If you are passionate about leveraging your skills in Java, Spring Boot, Apache Kafka, and microservices architecture to drive impactful projects and contribute to cutting-edge technologies, this opportunity might be the perfect match for you.

Thank you for considering this role.

Best regards,
Renuka Thakur
renuka.thakur@eminds.ai
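
The posting is Java/Spring-centric; purely to illustrate the idempotent, retrying producer behaviour it describes, here is a hedged Python sketch using confluent-kafka (the broker addresses and topic name are hypothetical):

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka-1:9092,kafka-2:9092",  # placeholder brokers
    "enable.idempotence": True,   # broker de-duplicates retried sends per partition
    "acks": "all",                # wait for all in-sync replicas
    "retries": 5,
})

def delivery_report(err, msg):
    # Called asynchronously once the broker acknowledges (or rejects) the message.
    if err is not None:
        # In a real service this would route to a dead-letter topic or alerting.
        print(f"Delivery failed for key {msg.key()}: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

producer.produce(
    "orders",                                        # hypothetical topic
    key="order-123",
    value=b'{"orderId": "123", "status": "CREATED"}',
    callback=delivery_report,
)
producer.flush(10)  # block up to 10 s for outstanding deliveries
```

The Spring Kafka equivalent sets the same idempotence/acks properties on the producer factory; the point is that retries become safe because duplicates are suppressed broker-side.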

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You are an experienced and motivated DevOps and Cloud Engineer with a strong background in cloud infrastructure, automation, and continuous integration/delivery practices. Your role involves designing, implementing, and maintaining scalable, secure, and high-performance cloud environments on platforms like AWS, Azure, or GCP. You will collaborate closely with development and operations teams to ensure a seamless workflow.

Your key responsibilities include designing, deploying, and managing cloud infrastructure, building and maintaining CI/CD pipelines, automating infrastructure provisioning, monitoring system performance, managing container orchestration platforms, supporting application deployment, and ensuring security best practices in cloud and DevOps workflows. Troubleshooting and resolving infrastructure and deployment issues, along with maintaining up-to-date documentation for systems and processes, are also part of your role.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with a minimum of 5 years of experience in DevOps, Cloud Engineering, or similar roles. Proficiency in scripting languages like Python or Bash, hands-on experience with cloud platforms, knowledge of CI/CD tools and practices, and familiarity with containerization and orchestration are essential. Additionally, you should have a strong understanding of cloud security and compliance standards and excellent analytical, troubleshooting, and communication skills. Preferred qualifications include certifications like AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or equivalent, as well as experience with GitOps, microservices, or serverless architecture.

Join our technology team in Trivandrum and contribute to building and maintaining cutting-edge cloud environments while enhancing our DevOps practices.

Posted 1 week ago

Apply

6.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a DevOps Engineer at Capgemini, you will have the opportunity to shape your career according to your aspirations in a supportive and inspiring environment. You will work with a collaborative global community of colleagues to push the boundaries of what is achievable. By joining us, you will play a key role in assisting the world's top organizations in harnessing the full potential of technology to create a more sustainable and inclusive world.

Your responsibilities will include building and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, and Azure DevOps. You will automate infrastructure deployment using Terraform, Ansible, or CloudFormation, and set up monitoring systems with Prometheus, Grafana, and ELK. Managing containers with Docker and orchestrating them through Kubernetes will be a crucial part of your role. Additionally, you will collaborate closely with developers to integrate DevOps practices into the Software Development Life Cycle (SDLC).

To excel in this position, you should ideally possess 6 to 12 years of experience in DevOps, CI/CD, and Infrastructure as Code (IaC). Your expertise should extend to Docker, Kubernetes, and cloud platforms such as AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, and ELK is essential, along with knowledge of security, compliance, and performance aspects. Being ready for on-call duties and adept at handling production issues are also required skills for this role.

At Capgemini, you will enjoy a flexible work environment with hybrid options, along with a competitive salary and benefits package. Your career growth will be supported through opportunities for SAP and cloud certifications. You will thrive in an inclusive and collaborative workplace culture that values teamwork and diversity.

Capgemini is a global leader in business and technology transformation, facilitating organizations in their digital and sustainable evolution. With a diverse team of over 340,000 members across 50 countries, Capgemini leverages its 55-year legacy to deliver comprehensive services and solutions, ranging from strategy and design to engineering. The company's expertise in AI, generative AI, cloud, and data, combined with industry knowledge and partnerships, enables clients to unlock the true potential of technology to meet their business requirements effectively.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a DevOps Engineer at NTT DATA Business Solutions, your role involves implementing and maintaining cloud infrastructure to ensure the smooth operation of the environment. You will be responsible for evaluating new technologies in infrastructure automation and cloud computing, looking for opportunities to enhance performance, reliability, and automation. Additionally, you will provide DevOps capability to team members and customers, perform code deployments, and manage release activities. Your responsibilities will also include resolving incidents and change requests, documenting solutions, and communicating them to users. You will work on optimizing existing solutions and on diagnosing, troubleshooting, and resolving issues to ensure the smooth operation of services. Demonstrating a proactive attitude and an aptitude for taking ownership of your work and collaborating with team members will be crucial.

To excel in this role, you are required to have a Bachelor's degree in IT, computer science, computer engineering, or a related field, along with a minimum of 6 years of overall experience, including at least 3 years as a DevOps Engineer. Advanced experience with cloud infrastructure and cloud services, particularly on Microsoft Azure, is essential. You should also have expertise in container orchestration (Kubernetes, Docker, Helm), Linux scripting (Bash, Python), log and metrics management (ELK Stack), monitoring tools (Prometheus, Loki, Grafana, Dynatrace), and infrastructure as code (Terraform). Furthermore, you must be proficient in continuous integration/continuous delivery tools (GitLab CI, Jenkins, Nexus), infrastructure security principles, Helm, CI/CD pipeline configuration, and DevOps tools like Jenkins, SonarQube, Nexus, etc. Exposure to the SDLC and Agile processes, SSO integrations, and AI tools is desirable.

In addition to technical skills, you should possess strong attitude, soft, and communication skills. Experience in handling technically critical situations, driving expert teams, and providing innovative solutions is essential. Critical thinking, a DevOps mindset, and customer-centric thinking are key attributes for this role. Proficiency in English (written and spoken) is mandatory, while knowledge of other languages such as German or French is a plus.

If you are looking to join a dynamic team at NTT DATA Business Solutions and transform SAP solutions into value, this opportunity is for you. Get empowered by our innovative and collaborative work environment. For further inquiries regarding this position, please contact the recruiter, Pragya Kalra, at Pragya.Kalra@nttdata.com. Join us in our mission to deliver cutting-edge IT solutions and become a part of our global success story!

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

We are seeking a skilled and dedicated FreeSWITCH Engineer with hands-on experience in VoIP systems to join our team. As a FreeSWITCH Engineer, you will be instrumental in the development, configuration, and maintenance of scalable and reliable FreeSWITCH-based voice infrastructures.

Your responsibilities will include designing, deploying, and maintaining FreeSWITCH servers and related VoIP infrastructure. You will troubleshoot and resolve FreeSWITCH and VoIP-related issues, develop custom dial plans, modules, and call routing logic, and work with SIP, RTP, and related VoIP protocols. Monitoring system performance, ensuring high availability, collaborating with development, network, and support teams, and documenting configurations and system changes will also be part of your role.

To be successful in this position, you should have a minimum of 2 years of hands-on experience with FreeSWITCH in a production environment, a strong understanding of VoIP technologies and the SIP protocol, experience with Linux system administration, and familiarity with scripting languages such as Bash, Python, and Lua. The ability to work independently in a remote setup and strong problem-solving and analytical skills are also essential.

Preferred skills include experience with other VoIP platforms like Asterisk, Kamailio, and OpenSIPS; knowledge of WebRTC, RTP engines, or media servers; exposure to monitoring tools like Grafana and Prometheus; and familiarity with APIs and backend integration. Join us for a collaborative and supportive team environment where you will have the opportunity to work on innovative VoIP solutions at scale.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Cloud Architect at FICO, you will play a crucial role in architecting, designing, implementing, and managing cloud infrastructure solutions using tools like ArgoCD, Crossplane, GitHub, Terraform, and Kubernetes. You will lead initiatives to enhance our cloud and GitOps best practices, mentor junior team members, collaborate with cross-functional teams, and ensure that our cloud environments are scalable, secure, and cost-effective.

Your responsibilities will include designing, deploying, and managing scalable cloud solutions on public cloud platforms such as AWS, Azure, or Google Cloud, developing deployment strategies, utilizing Infrastructure as Code tools like Terraform and Crossplane, collaborating with various teams, providing mentorship, evaluating and recommending new tools and technologies, implementing security best practices, ensuring compliance with industry standards, and much more.

To be successful in this role, you should have proven experience as a senior-level engineer/architect in a cloud-native environment, extensive experience with ArgoCD and Crossplane, proficiency in GitHub workflows, experience with Infrastructure as Code tools, leadership experience, proficiency in scripting languages and automation tools, expert knowledge of containerization and orchestration tools like Docker and Kubernetes, knowledge of network concepts and their implementation on AWS, familiarity with observability, monitoring, and logging tools, an understanding of security principles and frameworks, and familiarity with security-related certifications. Your unique strengths, leadership skills, and ability to drive and motivate a team will be essential in fulfilling the responsibilities of this role.

At FICO, you will be part of an inclusive culture that values diversity, collaboration, and innovation. You will have the opportunity to make an impact, develop professionally, and participate in valuable learning experiences. FICO offers competitive compensation, benefits, and rewards programs to encourage you to bring your best every day. You will work in an engaging, people-first environment that promotes work/life balance, employee resource groups, and social events to foster interaction and camaraderie.

Join FICO and be part of a leading organization in Big Data analytics, making a real difference in the business world by helping businesses use data to improve their decision-making processes. FICO's solutions are used by top lenders and financial institutions worldwide, and the demand for our solutions is rapidly growing. As part of the FICO team, you will have the support and freedom to develop your skills, grow your career, and contribute to changing the way businesses operate globally. Explore how you can fulfill your potential by joining FICO at www.fico.com/Careers.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Punjab

On-site

You should have a minimum of 5-9 years of experience in Quality Assurance, with at least 2 years focused on test automation, and proven experience leading QA efforts for at least one major software project. You should also have demonstrated experience in a full-stack environment, preferably with a strong understanding of MEAN/MERN architecture.

Your expertise should include designing, developing, and maintaining robust and scalable test automation frameworks from scratch. You must be proficient in at least one modern programming language relevant to the project's stack, such as JavaScript or TypeScript. Moreover, you should have in-depth knowledge of test automation tools for both front-end and back-end testing, including frameworks like Cypress, Playwright, Selenium, Postman (with scripting), Newman, Mocha, and Chai. It is crucial for you to have a solid understanding of testing methodologies, including unit testing, integration testing, end-to-end testing, and regression testing. Experience with version control systems, specifically Git, is also required.

In terms of soft skills, you should have the ability to create, document, and manage comprehensive test plans, strategies, and test cases. Additionally, experience in leading and mentoring a small team of QA engineers is highly valued.

Desirable qualifications include prior experience as a developer, experience with security testing and non-functional testing, familiarity with CI/CD pipelines, knowledge of cloud platforms like AWS, GCP, or Azure, and experience with containerization technologies like Docker. You should also be familiar with other testing frameworks and tools like Jest, Mocha, and Chai, monitoring and logging tools, and relevant certifications such as ISTQB Foundation Level or Agile Tester. Strong attention to detail, a proactive approach to quality, and experience with defect tracking and project management tools are also beneficial for this role.
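
The API-testing side of this role (Postman with scripting, Newman) is often mirrored in code-level suites. A hedged Python sketch using pytest and requests; the base URL, routes, and auth token are hypothetical stand-ins for a real staging environment:

```python
import pytest
import requests

BASE_URL = "https://staging.example.com/api"  # placeholder environment

@pytest.fixture(scope="session")
def auth_header():
    # In a real suite the token would come from a login call or a CI secret.
    return {"Authorization": "Bearer test-token"}

def test_create_and_fetch_user(auth_header):
    payload = {"name": "QA Bot", "email": "qa.bot@example.com"}

    created = requests.post(f"{BASE_URL}/users", json=payload, headers=auth_header, timeout=10)
    assert created.status_code == 201

    user_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/users/{user_id}", headers=auth_header, timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["email"] == payload["email"]
```

Run with `pytest -q` in CI; the same create-then-read flow is what a Postman collection executed via Newman would assert.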

Posted 1 week ago

Apply