
1633 Grafana Jobs - Page 13

Set up a job alert

JobPe aggregates listings for convenient access, but applications are made directly on the original job portal.

7.0 - 12.0 years

10 - 20 Lacs

Chennai

Work from Office

Dear Candidate,

Greetings from Genworx.ai.

About Us: Genworx.ai is a pioneering startup at the forefront of generative AI innovation, dedicated to transforming how enterprises harness artificial intelligence. We specialize in developing sophisticated AI agents and platforms that bridge the gap between cutting-edge AI technology and practical business applications. We have an opening for the Principal DevOps Engineer position at Genworx.ai; please find the detailed job description below.

Job Title: Principal DevOps Engineer
Experience: 8+ years, with at least 5+ years in cloud automation
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Work Location: Chennai
Job Type: Full-Time
Website: https://genworx.ai/

Key Responsibilities:

Cloud Strategy and Automation Leadership:
- Architect and lead the implementation of cloud automation strategies with a primary focus on GCP.
- Integrate multi-cloud environments by leveraging AWS and/or Microsoft Azure as needed.
- Define best practices for Infrastructure as Code (IaC) and automation frameworks.

Technical Architecture & DevOps Practice:
- Design scalable, secure, and efficient CI/CD pipelines using industry-leading tools.
- Lead the development and maintenance of automated configuration management systems.
- Establish processes for continuous integration, delivery, and deployment of cloud-native applications.
- Develop solutions for cloud optimization and performance tuning.
- Create reference architectures for DevOps solutions and best practices.
- Establish standards for cloud architecture, versioning, and governance.
- Lead cost optimization initiatives for cloud infrastructure using GenAI.

Security, Compliance & Best Practices:
- Enforce cloud security standards and best practices across all automation and deployment processes.
- Implement role-based access controls and ensure compliance with relevant regulatory standards.
- Continuously evaluate and enhance cloud infrastructure to mitigate risks and maintain high security.

Research & Innovation:
- Drive research into emerging GenAI technologies and techniques in cloud automation and DevOps.
- Lead proof-of-concept development for new AI capabilities.
- Collaborate with research teams on model implementation and support.
- Guide the implementation of novel AI architectures.

Leadership & Mentorship:
- Provide technical leadership and mentorship to teams in cloud automation, DevOps practices, and emerging AI technologies.
- Drive strategic decisions and foster an environment of innovation and continuous improvement.
- Act as a subject matter expert and liaison between technical teams, research teams, and business stakeholders.

Technical Expertise:
- Cloud Platforms: Deep GCP expertise, with additional experience in AWS and/or Microsoft Azure.
- DevOps & Automation Tools: Proficiency in CI/CD tools (e.g., GitHub Actions, GitLab, Azure DevOps) and Infrastructure as Code (e.g., Terraform).
- Containerization & Orchestration: Experience with Docker, Kubernetes, and container orchestration frameworks.
- Scripting & Programming: Strong coding skills in Python, shell scripting, or similar languages.
- Observability: Familiarity with tools like Splunk, Datadog, Prometheus, Grafana, and similar solutions.
- Security: In-depth understanding of cloud security, identity management, and compliance requirements.

Interested candidates, kindly send your updated resume and a link to your portfolio to anandraj@genworx.ai.

Thank you.

Regards,
Anandraj B
Lead Recruiter
Mail ID: anandraj@genworx.ai
Contact: 9656859037
Website: https://genworx.ai/
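The IaC governance duties in this listing (Terraform standards, compliance enforcement) are often backed by small plan-review scripts. As an illustrative sketch only (not Genworx tooling; the function name and sample plan fragment are invented), this flags destructive resource changes in the JSON that `terraform show -json` emits for a saved plan:

```python
def destructive_changes(plan):
    """Return addresses of resources a Terraform plan would delete.

    Operates on the JSON plan representation, whose top-level
    "resource_changes" list carries an "actions" array per resource
    ("create", "update", "delete", ...); a replace shows up as
    ["delete", "create"].
    """
    flagged = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if "delete" in actions:
            flagged.append(rc["address"])
    return flagged

# Invented plan fragment, shaped like Terraform's JSON plan output.
sample_plan = {
    "resource_changes": [
        {"address": "google_storage_bucket.logs",
         "change": {"actions": ["delete", "create"]}},  # replacement
        {"address": "google_compute_instance.web",
         "change": {"actions": ["update"]}},            # in-place update
    ]
}

print(destructive_changes(sample_plan))  # ['google_storage_bucket.logs']
```

A guardrail like this would typically run in CI between `terraform plan` and `terraform apply`, failing the pipeline when protected resources would be destroyed.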

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

What This Job Entails:
The software engineer designs, develops, troubleshoots, and debugs software programs for enhancements and new products. They will also develop software and tools in support of design, infrastructure, and technology platforms.

Scope:
- Resolves a wide range of issues in creative ways
- Seasoned, experienced professional with a full understanding of their specialty
- Works on problems of diverse scope
- Receives little instruction on day-to-day work and general instruction on new assignments

Your Roles and Responsibilities:
- Provide administration, development, and application support for end users
- Design, develop, code, test, and debug system software
- Design and implement software for embedded devices and systems, from requirements to deployment
- Review code and design
- Configure and integrate third-party software components and technologies
- Perform root cause analysis of network, operating system, and other issues related to how technical solutions operate in customer environments
- Address and resolve escalated software bugs and bug-reproduction requests; log all incidents and requests; engage other service desk resources or appropriate service resources to resolve incidents beyond the scope of their ability or responsibility
- Provide limited after-hours and on-call support as needed
- Engage in review and development of test documentation
- Work with stakeholders and cross-functional teams on projects and initiatives
- Other duties as required; this list is not meant to be a comprehensive inventory of all responsibilities assigned to this position

Required Qualifications/Skills:
- Bachelor's degree (B.S./B.A.) from a four-year college or university and 5 to 8 years of related experience and/or training, or an equivalent combination of education and experience
- Networks with senior internal and external personnel in own area of expertise
- Demonstrates good judgment in selecting methods and techniques for obtaining solutions
- Experience working with various development programs
- Ability to quickly learn customer support processes, tools, and techniques
- Understanding of how to configure third-party software components and technologies
- Ability to perform root cause analysis of network, operating system, and other issues
- Excellent communication skills in English (both spoken and written)
- Ability to collaborate and work remotely, including use of communication tools
- Ability to multitask and self-organise, including prioritization of activities
- Experience using monitoring systems
- Attention to detail

Preferred Qualifications:
- Familiarity with system administration

Physical Demand & Work Environment:
- Must have the ability to perform office-related tasks, which may include prolonged sitting or standing
- Must have the ability to move from place to place within an office environment
- Must be able to use a computer
- Must have the ability to communicate effectively
- Some positions may require occasional repetitive motion or movements of the wrists, hands, and/or fingers

Posted 1 week ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

Bengaluru

Work from Office

DevOps

As a Lead DevOps / Site Reliability Engineer, you will support production and development environments: creating new and improving existing tools and processes, automating deployment and monitoring procedures, leading the continuous integration effort, administering source control systems, and deploying and maintaining production infrastructure and applications.

What you'll do day to day:
- Design and implement monitoring strategies
- Improve reliability, stability, and performance of production systems
- Lead automation of engineering and operations processes
- Systems administration and management of production, pre-production, and test environments
- Design and optimize CI/CD pipelines
- Maintain and administer source control systems
- Provide on-call support for production systems

What you must have:
- 7+ years of experience as an SRE, DevOps, or TechOps Engineer
- 5+ years of tools development or automation using Python, Perl, Java, or Go
- 3+ years of containerization and orchestration experience
- Solid experience managing production environments in a public cloud, AWS preferred
- Proficiency in Linux system administration
- Experience with monitoring and observability tools: Prometheus, Loki, Grafana
- Experience with at least two of the following: Puppet, Salt, Ansible, Terraform
- Experience setting up and supporting CI/CD pipelines
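This role pairs Python tools development with Prometheus-style monitoring. A minimal, hypothetical sketch of the kind of SRE arithmetic such tooling automates, here an SLO error-budget check (the function name, SLO target, and traffic numbers are all illustrative, not from the listing):

```python
def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Fraction of the error budget left for the current SLO window.

    budget = allowed failure ratio (1 - SLO target); spend = observed
    failure ratio. 1.0 means the budget is untouched, 0.0 exhausted,
    and a negative value means the SLO is already violated.
    """
    budget = 1.0 - slo_target                 # e.g. 0.1% allowed failures
    spend = failed_requests / total_requests  # observed failure ratio
    return 1.0 - spend / budget

# 1M requests with 200 failures against a 99.9% SLO: 200 of the
# 1,000 allowed failures are spent, so 80% of the budget remains.
print(round(error_budget_remaining(1_000_000, 200), 3))  # 0.8
```

In practice the request and failure counts would come from a Prometheus query rather than literals, and an alert would fire when the remaining budget crosses a threshold.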

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Noida

Work from Office

Essential Skills/Basic Qualifications:
- DevOps professional with 6+ years of in-depth DevOps experience (TeamCity, Nolio/Jenkins, PowerShell scripting, GitLab) and any additional technologies
- Excellent experience in managing environments and DevOps tools
- Knowledge and proven experience in one of the scripting languages (preferably PowerShell, Python, Bash)
- Knowledge and strong practical experience with CI/CD pipelines: Git, GitLab, deployment tools, Nexus, JIRA, TeamCity/Jenkins, Infrastructure-as-Code (Chef or alternatives)
- Good knowledge of Unix systems and a basic understanding of Windows servers
- Experience troubleshooting database issues
- Knowledge of middleware tools such as Solace and MQ
- Good collaboration skills
- Excellent verbal and written communication skills
- Open to working the UK shift (10 AM to 9 PM IST, with 8 hours of productive work)

Desirable skills/Preferred Qualifications:
- Good understanding of ITIL concepts (IPC, ServiceFirst)
- Experience with Ansible or other configuration management tools
- Experience with monitoring/observability solutions: Elasticsearch stack, Grafana, AppD
- Experience with AWS or any other major public cloud service, as well as private cloud solutions and Big Data (Hadoop)
- Knowledge and practical experience with any database: MS SQL Server, Oracle, MongoDB, or other DBMS
- Strong multi-tasking and the ability to re-prioritize activities based on ever-changing external requirements
- Stress resistance and the ability to work and deliver results under pressure from numerous parties
- Creative thinking and problem solving; a mindset oriented towards continuous improvement and delivering service of excellent quality

As a DevOps Engineer, responsibilities include:
- Working with technical leads and developers to understand application architecture and business logic, and contributing to deployment and integration strategies
- Environment management functions (building new environments, refreshing existing ones)
- Providing operational stability for the key environments
- Diagnosing and resolving environment defects found during SIT and UAT test phases
- Engineering/DevOps project work, e.g., optimization of DevOps delivery (CI/CD)
- Contributing to the design of lightning-fast DevOps processes, including automated build, release, deployment, and monitoring, to reduce the overall time to market of new and existing software components
- Contributing to the delivery of complex projects in collaboration with global teams across Barclays, to develop new or enhance existing systems
- Strong appreciation of development and DevOps best practices

Education Qualification:
Bachelor's degree in Computer Science/Engineering or a related field, or an equivalent professional qualification

Mandatory Competencies:
- Development Tools and Management - CI/CD
- DevOps/Configuration Mgmt - Jenkins
- Cloud - Azure - Azure Bicep, ARM Templates, PowerShell
- DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket
- Programming Language - Python - Python Shell
- Database - SQL Server - SQL Packages
- Beh - Communication and collaboration
- Database - Oracle - Database Design

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Noida

Work from Office

Performance Tester with experience in tools such as JMeter and LoadRunner, and strong experience with Grafana and k6. Responsibilities include designing, executing, and analyzing performance tests to identify bottlenecks and optimize system performance. Strong skills in monitoring, troubleshooting, and reporting on system behavior under load are essential.

Mandatory Competencies:
- Performance Tools - JMeter
- Beh - Communication
- DevOps/Configuration Mgmt - Grafana
- QA/QE - QA Automation - Core Java
- QA/QE - QA Automation - Performance testing (JMeter, LoadRunner, NeoLoad, etc.)
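Analyzing a load test usually starts with latency percentiles such as p95. A small illustrative sketch of a nearest-rank percentile calculation (the sample data is invented; real analysis would read JMeter or k6 result files):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample value such that
    at least pct% of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

# Ten response times in milliseconds from a hypothetical load test;
# the single 240 ms outlier dominates the tail.
latencies = [12, 15, 11, 240, 18, 14, 16, 13, 19, 17]
print(percentile(latencies, 50))  # 15
print(percentile(latencies, 95))  # 240
```

The gap between the median (15 ms) and p95 (240 ms) is exactly the kind of tail-latency signal a bottleneck investigation would chase.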

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 9 Lacs

Noida

Work from Office

Job Title: OpenStack Engineer / Cloud Operations Engineer
Experience: 3+ Years
Location: Noida
Employment Type: Full-time

Job Responsibilities:
- Manage and support production-grade OpenStack environments (Nova, Neutron, Glance, Keystone, Cinder, Horizon, etc.)
- Handle day-to-day operations: provisioning, system upgrades, patching, incident response, and troubleshooting
- Automate tasks and workflows using Ansible, Terraform, Bash, or Python
- Ensure system observability using tools like Prometheus, Grafana, Zabbix, or the ELK Stack
- Maintain high availability, backup, and disaster recovery strategies
- Collaborate with DevOps, platform engineering, and network/security teams for seamless cloud operations
- Maintain documentation, playbooks, and runbooks for operations and incident response

Desired Skills:
- Strong knowledge of OpenStack cloud computing
- Experience in Linux administration and shell scripting
- Familiarity with CI/CD tools
- Hands-on knowledge of monitoring tools: Grafana, Prometheus, Zabbix, or the ELK Stack
- Exposure to virtualization and infrastructure as code

Value Adds:
- Strong analytical and problem-solving skills
- Good communication and team collaboration

Key Skills: OpenStack, Linux, Shell Scripting, Ansible, Terraform, Bash, Python, CI/CD, Grafana, Prometheus, Zabbix, ELK Stack, Virtualization, Cloud Operations, Monitoring Tools

Interested Candidates: Please share your updated resume along with the following details to Anurag.yadav@Softenger.com (WhatsApp: 7385556898):
- Total Experience
- Relevant Experience
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Willing to Relocate to Noida
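Day-to-day operations work of this kind often wraps flaky control-plane calls in retry logic. A generic, hypothetical Python sketch (the simulated `flaky` service is invented; a real runbook script would call OpenStack APIs, e.g. via openstacksdk):

```python
import time

def retry(operation, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call `operation` until it succeeds, doubling the delay after
    each failure (0.1 s, 0.2 s, 0.4 s, ...); re-raise the last error
    once all attempts are spent. Useful for transient control-plane
    errors during upgrades or incident response."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * 2 ** attempt)

# Simulate an API endpoint that fails twice before recovering.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ACTIVE"

# Inject a no-op sleep so the demo runs instantly.
print(retry(flaky, sleep=lambda s: None))  # ACTIVE
```

The injectable `sleep` parameter keeps the helper unit-testable, which matters when such scripts end up in a shared ops toolbox.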

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Gurugram

Work from Office

Location: NCR
Team Type: Platform Operations
Shift Model: 24x7 rotational coverage / on-call support (L2/L3)

Team Overview:
The OpenShift Container Platform (OCP) Operations Team is responsible for the continuous availability, health, and performance of OpenShift clusters that support mission-critical workloads. The team operates under a tiered structure (L2, L3) to manage day-to-day operations, incident management, automation, and lifecycle management of the container platform. This team is central to supporting stakeholders by ensuring the container orchestration layer is secure, resilient, scalable, and optimized.

L2 - OCP Support & Platform Engineering (Platform Analyst)
Role Focus: Advanced troubleshooting, change management, automation
Experience: 3-6 years
Resources: 5
Key Responsibilities:
- Analyze and resolve platform issues related to workloads, PVCs, ingress, services, and image registries
- Implement configuration changes via YAML/Helm/Kustomize
- Maintain Operators, upgrade OpenShift clusters, and validate post-patching health
- Work with CI/CD pipelines and DevOps teams on build and deploy troubleshooting
- Manage and automate namespace provisioning, RBAC, and NetworkPolicies
- Maintain logging, monitoring, and alerting tools (Prometheus, EFK, Grafana)
- Participate in CR and patch planning cycles

L3 - OCP Platform Architect & Automation Lead (Platform SME)
Role Focus: Architecture, lifecycle management, platform governance
Experience: 6+ years
Resources: 2
Key Responsibilities:
- Own lifecycle management: upgrades, patching, cluster DR, backup strategy
- Automate platform operations via GitOps, Ansible, Terraform
- Lead SEV1 issue resolution, post-mortems, and RCA reviews
- Define compliance standards: RBAC, SCCs, network segmentation, CIS hardening
- Integrate OCP with IDPs (ArgoCD, Vault, Harbor, GitLab)
- Drive platform observability and performance tuning initiatives
- Mentor L1/L2 team members and lead operational best practices

Core Tools & Technology Stack:
- Container Platform: OpenShift, Kubernetes
- CLI Tools: oc, kubectl, Helm, Kustomize
- Monitoring: Prometheus, Grafana, Thanos
- Logging: Fluentd, EFK Stack, Loki
- CI/CD: Jenkins, GitLab CI, ArgoCD, Tekton
- Automation: Ansible, Terraform
- Security: Vault, SCCs, RBAC, NetworkPolicies
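The namespace provisioning and RBAC automation listed under the L2 duties can be sketched as manifest generation. This illustrative snippet (team and user names invented; not the team's actual tooling) builds a Namespace plus a RoleBinding to the built-in `admin` ClusterRole, following the core/v1 and rbac.authorization.k8s.io/v1 shapes:

```python
import json

def namespace_bundle(team, admins):
    """Manifests for a team namespace plus an RBAC binding granting
    the built-in `admin` ClusterRole inside that namespace only."""
    name = f"team-{team}"
    ns = {"apiVersion": "v1", "kind": "Namespace",
          "metadata": {"name": name}}
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{name}-admins", "namespace": name},
        "subjects": [{"kind": "User", "name": user,
                      "apiGroup": "rbac.authorization.k8s.io"}
                     for user in admins],
        # A RoleBinding may reference a ClusterRole; the grant is
        # still scoped to this namespace.
        "roleRef": {"kind": "ClusterRole", "name": "admin",
                    "apiGroup": "rbac.authorization.k8s.io"},
    }
    return [ns, binding]

manifests = namespace_bundle("payments", ["alice@example.com"])
print(json.dumps(manifests[0]))
```

In a GitOps setup these dicts would be serialized to YAML and committed, letting ArgoCD reconcile them rather than applying by hand.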

Posted 2 weeks ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Noida

Work from Office

Role & responsibilities:
- Drive microservices architecture design and evolution, owning the roadmap (service boundaries, integration, tech choices) for scalability, and defining Kubernetes container sizing and resource allocation best practices.
- Apply deep expertise in microservices architecture: designing RESTful/event-driven services, defining boundaries, and optimizing communication, with experience in refactoring/greenfield work and cloud patterns (Saga, Circuit Breaker).
- Lead platform improvements, overseeing technical enhancements for AI-driven features like our AI Mapping Tool.
- Architect comprehensive observability, deploying metrics, tracing, and logging tools (OpenTelemetry, Prometheus, Grafana, Loki, Tempo) for real-time monitoring and high uptime.
- Define container sizing and lead Kubernetes performance benchmarking, analyzing bottlenecks to guide resource tuning and scaling for platform growth.
- Provide deployment/infrastructure expertise, guiding Helm for Kubernetes and collaborating on infrastructure needs (Terraform a plus).
- Lead tooling/automation enhancements, streamlining deployment via Helm improvements, simpler YAML, and pre-deployment validation to reduce errors.
- Lead the evolution to event-driven, distributed workflows: decoupling orchestrators with RabbitMQ and patterns like Saga/pub-sub, and integrating Redis for state/caching to improve fault tolerance and scalability.
- Collaborate across teams and stakeholders for architectural alignment, translating requirements into design and partnering for seamless implementation.
- Mentor engineers on coding, design, and architecture best practices, leading reviews and fostering engineering excellence.
- Document architecture decisions (diagrams, ADRs), clearly communicating complex technical concepts for roadmap transparency.

Preferred candidate profile:
- 5+ years in software engineering, significant experience designing distributed systems, and a proven track record of improving scalability/maintainability.
- Extensive production experience with Kubernetes and Docker; proficient in deploying, scaling, and managing apps on clusters, including cluster management on major cloud platforms.
- Proficient in deployment automation/config management; Helm charts experience required; familiar with CI/CD/GitOps; Terraform/IaC exposure is a plus.
- Strong experience implementing observability via monitoring/logging frameworks (Prometheus, Grafana, ELK/Loki, tracing); able to instrument applications; proven at optimizing distributed system performance.
- Hands-on with message brokers (RabbitMQ/Kafka) and distributed data stores like Redis; skilled in asynchronous system design and solution selection.
- Excellent technical communication and leadership; proven ability to lead architectural discussions and build consensus; comfortable driving projects and collaborating with Agile, cross-functional teams.
- Adept at technical documentation/diagrams, with an analytical mindset for evaluating new technologies and foreseeing design impacts on scalability, security, and maintainability.
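The Circuit Breaker pattern this listing names can be sketched in a few lines. This is a minimal illustrative implementation (class name, thresholds, and the demo operation are invented), not the platform's actual code:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive
    failures the circuit opens and calls fail fast; after
    `reset_after` seconds one trial ("half-open") call is allowed."""

    def __init__(self, threshold=3, reset_after=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # success resets the failure count
        return result

breaker = CircuitBreaker(threshold=2, reset_after=60)
for _ in range(2):               # two failures trip the breaker
    try:
        breaker.call(lambda: 1 / 0)
    except ZeroDivisionError:
        pass
try:                             # circuit is now open
    breaker.call(lambda: "ok")
except RuntimeError as exc:
    print(exc)  # circuit open; failing fast
```

Failing fast while a downstream dependency is unhealthy is what stops one slow service from exhausting its callers' threads, which is why the pattern shows up alongside Saga in event-driven designs.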

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As an AI Ops Expert, you will be responsible for the delivery of projects with defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will be tasked with building resilient MLOps pipelines and ensuring adherence to governance standards. Additionally, you will design, implement, and oversee AIops solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities. Troubleshooting and resolving issues related to AI/ML infrastructure and workflows will require your expertise, along with staying updated on the latest AI Ops, MLOps, and Kubernetes tools and technologies. To be successful in this role, you must possess a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIops, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with Fast API are required, as well as strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including Azure ML Flow, is essential. Additionally, you should be proficient in using DevContainer for development and have knowledge of CI/CD tools like Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary. 
Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services like Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred.

Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.
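The model lifecycle and versioning duties described above are normally handled by a registry such as MLflow or Azure ML. As a toy illustration only (names, metrics, and the promotion policy are invented), an in-memory stand-in shows the core idea: versions are append-only, and exactly one version serves production:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelRegistry:
    """Toy in-memory model registry: register immutable versions
    with their evaluation metrics, then promote one to production."""
    versions: dict = field(default_factory=dict)
    production: Optional[str] = None

    def register(self, name, metrics):
        # Version number = count of existing versions of this model + 1.
        version = f"{name}-v{sum(k.startswith(name) for k in self.versions) + 1}"
        self.versions[version] = metrics
        return version

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(version)
        self.production = version

registry = ModelRegistry()
v1 = registry.register("churn", {"auc": 0.81})
v2 = registry.register("churn", {"auc": 0.84})
registry.promote(v2)        # v2 beats v1 on AUC, so it goes live
print(registry.production)  # churn-v2
```

A real MLOps pipeline would wire the `promote` step into CI/CD gates (evaluation thresholds, approvals) rather than calling it directly.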

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

This position is for Ultimo Software Solutions Pvt Ltd (Ultimosoft.com). You will be working as a Java/Scala Developer with the following responsibilities and requirements:
- Advanced proficiency in one or more programming languages such as Java and Scala, along with database skills.
- Hands-on experience as a Scala/Spark developer.
- Self-rated Scala proficiency should be a minimum of 8 out of 10.
- Proficiency in automation and continuous delivery methods, along with a deep understanding of the Software Development Life Cycle.
- Strong knowledge of agile methodologies like CI/CD, Application Resiliency, and Security.
- Demonstrated expertise in software applications and technical processes in areas like cloud, artificial intelligence, machine learning, or mobile development.
- Experience with Java Spring Boot and Databricks, and a minimum of 7+ years of professional software engineering experience.
- Proven skills in Java, J2EE, Spring Boot, JPA, Axon, and Kafka.
- Familiarity with Maven and Gradle build tools, as well as the Kafka ecosystem, including the Kafka Streams library and Kafka Avro schemas.
- Providing end-to-end support for complex enterprise applications.
- Strong problem-solving, analytical, and communication skills.
- Work experience in Agile environments with a continuous delivery mindset.
- Understanding of microservices architecture and distributed system design patterns.
- Knowledge of CI/CD pipelines, DevOps practices, Redis cache, the Redis Insight tool, Grafana, Grafana Loki logging, Prometheus monitoring, Jenkins, ArgoCD, and Kubernetes.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

As a core member of Periscope's technology team at McKinsey, you will play a vital role in developing and deploying enterprise products, ensuring the firm stays at the forefront of technology. Your responsibilities will include software development projects, focusing on building and improving deployment pipelines, automation, and toolsets for cloud-based solutions on AWS, GCP, and Azure platforms. You will also delve into database management, Kubernetes cluster setup, performance tuning, and continuous delivery domains. Your role will involve hands-on software development, spending approximately 80% of your time on tasks related to deployment pipelines and cloud-based solutions. You will continually expand your expertise by experimenting with new technologies, frameworks, and approaches independently. Additionally, your strong understanding of agile engineering practices will enable you to guide teams on enhancing their engineering processes. In this position, you will not only contribute to software development projects but also provide coaching and mentoring to other DevOps engineers to enhance the organizational capability. Your base will be in either Bangalore or Gurugram office as a member of Periscope's technology team within McKinsey. With Periscope being McKinsey's asset-based arm in the Marketing & Sales practice, your role signifies the firm's commitment to innovation and delivering exceptional client solutions. By combining consulting approaches with solutions, Periscope aims to provide actionable insights that drive revenue growth and optimize various aspects of commercial decision-making. The Periscope platform offers a unique blend of intellectual property, prescriptive analytics, and cloud-based tools to deliver over 25 solutions focused on insights and marketing. 
Your qualifications should include a Bachelor's or Master's degree in Computer Science or a related field, along with a minimum of 6 years of experience in technology solutions, particularly in microservices architectures and SaaS delivery models. You should demonstrate expertise in working with leading Cloud Providers such as AWS, GCP, and Azure, as well as proficiency in Linux systems and DevOps tools like Ansible, Terraform, Helm, Docker, and Kubernetes. Furthermore, your experience should cover areas such as load balancing, network security, API management, and supporting web front-end and data-intensive applications. Your problem-solving skills, systems design expertise in Python, and strong monitoring and troubleshooting abilities will be essential in contributing to development tasks and ensuring uninterrupted system operation. Your familiarity with automation practices and modern monitoring tools like Elastic, Sentry, and Grafana will also be valuable in upholding system quality attributes. Strong communication skills and the ability to collaborate effectively in a team environment are crucial for conveying technical decisions, fostering alignment, and thriving in high-pressure situations. Your role at Periscope by McKinsey will not only involve technical contributions but also leadership in enhancing the team's capabilities and driving continuous improvement in technology solutions.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an engineer joining Zinier's Customer Engineering team, you will be focusing on a low-code platform. Your role will involve debugging, analyzing JavaScript code, optimizing queries, solving customer-facing issues, and automating routine tasks. You will be responsible for investigating and resolving customer-reported issues in a JavaScript + JSON low-code environment. This includes identifying and fixing bugs, implementing enhancements to enhance product performance, reliability, and usability, and supporting customers globally. Additionally, you will create and maintain documentation related to program development, logic, coding, testing, and changes. Collaboration with cross-functional teams is a key aspect of this role. You will partner with customer success, solution/engineering teams to address issues promptly, provide feedback from field operations to enhance product robustness, and participate in continuous improvement cycles. You should have the ability to drive outcomes, meet delivery milestones, and coordinate effectively across multiple teams. The required skills for this role include a minimum of 3 years of experience in Solution Development or Engineering roles, a strong understanding of JavaScript, JSON handling, and API interactions, proficiency in SQL with the ability to debug query bottlenecks, familiarity with observability stacks like Grafana, Loki, Tempo, and knowledge of AWS. Desirable skills include exposure to the Field Service Management domain, experience in products with workflows, debugging algorithms related to scheduling, or working on backend systems. Joining Zinier offers a unique opportunity to work closely with Solution Architects, influence Product blueprints, and collaborate across the full tech stack. You will have the chance to work on debugging backend services in Java, Spring Boot, explore front-end interfaces in React, and contribute to mobile UI development. 
Additionally, you will build internal tools, address production issues, and contribute to engineering stability while learning from experienced platform, product, and solution engineers. Being part of a high-impact team at Zinier means bridging engineering and customer experience to enhance product quality and customer trust. The company values learning, ownership, and long-term growth, providing you with a rewarding environment to grow your skills and expertise.
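Since the role combines JSON handling with observability stacks like Grafana Loki, here is a small illustrative sketch of label-style filtering over JSON log lines, similar in spirit to a Loki selector such as `{app="api", level="error"}`. The field names and sample logs are made up:

```python
import json

def filter_logs(lines, **labels):
    """Parse JSON log lines and keep entries whose fields
    match every given label, Loki-selector style."""
    matched = []
    for line in lines:
        entry = json.loads(line)
        if all(entry.get(key) == value for key, value in labels.items()):
            matched.append(entry)
    return matched

logs = [
    '{"app": "api", "level": "error", "msg": "timeout"}',
    '{"app": "web", "level": "info", "msg": "ok"}',
]
print(filter_logs(logs, app="api", level="error"))
```

Loki itself indexes labels separately from log content, so this in-memory scan is only a conceptual analogue.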

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You should have 2-6 years of experience, including hands-on experience with Docker, Kubernetes, and cloud products. Your expertise should include a strong background in DevOps/CI/CD process architecture and engineering. You will be responsible for ensuring the availability, reliability, and performance of the OpenShift/Container Platform. Your duties will involve performing patching, software implementations, and upgrades on Red Hat OpenShift platforms. Additionally, experience with Grafana, Splunk, and other cloud products is required for this role. To apply for this position, please send your resume to careers@enfystech.com.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

In this role, your responsibilities will include designing and implementing responsive, modular UI components using Angular, HTML5, CSS3, JavaScript (ES6+), and TypeScript with a focus on user experience. You will work closely with backend developers to integrate UI components with Node.js-based or Python-based RESTful APIs, ensuring seamless data flow and interaction. Collaboration with UX designers using tools like Figma, Grafana, and others to translate visual designs into interactive applications will be a key aspect. Ensuring high standards of accessibility, cross-browser compatibility, and performance optimization will be essential. Additionally, you will be responsible for debugging and resolving front-end issues while maintaining a clean, maintainable codebase. Conducting and maintaining unit and integration tests using Jest, Karma, and Angular TestBed to uphold code quality is also part of the role. You are someone who takes initiative, doesn't wait for instructions, and proactively seeks opportunities to contribute. Adapting quickly to new situations and applying knowledge effectively is a strength. You are able to clearly convey ideas and actively listen to others to complete assigned tasks as planned. For this role, you will need a good understanding of agile development using SCRUM. Experience with Angular Material, Tailwind CSS, or similar design systems is required. Exposure to IIoT protocols such as MQTT, OPC UA, and Modbus is a plus. Knowledge of Docker for containerized development and deployment, experience working in CI/CD pipelines using tools such as Azure DevOps or GitLab CI, awareness of AI-enhanced UIs or data-driven interface design, understanding accessibility standards (WCAG), and familiarity with micro frontend architectures and frontend performance profiling are all essential. Knowledge of design concepts, including an understanding of User Interface (UI) and User Experience (UX), is an advantage.
Preferred qualifications that set you apart include a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. A minimum of 5 years of experience in front-end development, including 3+ years of hands-on experience with Angular (v10 or above), strong skills in HTML5, CSS3, JavaScript (ES6+), and TypeScript, experience with state management tools (e.g., Redux), proven ability to build and debug responsive and performant UI applications, experience working with REST APIs, familiarity with UI/UX collaboration tools such as Figma, Adobe XD, or Grafana, and a solid understanding of unit testing frameworks and testing best practices are also expected. At Emerson, the workplace culture prioritizes valuing, respecting, and empowering every employee to grow. It fosters an environment that encourages innovation, collaboration, and diverse perspectives because great ideas come from great teams. Commitment to ongoing career development and growing an inclusive culture ensures support for your thriving. Whether through mentorship, training, or leadership opportunities, there is an investment in your success to make a lasting impact. Diverse teams working together are seen as key to driving growth and delivering business results. Employee wellbeing is recognized as important, and competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more are prioritized. The culture offers flexible time-off plans, including paid parental leave (maternal and paternal), vacation, and holiday leave.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Kochi, Kerala

On-site

As a Java Backend Developer in our IoT domain team based in Kochi, you will be responsible for designing, developing, and deploying scalable microservices using Spring Boot, SQL databases, and AWS services. Your role will involve leading the backend development team, implementing DevOps best practices, and optimizing cloud infrastructure. Your key responsibilities will include architecting and implementing high-performance, secure backend services using Java (Spring Boot), developing RESTful APIs and event-driven microservices with a focus on scalability and reliability, designing and optimizing SQL databases (PostgreSQL, MySQL), and deploying applications on AWS using services like ECS, Lambda, RDS, S3, and API Gateway. You will also be responsible for implementing CI/CD pipelines, monitoring and improving backend performance, ensuring security best practices, and authentication using OAuth, JWT, and IAM roles. The required skills for this role include proficiency in Java (Spring Boot, Spring Cloud, Spring Security), microservices architecture, API development, SQL (PostgreSQL, MySQL), ORM (JPA, Hibernate), DevOps tools (Docker, Kubernetes, Terraform, CI/CD, GitHub Actions, Jenkins), AWS cloud services (EC2, Lambda, ECS, RDS, S3, IAM, API Gateway, CloudWatch), messaging systems (Kafka, RabbitMQ, SQS, MQTT), testing frameworks (JUnit, Mockito, Integration Testing), and logging & monitoring tools (ELK Stack, Prometheus, Grafana). Preferred skills that would be beneficial for this role include experience in the IoT domain, work experience in startups, event-driven architecture using Apache Kafka, knowledge of Infrastructure as Code (IaC) with Terraform, and exposure to serverless architectures. In return, we offer a competitive salary, performance-based incentives, the opportunity to lead and mentor a high-performing tech team, hands-on experience with cutting-edge cloud and microservices technologies, and a collaborative and fast-paced work environment. 
If you have any experience in the IoT domain and are looking for a full-time role with a day shift schedule in an in-person work environment, we encourage you to apply for this exciting opportunity in Kochi.
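The listing above calls for JWT-based authentication. As a hedged illustration of one piece of that, this sketch decodes a JWT's payload segment and checks its `exp` claim; a real service must also verify the token's signature (typically via a JWT library), which is deliberately omitted here:

```python
import base64
import json
import time

def is_token_expired(token, now=None):
    """Decode the (unverified) payload segment of a JWT and
    report whether its exp claim is in the past.
    Illustration only: signature verification is not performed."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    now = time.time() if now is None else now
    return claims["exp"] <= now
```

Passing `now` explicitly makes the check deterministic and easy to unit-test, which matters for the JUnit/Mockito-style testing discipline the listing asks for.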

Posted 2 weeks ago

Apply

12.0 - 20.0 years

0 Lacs

Karnataka

On-site

You will be joining a global bank's GCC in Bengaluru, a strategic technology hub that drives innovation and delivers enterprise-scale solutions across global markets. In this leadership position, you will play a key role in shaping the engineering vision, leading technical solutioning, and developing high-impact platforms that cater to millions of customers. Your responsibilities will revolve around bridging technology and business needs, creating scalable, secure, and modern applications utilizing cloud-native and full-stack technologies to enable cutting-edge digital banking solutions. The ideal candidate for this role is a seasoned engineering leader with extensive hands-on experience in software development, architecture, and solution design. You should possess expertise in full-stack development using technologies such as .NET Core, ReactJS, Node.js, TypeScript, Next.js, and Python. Additionally, experience in Microservices and BIAN architecture for financial platforms, a deep understanding of AWS cloud-native development and infrastructure, and proficiency in REST/GraphQL API design, TDD, and secure coding practices are essential. Hands-on experience with tools like GitHub, GitHub Actions, monitoring tools (Prometheus, Grafana), and AI-powered dev tools (e.g., GitHub Copilot) is also desirable. Familiarity with DevOps/DevSecOps pipelines and deployment automation is a plus. Moreover, you should possess excellent problem-solving and leadership skills, coupled with the ability to mentor and enhance team productivity. A degree in Computer Science, IT, or a related field (Bachelor's/Master's) will be advantageous for this position.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for designing, developing, and maintaining enterprise-grade search solutions using Apache Solr and SolrCloud. Your key tasks will include developing and optimizing search indexes and schema for various use cases such as product search, document search, or order/invoice search. Additionally, you will be required to integrate Solr with backend systems, databases, and APIs, implementing features like full-text search, faceted search, auto-suggestions, ranking, and relevancy tuning. It will also be part of your role to optimize search performance, indexing throughput, and query response time for efficient results. Your expertise in Apache Solr & SolrCloud, along with a strong understanding of Lucene, inverted index, analyzers, tokenizers, and search relevance tuning will be essential for this position. Proficiency in Java or Python for backend integration and development is required, as well as experience with RESTful APIs, data pipelines, and real-time indexing. Familiarity with Zookeeper, Docker, Kubernetes for SolrCloud deployments, and knowledge of JSON, XML, and schema design in Solr will also be necessary. Furthermore, your responsibilities will include ensuring data consistency and high availability using SolrCloud and Zookeeper for cluster coordination & configuration management. You will be expected to monitor the health of the search system and troubleshoot any issues that may arise in production. Collaboration with product teams, data engineers, and DevOps teams will be crucial for ensuring smooth delivery. Staying updated with new features of Apache Lucene/Solr and recommending improvements will also be part of your role. Preferred qualifications for this position include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with Elasticsearch or other search technologies will be advantageous, as well as working knowledge of CI/CD pipelines and cloud platforms such as Azure.
Overall, your role will involve working on search solutions, optimizing performance, ensuring data consistency, and collaborating with cross-functional teams for successful project delivery.
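To make the faceted-search responsibility concrete, here is a small sketch that assembles a Solr `/select` query URL with faceting enabled. The core name, base URL, and field names are hypothetical, but `q`, `rows`, `facet`, and `facet.field` are standard Solr query parameters:

```python
from urllib.parse import urlencode

def solr_facet_query(base_url, q, facet_fields, rows=10):
    """Build a Solr /select URL requesting facet counts for the
    given fields alongside the search results."""
    params = [("q", q), ("rows", rows), ("facet", "true"), ("wt", "json")]
    params += [("facet.field", field) for field in facet_fields]
    return f"{base_url}/select?{urlencode(params)}"

url = solr_facet_query("http://localhost:8983/solr/products", "laptop", ["brand", "category"])
print(url)
```

Repeating `facet.field` once per field is how Solr expects multi-valued parameters, which is why the sketch builds a list of pairs rather than a dict.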

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Performance Testing and Automation Expert at Amdocs in Pune, India, you will be responsible for leading performance and non-functional testing to ensure the best quality of Amdocs products, covering aspects such as performance, stability, scalability, and high availability readiness. Your role will involve working with various technologies and tools, including Docker, OpenShift/Kubernetes, NoSQL databases, Kafka, Couchbase, RESTful services, M2E modeled API services (SOAP), JSON, Kibana/Elasticsearch, Grafana, GIT, SQL, Agile, SOAP/REST webservices, Microservices, Jenkins, AWS or similar cloud-based infrastructure, visualization, the VMware platform, LoadRunner, JMeter, network protocols/layers, Linux system/processes monitoring, Perl/Ksh/Bash programming, Java, storage concepts, and AI tools. You should have at least 4 years of experience in performance testing and automation, with expertise in troubleshooting, root cause analysis, monitoring server performance counters, and OS alert logs. Additionally, familiarity with different network protocols/layers, systems/applications tuning, code profiling, and experience with AI tools, chat for coding, and technology searches will be advantageous. Your responsibilities will include gaining knowledge of product architecture, technology, and business requirements, owning non-functional test designs, planning scope, and preparing demos, conducting performance testing, measuring product resources, code profiling, tuning, sizing, and delivering results and issue analysis. You should have a proven track record of working under tight timelines, developing automation and simulators, enhancing product monitoring tools, and using performance profiling tools. Your role at Amdocs will involve assessing software, suggesting improvements in design or implementation to enhance stability, performance, and cost-effectiveness, and working as part of a team of skilled performance analysts and tuning experts.
Amdocs is a global company that empowers customers to provide next-generation communication and media experiences through innovative software products and services. As an equal opportunity employer, Amdocs values diversity and inclusivity in its workforce. In this job, you will have the opportunity to identify, analyze, and mitigate non-functional issues and technical risks of software systems while participating in formal and informal reviews with stakeholders, contributing to the overall success of Amdocs and its customers.
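Performance-test reporting of the kind described above usually summarizes latency distributions with percentiles rather than averages. A minimal nearest-rank percentile computation (the sample latencies are invented):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile, the way load-test reports often
    summarize response times (e.g. p95 latency)."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

latencies_ms = [120, 95, 101, 340, 88, 130, 99, 410, 105, 97]
print(percentile(latencies_ms, 95))  # 410
```

Tools like JMeter and LoadRunner report these values out of the box; the point of the sketch is that a p95 of 410 ms can coexist with a healthy-looking average, which is why non-functional testing focuses on tail latency.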

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

The Modern Data Company is seeking a skilled and experienced DevOps Engineer to join our team. As a DevOps Engineer, you will play a crucial role in designing, implementing, and managing CI/CD pipelines for automated deployments. You will be responsible for maintaining and optimizing cloud infrastructure for scalability and performance, ensuring system security, monitoring, and incident response using industry best practices, and automating infrastructure provisioning using tools such as Terraform, Ansible, or similar technologies. Collaboration with development teams to streamline release processes, improve system reliability, troubleshoot and resolve infrastructure and deployment issues efficiently, implement containerization and orchestration (Docker, Kubernetes), and drive cost optimization strategies for cloud infrastructure will be key responsibilities in this role. The ideal candidate will have at least 5 years of experience in DevOps, SRE, or Cloud Engineering roles, with strong expertise in CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Proficiency in cloud platforms like AWS, Azure, or GCP, Infrastructure as Code tools like Terraform, CloudFormation, or Ansible, Kubernetes, Docker, monitoring and logging tools, scripting skills for automation, networking, security best practices, and Linux administration are required. Experience working in agile development environments is a plus. Nice to have skills include experience with serverless architectures, microservices, database management, performance tuning, and exposure to AI/ML deployment pipelines. In return, we offer a competitive salary and benefits package, the opportunity to work on cutting-edge AI technologies and products, a collaborative and innovative work environment, professional development opportunities, and career growth. 
If you are passionate about AI and data products, and eager to work in a dynamic team environment to make a significant impact in the world of AI and data, we encourage you to apply now and join our team at The Modern Data Company.
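Incident response and reliable deployments of the kind this role describes often lean on retry with exponential backoff. A deterministic sketch of the delay schedule (production retry loops typically add random jitter on top of these values):

```python
def backoff_schedule(base=1.0, factor=2.0, retries=5, cap=30.0):
    """Exponential backoff delays in seconds, capped at a maximum,
    for spacing out retries of a failed deployment or API call."""
    return [min(cap, base * factor ** attempt) for attempt in range(retries)]

print(backoff_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The cap keeps later retries from backing off indefinitely, and jitter (omitted here for determinism) prevents many clients from retrying in lockstep after a shared outage.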

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Senior DevOps Engineer at TechBlocks, you will be responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments. With 5-8 years of experience in DevOps engineering roles, your expertise in CI/CD, infrastructure automation, and Kubernetes will be crucial for the success of our projects. In this role, you will own the CI/CD strategy and configuration, implement DevSecOps practices, and drive an automation-first culture within the team. Your key responsibilities will include designing and implementing end-to-end CI/CD pipelines using tools like Jenkins, GitHub Actions, and Argo CD for production-grade deployments. You will also define branching strategies and workflow templates for development teams, automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests, and manage secrets lifecycle using Vault for secure deployments. Collaborating with engineering leads, you will review deployment readiness, ensure quality gates are met, and integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. Monitoring infrastructure health and capacity planning using tools like Prometheus, Grafana, and Datadog, you will implement alerting rules, auto-scaling, self-healing, and resilience strategies in Kubernetes. Additionally, you will drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers. Your role will be pivotal in ensuring the reliability, scalability, and security of our systems while fostering a culture of innovation and continuous learning within the team. TechBlocks is a global digital product engineering company with 16+ years of experience, helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. 
We believe in the power of technology and the impact it can have when coupled with a talented team. Join us at TechBlocks and be part of a dynamic, fast-moving environment where big ideas turn into real impact, shaping the future of digital transformation.
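The auto-scaling responsibility above maps to the scaling rule the Kubernetes HorizontalPodAutoscaler documents: desired replicas = ceil(currentReplicas * currentMetric / targetMetric). A small sketch of that rule:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Core scaling rule of the Kubernetes HorizontalPodAutoscaler:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))  # 6
```

The real controller adds tolerances, stabilization windows, and min/max replica bounds around this formula, so the sketch captures only the central calculation.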

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Senior DevOps Engineer at Simform, you will play a crucial role in managing and modernizing medium-sized, multi-tier environments on Azure cloud infrastructure. Your passion for automation and scalability will be key as you design, implement, and maintain CI/CD pipelines for efficient and secure application deployment. Collaboration with development teams will ensure alignment with DevOps best practices, while your expertise in Azure cloud services and Kubernetes will drive infrastructure development and operational support, ensuring high availability and performance.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for efficient and secure application deployment.
- Lead infrastructure development and operational support in Azure, ensuring high availability and performance.
- Work with modern containerization technologies like Docker, Docker Compose, and orchestrators like Kubernetes.
- Implement and follow Git best practices across teams.
- Use Infrastructure as Code (IaC) tools, preferably Terraform, to provision cloud resources.
- Plan and enforce DevSecOps processes and ensure security compliance throughout the software development lifecycle.
- Develop and monitor application and infrastructure performance using tools like Azure Monitor, Managed Grafana, and other observability tools.
- Drive multi-tenant SaaS architecture implementation and deployment best practices.
- Troubleshoot issues across development, testing, and production environments.
- Provide leadership in infrastructure planning, design reviews, and incident management.

Required Skills & Experience:
- 4+ years of relevant hands-on DevOps experience.
- Strong communication and interpersonal skills.
- Proficiency in Azure cloud services, with working knowledge of AWS also preferred.
- Expertise in Kubernetes, including deploying, scaling, and maintaining clusters.
- Experience with web servers like Nginx and Apache.
- Familiarity with the Well-Architected Framework (Azure).
- Practical experience with monitoring and observability tools in Azure.
- Working knowledge of DevSecOps tools and security best practices.
- Proven debugging and troubleshooting skills across infrastructure and applications.
- Experience supporting multi-tenant SaaS platforms is a plus.
- Experience in application performance monitoring and tuning.
- Experience with Azure dashboarding, logging, and monitoring tools, such as Managed Grafana, is a strong advantage.

Preferred Qualifications:
- Azure certifications (e.g., AZ-400, AZ-104)
- Experience in cloud migration and application modernization
- Familiarity with tools like Prometheus, Grafana, the ELK stack, or similar
- Leadership experience or mentoring junior engineers

Join us at Simform to be part of a young team with a thriving culture, offering well-balanced learning and growth opportunities, free health insurance, and various employee benefits such as flexible work timing, WFH and hybrid options, and sponsorship for certifications/events.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chandigarh

On-site

As a DevOps Engineer, you will be responsible for designing, implementing, and managing CI/CD pipelines to streamline software development and deployment processes. Your role will involve overseeing Jenkins management for continuous integration and automation, as well as deploying and managing cloud infrastructure using AWS services. Additionally, you will configure and optimize brokers such as RabbitMQ, Kafka, or similar messaging systems to ensure efficient communication between microservices. Monitoring, troubleshooting, and enhancing system performance, security, and reliability will also be key aspects of your responsibilities. Collaboration with developers, QA, and IT teams is essential to optimize development workflows effectively. To excel in this role, you are required to have an AWS Certification (preferably AWS Certified DevOps Engineer, Solutions Architect, or equivalent) and strong experience in CI/CD pipeline automation using tools like Jenkins, GitLab CI/CD, or GitHub Actions. Proficiency in Jenkins management, including installation, configuration, and troubleshooting, is necessary. Knowledge of brokers for messaging and event-driven architectures, hands-on experience with containerization tools like Docker, and proficiency in scripting and automation (e.g., Python, Bash) are also essential. Experience with monitoring and logging tools such as Prometheus, Grafana, ELK stack, or CloudWatch, along with an understanding of networking, security, and cloud best practices, will be beneficial. Preferred skills for this role include experience in mobile and web application development environments and familiarity with Agile and DevOps methodologies. This is a full-time position with benefits such as paid sick time, paid time off, and a performance bonus. The work schedule is during the day shift from Monday to Friday, and the work location is in person.
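For the broker configuration mentioned above, RabbitMQ topic exchanges route messages by matching dot-separated routing keys against binding patterns, where `*` matches exactly one word and `#` matches zero or more words. An illustrative matcher for that rule:

```python
def topic_matches(pattern, routing_key):
    """AMQP-style topic matching: '*' matches exactly one
    dot-separated word, '#' matches zero or more words."""
    def match(p, k):
        if not p:
            return not k          # pattern exhausted: key must be too
        if p[0] == "#":
            # '#' absorbs zero words, or one word and tries again
            return match(p[1:], k) or (bool(k) and match(p, k[1:]))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

print(topic_matches("orders.#", "orders.eu.created"))  # True
```

RabbitMQ performs this matching inside the broker when a message arrives at a topic exchange; the sketch just makes the wildcard semantics explicit.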

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a DevOps/Data Platform Engineer at our company, you will play a crucial role in supporting our cloud-based infrastructure, CI/CD pipelines, and data platform operations. Your responsibilities will include building and managing CI/CD pipelines using tools like GitHub Actions, Jenkins, and Build Piper, provisioning infrastructure with Terraform, Ansible, and cloud services, as well as managing containers using Docker, Kubernetes, and registries. You will also be required to support data platforms such as Snowflake, Databricks, and data infrastructure, and monitor systems using Grafana, Prometheus, and Datadog. Additionally, securing environments using tools like AWS Secrets Manager and enabling MLOps pipelines will be part of your daily tasks to optimize infrastructure for performance and cost. To excel in this role, you should have strong experience in DevOps, cloud platforms, and Infrastructure as Code (IaC). Proficiency in Linux, automation, and system management is essential, along with familiarity with data platforms like Snowflake, Databricks, and CI/CD practices. Excellent troubleshooting skills, collaboration abilities, and effective communication are also required for this position. It would be advantageous if you have experience with ML model deployment, cost optimization, and infrastructure testing. Familiarity with data security best practices is considered a plus. This is a full-time, permanent position located in Gurgaon. If you possess the required experience and skills and are capable of meeting the outlined responsibilities, we encourage you to apply for this position.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

