
101 Cloud Monitoring Jobs - Page 2

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

15 - 30 Lacs

Hyderabad

Work from Office

We are looking for a highly skilled Principal / Senior Cloud Dev Engineer to design, build, and maintain scalable, cloud-native backend infrastructure and SaaS solutions. This role is ideal for engineers passionate about distributed systems, microservices, and cloud technologies.

Roles & Responsibilities:
- Design, develop, and maintain backend infrastructure for a PaaS enabling scalable, cloud-native, elastic graph database services.
- Contribute to SaaS feature development, architecture design, and operational support for enterprise-grade deployments.
- Work on Cloud Billing, Cloud org/user management, Cloud stack/DB monitoring/alerting, and Cloud GUI (admin portal, graph insights).
- Manage and optimize Cloud Kubernetes deployments.

Proficiency Required:
- Strong coding skills in at least one of Go, Java, C++, Python, or C.
- Expertise in microservice architectures and large-scale distributed systems.
- Hands-on experience with relational databases (MySQL, Postgres, etc.).
- Experience with the AWS/GCP tech stack.
- Solid knowledge of Linux/Unix systems and scripting.
- Kubernetes and cloud computing expertise.

Nice to Have:
- Experience with containers and Kubernetes operations.
- Knowledge of cloud operations and troubleshooting.
- Cloud certifications (AWS, Azure, GCP).

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You are a highly skilled GCP DevOps Engineer joining the Cloud Infrastructure team, responsible for designing, implementing, and managing scalable, secure, and reliable infrastructure on Google Cloud Platform (GCP). You will collaborate with development teams to streamline CI/CD processes, automate cloud operations, and support application deployments using modern DevOps practices.

Your key responsibilities include designing, building, and managing cloud infrastructure and services in GCP. You will implement and maintain CI/CD pipelines using tools like Cloud Build, Jenkins, GitLab CI/CD, or GitHub Actions, and automate provisioning using Infrastructure as Code (IaC) tools such as Terraform, Deployment Manager, or Ansible. Monitoring system health, performance, and availability using GCP-native tools or third-party solutions, improving system reliability, automating incident response, and collaborating with developers to optimize applications for scalability and performance in cloud environments are crucial aspects of the role. Implementing security best practices, managing IAM policies and roles, managing containerized applications using Kubernetes (GKE) or other container orchestration platforms, and maintaining documentation related to architecture, configurations, and processes are also part of your responsibilities.

You should possess a Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience, with at least 5 years of DevOps or SRE experience, including at least 3 years focused on GCP. Proficiency with GCP services such as Compute Engine, Cloud Functions, GKE, Cloud SQL, BigQuery, Pub/Sub, Cloud Storage, and VPC networking is required. Strong experience with Terraform, Helm, or similar IaC and configuration management tools, hands-on experience with Docker and Kubernetes (preferably GKE), knowledge of Linux systems administration and scripting (Python, Bash, or Go), familiarity with CI/CD tools, and a solid understanding of networking concepts, DNS, load balancing, firewalls, and VPNs are essential. Experience with monitoring, logging, and alerting tools is also necessary.

This is a full-time, permanent position that requires your physical presence at the Gurgaon location.
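The CI/CD responsibilities described above often reduce to a simple gate: deploy, probe health, and roll back on failure. The sketch below is an illustrative model only; the `deploy`, `probe`, and `rollback` callables are hypothetical stand-ins, not any specific GCP or pipeline API.

```python
# Minimal deployment gate: keep a new release only if repeated health
# probes succeed; otherwise roll it back. Pure-Python sketch.

def gated_deploy(deploy, probe, rollback, probes=3):
    """Run `deploy()`, then require `probes` consecutive healthy probes.

    Returns True if the release is kept, False if it was rolled back.
    """
    deploy()
    for _ in range(probes):
        if not probe():
            rollback()
            return False
    return True

# Toy environment: deployment just flips a version string.
state = {"version": "v1"}
kept = gated_deploy(
    deploy=lambda: state.update(version="v2"),
    probe=lambda: state["version"] == "v2",  # always healthy in this demo
    rollback=lambda: state.update(version="v1"),
)
```

A real pipeline stage would wire these callables to the deployment tool and a health endpoint; the gate logic itself stays this small.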

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Database Analyst / Database Engineer specializing in AWS RDS & DMS, you will be responsible for managing and optimizing our cloud-based database infrastructure, particularly on AWS. With 6 to 8 years of experience in database management, AWS services, and software troubleshooting, you will play a crucial role in ensuring the availability, scalability, and reliability of our database systems. Your key responsibilities will include provisioning, configuring, and managing AWS RDS and Aurora instances for MySQL, PostgreSQL, and MSSQL databases. You will also be tasked with performing scheduled patching, version upgrades, backup validations, and disaster recovery testing to maintain database integrity and security. In this role, you will work closely with application teams to enhance database access patterns, enforce role-based access control, data encryption, and activity auditing. You will utilize your expertise in SQL performance tuning, backup and disaster recovery strategies, and cloud monitoring to optimize database performance under various workloads. Furthermore, you will collaborate with cross-functional teams including DevOps, Developers, and Architects to design and implement cutover strategies, migration validations, and schema optimizations. Your proficiency in AWS tools such as IAM, KMS, Secrets Manager, and CloudTrail will be essential in ensuring compliance and security across database platforms. As a trusted advisor, you will provide insights on schema design, data modeling, and optimal data access strategies. Your experience in database mirroring, log shipping, and failover clustering for MSSQL databases will be instrumental in maintaining high availability setups and addressing replication issues promptly. The ideal candidate for this role will hold a B.Tech / B.E. / MCA qualification and possess strong communication and analytical skills. 
If you are passionate about database management, AWS services, and cloud architecture, and are looking for a challenging opportunity in Pune or Hyderabad, we invite you to apply for this full-time position.
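The backup validation duty mentioned above can be reduced to a recency check against a recovery-point objective. A hedged, self-contained sketch: the timestamps are illustrative, and a real check would read snapshot metadata from AWS rather than a hard-coded list.

```python
from datetime import datetime, timedelta

def latest_backup_ok(snapshot_times, now, rpo_hours=24):
    """Return True if the newest snapshot falls within the RPO window."""
    if not snapshot_times:
        return False
    return now - max(snapshot_times) <= timedelta(hours=rpo_hours)

# Illustrative data: newest snapshot is 13 hours old.
now = datetime(2024, 1, 10, 12, 0)
snaps = [datetime(2024, 1, 9, 23, 0), datetime(2024, 1, 8, 23, 0)]
# Inside a 24-hour RPO, but would violate a 12-hour RPO.
```

The same shape works for disaster-recovery drills: alert whenever the check returns False instead of waiting for a restore to fail.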

Posted 1 month ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Description

AWS Cloud Operations - Required:
- Types of load balancers and their specific usage scenarios
- Virtual machine creation steps
- Load balancer creation steps
- Purpose of a NAT; options for load balancer rules and NAT rules
- Difference between a load balancer and a NAT rule
- Given a scenario, explain how you would go about resolving it
- Security rules
- Ability to solve operational problems
- Strong understanding of service scalability, especially in relation to performance, reliability, and cost
- Strong awareness of all cloud services
- VPC creation steps
- Creation steps for the top 5 services

Terraform - Desired:
- Terraform flow
- Purpose of different Terraform commands
- Scenarios such as state file deletion
- Write code to create a VPC, subnet, etc.

Cloud Monitoring - Required:
- Monitoring systems you are familiar with
- Explain how monitoring is configured
- Knowledge and configuration of cloud-native monitoring systems

Handling Tickets and Infra - Desired:
- How you have handled Incidents, Changes, and Requests; Change Process awareness
- What you normally do when you are not able to resolve an issue yourself
- Examples of coordination with the user
- Initiating incident response for P1 security incidents

Infra Know-how - Desired:
- Troubleshooting steps for an infrastructure scenario (utilization issue, connectivity issue, etc.)
- Unable to log in to a particular server: how do you go about resolving it
- Scenarios on logs and events

Job Scheduling - Desired

Kubernetes - Required

Your Future Duties and Responsibilities / Required Qualifications To Be Successful In This Role:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.

Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
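The load-balancer questions in the interview outline above can be illustrated with the simplest scheduling policy: round-robin over healthy backends. This is a toy model for discussion, not any cloud provider's implementation, and the IP addresses are placeholders.

```python
import itertools

class RoundRobinLB:
    """Toy load balancer: cycles across backends, skipping unhealthy ones."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        """Health check failed: stop routing to this backend."""
        self.healthy.discard(backend)

    def pick(self):
        """Return the next healthy backend in rotation."""
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = RoundRobinLB(["10.0.1.10", "10.0.1.11", "10.0.1.12"])
lb.mark_down("10.0.1.11")
# pick() now alternates between 10.0.1.10 and 10.0.1.12 only.
```

Contrasting this with a NAT rule is a common interview angle: NAT rewrites addresses for a single target, while a load balancer like this chooses among many.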

Posted 1 month ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Gurgaon, Haryana, India

On-site

We are seeking an experienced and highly skilled Senior AWS Engineer to join our dynamic team. You will be responsible for designing, implementing, and maintaining robust, scalable, and secure cloud-native solutions, with a strong focus on serverless architectures and infrastructure as code. This role requires a deep understanding of core AWS services and a commitment to implementing security, cost optimization, and CI/CD best practices.

Roles & Responsibilities:
- Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications, focusing on serverless architecture and event-driven design.
- Architect and develop solutions using a wide array of core AWS services, including Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito.
- Develop, maintain, and optimize infrastructure using the AWS CDK (Cloud Development Kit) to ensure consistent and version-controlled deployments.
- Drive the adoption and implementation of CodePipeline for automated CI/CD.
- Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics.
- Enforce stringent security best practices, including robust IAM roles, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards.
- Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies and serverless tiers.
- Design and implement highly scalable and resilient systems incorporating auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers.
- Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery.
- Create clear and comprehensive technical documentation for architectures, workflows, and operational procedures.
- Collaborate effectively with cross-functional teams to deliver high-quality solutions.
- Advocate for and ensure adherence to AWS best practices across all development and operational activities.

Skills Required:
- Deep expertise in a wide range of AWS services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, and KMS.
- Proficiency in Infrastructure as Code (IaC) with the AWS CDK.
- Extensive experience with serverless architecture and event-driven design.
- Strong understanding of cloud monitoring and observability tools such as CloudWatch Logs and X-Ray.
- Proven ability to implement and enforce security and compliance measures, including IAM roles, PHI/PII tagging, and HIPAA standards.
- Demonstrated experience with cost optimization techniques.
- Expertise in designing and implementing scalability and resilience patterns.
- Familiarity with CI/CD pipeline concepts.
- Excellent documentation, workflow design, and cross-functional collaboration skills.
- Commitment to implementing AWS best practices.

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
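The retry/backoff and Dead-Letter Queue pattern named above can be sketched in a few lines. This is an illustrative in-memory model, not SQS itself; the `handler` callable and the list-based DLQ are hypothetical stand-ins.

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, seed=None):
    """Exponential backoff with 'full jitter': delay ~ U(0, min(cap, base * 2**n))."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

def consume(message, handler, dlq, max_attempts=3):
    """Retry `handler` up to `max_attempts`; park the message on the DLQ after that."""
    for attempt in range(max_attempts):
        try:
            return handler(message)
        except Exception:
            # In production you would sleep(backoff_delays(...)[attempt]) here.
            continue
    dlq.append(message)  # poison message: hand off for offline inspection
    return None
```

Capping the delay and adding jitter are what keep a fleet of retrying consumers from hammering a recovering dependency in lockstep.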

Posted 1 month ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Delhi, India

On-site

We are seeking an experienced and highly skilled Senior AWS Engineer to join our dynamic team. You will be responsible for designing, implementing, and maintaining robust, scalable, and secure cloud-native solutions, with a strong focus on serverless architectures and infrastructure as code. This role requires a deep understanding of core AWS services and a commitment to implementing security, cost optimization, and CI/CD best practices.

Roles & Responsibilities:
- Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications, focusing on serverless architecture and event-driven design.
- Architect and develop solutions using a wide array of core AWS services, including Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito.
- Develop, maintain, and optimize infrastructure using the AWS CDK (Cloud Development Kit) to ensure consistent and version-controlled deployments.
- Drive the adoption and implementation of CodePipeline for automated CI/CD.
- Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics.
- Enforce stringent security best practices, including robust IAM roles, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards.
- Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies and serverless tiers.
- Design and implement highly scalable and resilient systems incorporating auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers.
- Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery.
- Create clear and comprehensive technical documentation for architectures, workflows, and operational procedures.
- Collaborate effectively with cross-functional teams to deliver high-quality solutions.
- Advocate for and ensure adherence to AWS best practices across all development and operational activities.

Skills Required:
- Deep expertise in a wide range of AWS services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, and KMS.
- Proficiency in Infrastructure as Code (IaC) with the AWS CDK.
- Extensive experience with serverless architecture and event-driven design.
- Strong understanding of cloud monitoring and observability tools such as CloudWatch Logs and X-Ray.
- Proven ability to implement and enforce security and compliance measures, including IAM roles, PHI/PII tagging, and HIPAA standards.
- Demonstrated experience with cost optimization techniques.
- Expertise in designing and implementing scalability and resilience patterns.
- Familiarity with CI/CD pipeline concepts.
- Excellent documentation, workflow design, and cross-functional collaboration skills.
- Commitment to implementing AWS best practices.

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.

Posted 1 month ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking an experienced and highly skilled Senior AWS Engineer to join our dynamic team. You will be responsible for designing, implementing, and maintaining robust, scalable, and secure cloud-native solutions, with a strong focus on serverless architectures and infrastructure as code. This role requires a deep understanding of core AWS services and a commitment to implementing security, cost optimization, and CI/CD best practices.

Roles & Responsibilities:
- Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications, focusing on serverless architecture and event-driven design.
- Architect and develop solutions using a wide array of core AWS services, including Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito.
- Develop, maintain, and optimize infrastructure using the AWS CDK (Cloud Development Kit) to ensure consistent and version-controlled deployments.
- Drive the adoption and implementation of CodePipeline for automated CI/CD.
- Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics.
- Enforce stringent security best practices, including robust IAM roles, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards.
- Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies and serverless tiers.
- Design and implement highly scalable and resilient systems incorporating auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers.
- Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery.
- Create clear and comprehensive technical documentation for architectures, workflows, and operational procedures.
- Collaborate effectively with cross-functional teams to deliver high-quality solutions.
- Advocate for and ensure adherence to AWS best practices across all development and operational activities.

Skills Required:
- Deep expertise in a wide range of AWS services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, and KMS.
- Proficiency in Infrastructure as Code (IaC) with the AWS CDK.
- Extensive experience with serverless architecture and event-driven design.
- Strong understanding of cloud monitoring and observability tools such as CloudWatch Logs and X-Ray.
- Proven ability to implement and enforce security and compliance measures, including IAM roles, PHI/PII tagging, and HIPAA standards.
- Demonstrated experience with cost optimization techniques.
- Expertise in designing and implementing scalability and resilience patterns.
- Familiarity with CI/CD pipeline concepts.
- Excellent documentation, workflow design, and cross-functional collaboration skills.
- Commitment to implementing AWS best practices.

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Support Engineer with experience maintaining and supporting solutions in a cloud-based environment (GCP or AWS), you will be responsible for ensuring the smooth operation of monitoring tools such as ELK, Dynatrace, CloudWatch, Cloud Logging, Cloud Monitoring, and New Relic. Your primary focus will be to implement and maintain monitoring and self-healing strategies to proactively prevent production incidents. You will also be required to conduct root cause analysis of production issues and design on-call and escalation processes.

In addition, you will participate in the design and implementation of serviceability solutions for monitoring and alerting, as well as debugging production issues across services and levels of the stack. Collaborating closely with the platform engineering team, you will help establish and improve production support approaches and participate in defining SLIs and SLOs to demonstrate efficiency and value to business partners.

Your responsibilities will also include interacting with and testing APIs, participating in out-of-business-hours deployments and support on rotation with team members, and being familiar with agile development techniques. L3 support experience is considered an asset for this role.

In return, we offer competitive salaries, comprehensive health benefits, flexible work hours, remote work options, professional development and training opportunities, and a supportive and inclusive work environment.
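The self-healing strategies mentioned above usually start with a consecutive-failure threshold before any restart action fires, so a single flaky probe does not trigger remediation. A minimal hedged sketch; the restart callable is a hypothetical stand-in for a real remediation action.

```python
class SelfHealer:
    """Trigger `restart` after `threshold` consecutive failed health probes."""

    def __init__(self, restart, threshold=3):
        self.restart = restart
        self.threshold = threshold
        self.failures = 0   # consecutive failures seen so far
        self.restarts = 0   # remediation actions taken

    def observe(self, healthy):
        """Feed one health-probe result into the healer."""
        if healthy:
            self.failures = 0  # any success resets the streak
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.restart()
            self.restarts += 1
            self.failures = 0  # give the restarted service a fresh streak
```

Resetting the counter on success is the important detail: it distinguishes a transient blip from a genuinely unhealthy service.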

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description:
We are looking for an experienced GCP Cloud/DevOps Engineer and/or OpenShift engineer to design, implement, and manage cloud infrastructure and services across multiple environments. This role requires deep expertise in Google Cloud Platform (GCP) services, DevOps practices, and Infrastructure as Code (IaC). The candidate will deploy, automate, and maintain high-availability systems and implement best practices for cloud architecture, security, and DevOps pipelines.

Requirements:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a similar field.
- 7+ years of extensive experience in designing, implementing, and maintaining applications on GCP and OpenShift.
- Comprehensive expertise in GCP services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring.
- Solid understanding of cloud security best practices and experience implementing security controls in GCP.
- Thorough understanding of cloud architecture principles and best practices.
- Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles.
- Proven leadership skills and the ability to mentor and guide a technical team.

Key Responsibilities:
- Cloud infrastructure design and deployment: architect, design, and implement scalable, reliable, and secure solutions on GCP. Deploy and manage GCP services in both development and production environments, ensuring seamless integration with existing infrastructure. Implement and manage core services such as BigQuery, Data Fusion, Cloud Composer (Airflow), Cloud Storage, Compute Engine, App Engine, Cloud Functions, and more.
- Infrastructure as Code (IaC) and automation: develop and maintain infrastructure as code using Terraform or CLI scripts to automate provisioning and configuration of GCP resources. Establish and document best practices for IaC to ensure consistent and efficient deployments across environments.
- DevOps and CI/CD pipeline development: create and manage DevOps pipelines for automated build, test, and release management, integrating with tools such as Jenkins, GitLab CI/CD, or equivalent. Work with development and operations teams to optimize deployment workflows, manage application dependencies, and improve delivery speed.
- Security and IAM management: handle user and service account management in Google Cloud IAM. Set up and manage Secret Manager and Cloud Key Management for secure storage of credentials and sensitive information. Implement network and data security best practices to ensure compliance and security of cloud resources.
- Performance monitoring and optimization: set up observability tools like Prometheus and Grafana, and integrate security tools (e.g., SonarQube, Trivy). Configure DNS, networking, and persistent storage solutions in Kubernetes. Set up monitoring and logging (e.g., Cloud Monitoring, Cloud Logging, Error Reporting) to ensure systems perform optimally. Troubleshoot and resolve issues related to cloud services and infrastructure as they arise.
- Workflow orchestration and containerization: orchestrate complex workflows using the Argo Workflow Engine. Work extensively with Docker for containerization and image management. Troubleshoot and optimize containerized applications for performance and security.

Technical Skills:
- Expertise with GCP and OCP (OpenShift) services, including but not limited to Compute Engine, Kubernetes Engine (GKE), BigQuery, Cloud Storage, Pub/Sub, Data Fusion, Airflow, Cloud Functions, and Cloud SQL.
- Proficiency in scripting languages like Python, Bash, or PowerShell for automation.
- Familiarity with DevOps tools and CI/CD processes (e.g., GitLab CI, Cloud Build, Azure DevOps, Jenkins).

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
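The GKE/OpenShift deployment work described above ultimately produces Kubernetes manifests. As a sketch of the Deployment object's shape, the helper below builds one as a plain dict; the name and image are placeholders, and a real workflow would serialize this to YAML for `kubectl` or a GitOps pipeline.

```python
def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

manifest = deployment_manifest("web", "gcr.io/example/web:1.0", replicas=3)
```

Generating manifests from code like this, rather than hand-editing YAML, is one common way teams keep deployments consistent across environments.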

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

The ideal candidate should possess a minimum of 2 years of experience in the field. You should be able to join immediately or within a maximum of 30 days. Your responsibilities will include:
- Hands-on experience with network, security, infrastructure, and cloud monitoring and observability tools such as NMS, ManageEngine, SIEM, SolarWinds, Motadata, etc.
- Understanding of network monitoring protocols such as SNMP, Syslog, and NetFlow.
- Knowledge of Microsoft authentication and monitoring protocols, including WMI, WinRM, LDAP, Kerberos, NTLM, and Basic authentication.
- Understanding of Windows and Linux operating systems, infrastructure monitoring (e.g., SCCM), and web server performance monitoring.
- Familiarity with network security products such as firewalls, proxies, load balancers, and WAFs.
- Understanding of SAN, NAS, and RAID technologies.
- Knowledge of SQL and MySQL deployment and of monitoring performance statistics.
- Understanding of infrastructure and application availability SLAs, with experience troubleshooting performance challenges and suggesting best practices.
- Developing platform-specific alerts and reports based on customer requirements.

If you meet the above criteria and are looking for a challenging opportunity, we would like to hear from you.
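The Syslog knowledge listed above comes down to one piece of arithmetic: a message's PRI value encodes facility and severity as PRI = facility × 8 + severity (per RFC 5424). A quick sketch:

```python
def decode_pri(pri):
    """Split a syslog PRI value into (facility, severity) per RFC 5424."""
    if not 0 <= pri <= 191:
        raise ValueError("PRI must be in 0..191")
    return divmod(pri, 8)  # quotient is facility, remainder is severity

# A message starting with <165> decodes to facility 20 (local4),
# severity 5 (notice): 20 * 8 + 5 = 165.
```

The same decoding underlies filter rules in most Syslog collectors, so it is a fair warm-up question for any monitoring role.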

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Job Description:
- Experience maintaining and supporting solutions in a cloud-based environment (GCP or AWS).
- Experience working with various monitoring tools (ELK, Dynatrace, CloudWatch, Cloud Logging, Cloud Monitoring, New Relic).
- Ensure monitoring and self-healing strategies are implemented and maintained to proactively prevent production incidents.
- Perform root cause analysis of production issues.
- Design and manage on-call and escalation processes.
- Participate in the design and implementation of serviceability solutions for monitoring and alerting.
- Debug production issues across services and levels of the stack.
- Participate in the definition of SLIs and SLOs to demonstrate maturity, efficiency, and value to our business partners.
- Collaborate closely with the platform engineering team to establish and improve production support approaches.
- Participate in out-of-business-hours deployments and support (on rotation with team members).
- Familiarity and comfort with agile development techniques.
- Experience interacting with and testing APIs.

Requirements:
- Perform root cause analysis of production issues.
- Design and manage on-call and escalation processes.
- Participate in the design and implementation of observability solutions for monitoring and alerting.
- Debug production issues across services and levels of the stack.
- Participate in the definition of SLIs and SLOs to demonstrate maturity, efficiency, and value to our business partners.
- Collaborate closely with the platform engineering team to establish and improve production support approaches.
- L3 support experience is an asset.

What We Offer:
- Competitive salaries and comprehensive health benefits.
- Flexible work hours and remote work options.
- Professional development and training opportunities.
- A supportive and inclusive work environment.
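Defining SLIs and SLOs, as mentioned above, is concrete arithmetic: an availability SLI is good events over total events, and the error budget is whatever failure the SLO permits. A small sketch with illustrative numbers:

```python
def availability_sli(good, total):
    """Availability SLI: fraction of successful events."""
    return good / total if total else 1.0

def error_budget_remaining(good, total, slo=0.999):
    """Fraction of the error budget still unspent (negative means blown)."""
    allowed = (1 - slo) * total   # errors the SLO permits over this window
    spent = total - good          # errors actually observed
    return 1 - spent / allowed if allowed else 1.0

# Example: 1,000,000 requests with 500 failures against a 99.9% SLO.
# The SLO allows 1,000 errors; 500 were spent, so half the budget remains.
```

Tracking the budget rather than the raw SLI is what turns an SLO into a decision tool: a fast-burning budget justifies pausing risky deployments.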

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

At NiCE, we embrace challenges by pushing our limits and striving for excellence. We are a team of ambitious individuals who are dedicated to being game changers in our industry. If you share our passion for setting high standards and exceeding them, we have the perfect career opportunity to ignite your professional growth.

As a Robotic Process Automation Engineer at NiCE, you will collaborate with Professional Services teams, Solution Architects, and Engineering teams to oversee the onboarding of on-prem customers to Azure Cloud and the automation of customer data ingestion solutions. Working closely with the US and Pune Cloud Services and Operations teams, as well as support teams worldwide, you will play a key role in designing and implementing Robotic Process Automation workflows for both attended and unattended processes. Your responsibilities will include enhancing cloud automation workflows, improving cloud monitoring and self-healing capabilities, and ensuring the reliability, scalability, and security of our infrastructure. We value innovative ideas, flexible work methods, knowledge collaboration, and positive vibes within our team culture.

Key Responsibilities:
- Implement custom deployments and data migration to Azure for NICE Public Safety product suites.
- Develop and maintain Robotic Process Automation for customer onboarding, deployment, and testing processes.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools.
- Track effort on tasks accurately and collaborate effectively with cross-functional teams.
- Adhere to best practices, quality standards, and guidelines throughout all project phases.
- Travel to customer sites when necessary and conduct work professionally and efficiently.

Qualifications:
- College degree in Computer Science preferred.
- Strong English verbal and written communication skills.
- Proficiency in Java, C#, SQL, Linux, and Microsoft Server.
- Experience with enterprise software integration.
- Excellent organizational and analytical skills.
- Ability to work well in a team environment and prioritize tasks effectively.
- Fast learner with a proactive approach to learning new technologies.
- Capacity to multitask and remain focused under pressure.

Join NiCE, a global company that is reshaping the market with a team of top talents who thrive in a fast-paced, collaborative, and innovative environment. As a NiCEr, you will have endless opportunities for career growth and development across various roles and locations. If you are driven by passion, innovation, and continuous improvement, NiCE is the place for you!

NiCE-FLEX Hybrid Model: NiCE operates on the NiCE-FLEX hybrid model, offering maximum flexibility with 2 days of office work and 3 days of remote work each week. Office days focus on face-to-face interactions, fostering teamwork, creativity, and innovation.

Requisition ID: 8072
Reporting into: Director
Role Type: Individual Contributor

About NiCE: NICE Ltd. (NASDAQ: NICE) is a global leader in software products used by over 25,000 businesses worldwide. Our solutions are trusted by 85 of the Fortune 100 corporations to deliver exceptional customer experiences, combat financial crime, and ensure public safety. With over 8,500 employees across 30+ countries, NiCE is known for its innovation in AI, cloud, and digital technologies.

Posted 1 month ago

Apply

10.0 - 15.0 years

8 - 12 Lacs

Greater Noida

Work from Office

Job Description: Build, provision and maintain Linux Operating system on bare-metal Cisco UCS blades for Oracle RAC infrastructure. Deploy and Manage Linux & Windows operating system hosts on VMware and RHV/OVM/oVirt/KVM infrastructure. Deploy VMs on cloud infrastructure like OCI/Azure and apply SEI compliance & hardening customizations. Create Terraform Code to deploy VMs and application in OCI/Azure/AWS cloud and on-prem VMWare/KVM environment Design, implement and manage the lifecycle of automation/orchestration software like Ansible, Terraform, Jenkins, Gitlab Strong automation expertise with scripting languages like bash, perl, Ansible, Python Act as last level of escalation on the Technical services team for all System related issues Work with other infrastructure engineers to resolve Networking and Storage subsystem related issues Analyze production environment to detect critical deficiencies and recommend solutions for improvement Ensure that the operating system adheres to SEI security and compliance requirements Monitor the infrastructure and respond to alerts and automate monitoring tools. Skills Bachelor’s Degree in Computer Science in related discipline or equivalent work experience. A minimum of 5 years supporting multiple environments such as production, pre-production and development. This role is a very hands-on role with shell, Python scripting and automation of Infrastructure using Ansible and Terraform. Working experience with Git for code management and follow the DevOps principles. Follow the infrastructure as code principal and develop an automation to daily and repeatable setup and configuration. 
Install and configure Linux on UCS blades and on VMware and KVM/oVirt-based virtualization platforms. Perform bare-metal installation of UCS nodes, with a good understanding of diagnosing hardware-related issues and working with the datacenter team to address them. Good understanding of the TCP/IP stack, network VLANs, and subnets; troubleshoot firewall and network issues that occur in the environment and work with the network team to resolve them. Good understanding of SAN and NAS environments with multipath/PowerPath support. Support the Oracle RAC database environment. Work with L1 and L2 teams to troubleshoot issues.

Posted 1 month ago

Apply

12.0 - 15.0 years

55 - 60 Lacs

Ahmedabad, Chennai, Bengaluru

Work from Office

Dear Candidate, We're hiring a Cloud Network Engineer to design and manage secure, performant cloud networks. Key Responsibilities: Design VPCs, subnets, and routing policies. Configure load balancers, firewalls, and VPNs. Optimize traffic flow and network security. Required Skills & Qualifications: Experience with cloud networking in AWS/Azure/GCP. Understanding of TCP/IP, DNS, VPNs. Familiarity with tools like Palo Alto, Cisco, or Fortinet. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

As an experienced Cloud Monitoring & SOC Specialist, you will be leading the optimization and integration of the monitoring ecosystem. Your passion for transforming data into actionable insights and reducing alert fatigue will be instrumental in this role. Your responsibilities will include consolidating and integrating various tools such as SolarWinds, Instana, Google Cloud Operations, VMware Log Insight, and Rapid7 into a unified monitoring ecosystem. You will architect clear and efficient monitoring and incident-response workflows, implementing centralized AI-driven alerting to minimize noise and accelerate detection. In addition, you will be responsible for developing methods for proactive monitoring and continuous improvement by learning from incidents and iterating on processes. Configuring and maintaining essential NOC/SOC dashboards and monthly capacity reports for leadership visibility will also be part of your role. To qualify for this position, you should have deep technical expertise with 8-10 years of experience in monitoring architecture, tool integration, and SOC operations. Hands-on experience with infrastructure monitoring, APM, cloud (GCP), centralized logging, and SIEM solutions is required. Familiarity with tools such as SolarWinds, Instana, Google Cloud Operations, VMware Log Insight, and Rapid7 is considered a strong advantage. A proven track record of designing effective alert rules, incident-response playbooks, and automated workflows is essential. Experience in writing and refining monitoring procedures, SLAs, runbooks, and regular capacity/performance reports is also required. Strong communication skills and the ability to collaborate with DevOps, SecOps, and IT teams to drive continuous improvement are key attributes for success in this role.
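Centralized alerting that "minimizes noise," as this posting describes, usually starts with deduplication: alerts sharing a fingerprint inside a suppression window collapse into one notification. The sketch below is a minimal plain-Python illustration; the fingerprint fields and the 300-second window are assumptions for the example, not the behavior of any specific tool mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class AlertDeduplicator:
    """Collapse repeated alerts that share a fingerprint within a window.

    A minimal noise-reduction sketch; real stacks (SolarWinds, Instana,
    Google Cloud Operations, etc.) apply far richer correlation rules.
    """
    window_seconds: float = 300.0
    _last_seen: dict = field(default_factory=dict)

    def should_notify(self, source: str, severity: str, resource: str, now: float) -> bool:
        key = (source, severity, resource)   # illustrative fingerprint choice
        last = self._last_seen.get(key)
        # Updating on suppressed alerts too means a steady stream of
        # duplicates stays quiet until it pauses for a full window.
        self._last_seen[key] = now
        return last is None or (now - last) >= self.window_seconds
```

One design choice worth noting: refreshing the timestamp on suppressed alerts (as above) quiets a continuous duplicate stream entirely, whereas keying off the first occurrence would re-notify every window.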

Posted 2 months ago

Apply

10.0 - 12.0 years

30 - 35 Lacs

Noida

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. 
- Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : - 10+ years of hands-on experience as an AWS Engineer or similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. - Proven ability to implement and enforce Security & Compliance measures, including IAM role boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection).
- Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.
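The retry/backoff pattern listed among the resilience features above can be sketched in a few lines of plain Python. This is a generic illustration, not AWS SDK code; the base delay, cap, and full-jitter strategy are assumed values, and a real system would route exhausted events to a dead-letter queue rather than re-raise.

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield exponential backoff delays with full jitter (illustrative values)."""
    for attempt in range(max_retries):
        exp = min(cap, base * (2 ** attempt))
        yield random.uniform(0, exp)

def call_with_retry(fn, max_retries: int = 5):
    """Retry fn on exception; on exhaustion, re-raise the last error
    (a production system would publish the failed event to a DLQ instead)."""
    last_err = None
    for delay in backoff_delays(max_retries):
        try:
            return fn()
        except Exception as err:  # sketch only; narrow this in real code
            last_err = err
            # time.sleep(delay) in production; omitted so the sketch runs fast
    raise last_err
```

Full jitter spreads retries out so a fleet of failing callers does not hammer a recovering dependency in lockstep.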

Posted 2 months ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Proficiency in Google Cloud Platform (GCP) services, including Dataflow, Datastream, Dataproc, BigQuery, and Cloud Storage. Strong experience with Apache Spark and Apache Flink for distributed data processing. Knowledge of real-time data streaming technologies (e.g., Apache Kafka, Pub/Sub). Familiarity with data orchestration tools like Apache Airflow or Cloud Composer. Expertise in Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager. Experience with CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build. Knowledge of containerization and orchestration tools like Docker and Kubernetes. Strong scripting skills for automation (e.g., Bash, Python). Experience with monitoring tools like Cloud Monitoring, Prometheus, and Grafana. Familiarity with logging tools like Cloud Logging or the ELK Stack.
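The streaming stack this posting names (Dataflow, Flink, Kafka/Pub/Sub) centers on windowed aggregation. A tumbling-window count, the simplest windowing mode those engines offer, can be sketched in plain Python; the event shape and 60-second window here are assumptions for illustration only.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences per key within each window."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # align to window
        counts[(window_start, key)] += 1
    return dict(counts)
```

Real engines add what this sketch omits: out-of-order events, watermarks, and triggers for emitting partial results.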

Posted 2 months ago

Apply

10.0 - 12.0 years

30 - 35 Lacs

Pune

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. 
- Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : - 10+ years of hands-on experience as an AWS Engineer or similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. - Proven ability to implement and enforce Security & Compliance measures, including IAM role boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection).
- Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.

Posted 2 months ago

Apply

6.0 - 11.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: Java AWS Developer. Experience: 6-12 Years. Location: Bangalore. Requirements: Experience in Java, J2EE, Spring Boot. Experience in design, Kubernetes, AWS (EKS, EC2) is needed. Experience with AWS cloud monitoring tools such as Datadog and CloudWatch, and with Lambda, is needed. Experience with XACML authorization policies. Experience in NoSQL and SQL databases such as Cassandra, Aurora, Oracle. Experience with web services and SOA (SOAP as well as RESTful with JSON formats), with messaging (Kafka). Hands-on with development and test automation tools/frameworks (e.g., BDD and Cucumber).

Posted 2 months ago

Apply

4.0 - 7.0 years

22 - 25 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Sr. Development Engineer - Cloud Backend Company: Bluecopa Location: Onsite Hyderabad, India Industry: Financial Services Function: Information Technology Experience Required: 4-7 years Salary Range: ₹22 - ₹25 LPA Working Days: 5 days/week Education: Bachelor's Degree (Graduation mandatory) Age Limit: Up to 35 years About the Company Bluecopa is a fast-growing financial operations automation platform helping finance teams eliminate spreadsheet chaos. The platform integrates real-time data, automates workflows, and delivers insights for smarter business decisions. Backed by top-tier investors and driven by a leadership team with deep enterprise software expertise. Role Overview As a Sr. Development Engineer - Cloud Backend, you'll play a pivotal role in designing and developing scalable backend services and deploying them on cloud-native platforms. You'll be part of a highly skilled team working with Kubernetes and major cloud platforms like GCP (preferred), AWS, Azure, or OCI. Key Responsibilities Develop scalable and reliable backend systems and cloud-native applications. Build and maintain RESTful APIs, microservices, and asynchronous systems. Manage deployments and operations on Kubernetes. Implement CI/CD pipelines and infrastructure automation. Collaborate closely with DevOps, frontend, and product teams. Ensure clean, maintainable code through testing and documentation. Mandatory Skills Kubernetes: Minimum 2 years hands-on production experience. Cloud Platforms: Deep expertise in one (GCP preferred), AWS, Azure, or OCI. Backend Programming: Proficient in Python, Java, or Kotlin (at least one). Strong backend architecture and microservices experience. Docker, containerization, and cloud-native deployments. Preferred Skills Experience with multiple cloud platforms. Infrastructure-as-Code: Terraform, CloudFormation. Observability: Prometheus, Grafana, Cloud Monitoring. Experience with data platforms like BigQuery or Snowflake.
Nice-to-Have Exposure to NoSQL , event-driven architectures, or serverless frameworks. Experience with managed data pipelines or data lake technologies.

Posted 2 months ago

Apply

0.0 years

9 - 14 Lacs

Noida

Work from Office

Required Skills: GCP Proficiency Strong expertise in Google Cloud Platform (GCP) services and tools, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, Cloud Load Balancing, IAM, Google Workflows, Google Cloud Pub/Sub, App Engine, Cloud Functions, Cloud Run, API Gateway, Cloud Build, Cloud Source Repositories, Artifact Registry, Google Cloud Monitoring, Logging, and Error Reporting. Cloud-Native Applications Experience in designing and implementing cloud-native applications, preferably on GCP. Workload Migration Proven expertise in migrating workloads to GCP. CI/CD Tools and Practices Experience with CI/CD tools and practices. Python and IaC Proficiency in Python and Infrastructure as Code (IaC) tools such as Terraform. Responsibilities: Cloud Architecture and Design Design and implement scalable, secure, and highly available cloud infrastructure solutions using Google Cloud Platform (GCP) services and tools such as Compute Engine, Kubernetes Engine, Cloud Storage, Cloud SQL, and Cloud Load Balancing. Cloud-Native Applications Design Develop high-level architecture designs and guidelines for the development, deployment, and lifecycle management of cloud-native applications on GCP, ensuring they are optimized for security, performance, and scalability using services like App Engine, Cloud Functions, and Cloud Run. API Management Develop and implement guidelines for securely exposing interfaces exposed by the workloads running on GCP, along with granular access control using the IAM platform, RBAC platforms, and API Gateway. Workload Migration Lead the design and migration of on-premises workloads to GCP, ensuring minimal downtime and data integrity. Skills (competencies)

Posted 2 months ago

Apply

8.0 - 10.0 years

45 - 55 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: Navplus) (*Note: This is a requirement for one of Uplers' clients - Emedgene, an Illumina company) What do you need for this opportunity? Must have skills required: Cloud monitoring, DSL, CI/CD, pytest, REST API, Docker, Kubernetes, MySQL, Python, SaaS Emedgene, an Illumina company, is looking for: Automation Software Engineer Lead Emedgene utilizes artificial intelligence and genomic data science to accelerate medical research and guide healthcare decisions at an unprecedented scale. Our technology is rapidly being adopted by leading medical centers, research institutes, and clinical laboratories and is helping to save and improve lives every day. We are looking for the best and the brightest to share our innovative technology with the world. Position Summary This is not a traditional QA role. We are seeking a highly skilled software engineer with a strong foundation in Python and advanced software engineering concepts to design and build a domain-specific language (DSL) for automating complex testing scenarios. This role focuses on engineering solutions, not just writing test scripts, and requires a deep understanding of Python's advanced features and modern software design. Responsibilities Architect and implement a custom automation framework that extends beyond traditional test scripts, including designing a DSL for automating manual test workflows. Drive the development of advanced testing solutions leveraging Python's core features such as metaprogramming, decorators, hooks, and concurrency. Develop scalable and maintainable test frameworks and integrate them into a robust CI/CD pipeline. Collaborate with development and product teams to review specifications (SRS) and ensure test automation aligns with system design and product goals.
Optimize performance and reliability of test execution across APIs, databases, and microservices. Develop strategies to increase automation coverage across multiple layers, including integration and system-level testing. Mentor team members in advanced Python techniques, best practices, and automation design patterns. Continuously analyze and improve testing workflows and processes for greater efficiency and scalability. Qualifications Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. 8+ years of software engineering experience with at least 4+ years of advanced Python development, including experience with metaprogramming, concurrency (e.g., asyncio, threading), and Python internals. Expertise in building frameworks with pytest, including advanced use of hooks, fixtures, and plugins. Strong understanding of REST API testing, including schema validation, HTTP protocols, and error handling. Proficient in RDBMS concepts, preferably MySQL, including schema design, query optimization, and performance tuning. Hands-on experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitHub Actions). Familiarity with cloud platforms such as AWS and cloud monitoring tools like CloudWatch. Strong understanding of Agile methodologies and experience working in a fast-paced, iterative development environment. Exceptional problem-solving and analytical skills with a focus on system-wide impact and performance optimization. Preferred Skills Experience with designing domain-specific languages (DSLs) or other advanced automation frameworks. Familiarity with containerized environments (e.g., Docker, Kubernetes) and distributed systems testing. Experience with SaaS-based testing solutions and large-scale data processing systems. Why Join Us Be part of a forward-thinking team developing industry-leading healthcare solutions. Work on challenging projects that directly impact lives.
Collaborate with talented individuals in a dynamic and innovative environment. Familiarity with Agile development methodologies and a track record of delivering high-quality software in a fast-paced environment. Familiarity with SaaS and cloud platform tools such as AWS CloudWatch. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
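A decorator-based step registry is one common starting point for the kind of testing DSL this posting describes. The sketch below uses only the standard library; the step names, context dict, and runner are hypothetical illustrations, not Emedgene's framework.

```python
STEPS = {}

def step(name):
    """Register a callable under a human-readable step name (a decorator
    factory -- one of the Python features the posting calls out)."""
    def register(fn):
        STEPS[name] = fn
        return fn
    return register

@step("create user")
def create_user(ctx):
    # Illustrative step: in a real framework this would call the system
    # under test; here it just mutates shared scenario state.
    ctx["users"] = ctx.get("users", 0) + 1

@step("assert user count")
def assert_user_count(ctx, expected):
    assert ctx["users"] == expected

def run_scenario(lines):
    """Execute a scenario written as ('step name', *args) tuples,
    threading a shared context dict through each step."""
    ctx = {}
    for name, *args in lines:
        STEPS[name](ctx, *args)
    return ctx
```

From here, a real DSL would parse the scenario from text (Gherkin-style) and plug the runner into pytest via hooks and fixtures rather than a hand-rolled loop.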

Posted 2 months ago

Apply

7.0 - 10.0 years

16 - 27 Lacs

Bengaluru

Work from Office

Cloud Monitoring and Compliance Engineer - AM - BLR - J49183 You will have a wide range of responsibilities which will include: Working alongside the content management team to provide visibility of compliance with security guardrails. Customizing and enhancing Cloud Security Posture Management and Cloud Workload Protection (Microsoft Defender for Cloud features) to meet KPMG-specific requirements. Onboarding additional tenants and cloud hosting providers to the service. Planning and implementation of automated remediation activities. Liaising with vendors to fully realize investment in their products and influence future roadmaps. Day-to-day management, troubleshooting, and housekeeping of the toolsets. Collaborating with other GISG teams to understand their requirements and look for new opportunities for the service. Ensuring work is completed in such a way as to comply with established compliance and other internal control requirements. Using DevOps to record all project tasks. Minimum of 7 years in IT with 4+ years of experience working with a major cloud service provider. Bachelor's Degree from an accredited college or university, or equivalent work experience, preferably in Computer Science or a related field. Robust technical and implementation knowledge of Cloud Security Posture Management technologies (Microsoft MDC, Twistlock, RedLock). Experienced in securing cloud environments and cloud systems, including topics around certification and compliance. Good understanding of API-based security and compliance standards. Understanding of exploits, malware, ransomware, etc., their creation, activation, and detection methods. Knowledge of web application architecture and system administration. Experienced in building complex custom RQL, KQL, or SQL queries. Experienced with Microsoft Azure, AWS, or GCP installation, configuration, and administration of security features and services. Programming experience with Python or PowerShell.
Qualification - BE-Comp/IT,BE-Other,BTech-Comp/IT,BTech-Other,MBA,MCA
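Guardrail compliance checking of the sort this role describes reduces to evaluating rules over resource configurations. Below is a minimal Python sketch; the resource fields and rule names are made up for illustration and are not MDC, RQL, or KQL syntax.

```python
def evaluate_guardrails(resources, rules):
    """Return a list of (resource id, rule name) violations.

    Each rule is a (name, predicate) pair; the predicate returns True
    when the resource is compliant. The schema here is illustrative.
    """
    violations = []
    for res in resources:
        for name, compliant in rules:
            if not compliant(res):
                violations.append((res["id"], name))
    return violations

# Hypothetical guardrails, in the spirit of CSPM policy checks.
RULES = [
    ("storage-encrypted", lambda r: r.get("encrypted", False)),
    ("no-public-access", lambda r: not r.get("public", False)),
]
```

A real CSPM product layers onto this: resource inventory via cloud APIs, severity scoring, and automated remediation hooks for each violation.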

Posted 2 months ago

Apply

11.0 - 16.0 years

11 - 16 Lacs

Gurgaon, Haryana, India

On-site

How You Will Make an Impact / Job Responsibilities: Design, implement, and maintain highly available, scalable, and reliable infrastructure. Work closely with clients and deliver outcomes per the SoW and SLA. Architect, design, develop, deploy, and document new systems, or maintain existing systems on cloud platforms based on specifications. Collaborate with, provide continuous guidance to, and influence design decisions of various Agile development teams for various existing and new products and solutions w.r.t. cloud enablement, migration, and infrastructure needs, to adopt and promote cloud best practices and economies of scale. Engage customers to understand their use cases and operational needs, as well as engage with cloud platform vendors to understand their technology roadmaps and evolution, and translate these into comprehensive technology guidance for various product teams. Design for cloud security and compliance; ensure solution and operations reliability. Drive and conduct audits on current cloud utilization and recommend cloud optimization and security changes. Act as a mentor and technical consultant for the various teams. Desired/Preferred Skills: Responsibilities: Platform Management : Oversee the architecture and management of GCP resources, particularly BigQuery, Databricks, and Kubernetes. Architecting Solutions : Design scalable and efficient data architectures tailored to specific business needs. Optimization of Workloads : Analyze and optimize data processing workloads to improve performance and reduce costs. Performance Measurement : Implement monitoring and performance metrics for data pipelines and applications. Cost Monitoring : Develop strategies for cost-effective resource usage, including budget tracking and optimization techniques. Collaboration : Work closely with data engineers, developers, and stakeholders to ensure seamless integration and functionality.
Required Skills: GCP Expertise : In-depth knowledge of Google Cloud Platform services, particularly BigQuery, Databricks, and Kubernetes. Data Engineering : Experience with data modeling, ETL processes, and data warehousing. Performance Tuning : Proven track record in optimizing data processing and query performance in BigQuery. Kubernetes Management : Proficient in deploying and managing containerized applications using Kubernetes. Cost Management : Familiarity with GCP billing and cost management tools to monitor and optimize cloud expenditure. Monitoring Tools : Experience with GCP monitoring and logging tools (e.g., Stackdriver, Cloud Monitoring). Years of Experience: 11 to 16 years of experience, including 7+ years as a GCP Solution Architect. Education Qualification: B.E/B.Tech, BCA, or MCA equivalent. GCP - Professional Cloud Architect

Posted 2 months ago

Apply

11.0 - 16.0 years

11 - 16 Lacs

Chennai, Tamil Nadu, India

On-site

How You Will Make an Impact / Job Responsibilities: Design, implement, and maintain highly available, scalable, and reliable infrastructure. Work closely with clients and deliver outcomes per the SoW and SLA. Architect, design, develop, deploy, and document new systems, or maintain existing systems on cloud platforms based on specifications. Collaborate with, provide continuous guidance to, and influence design decisions of various Agile development teams for various existing and new products and solutions w.r.t. cloud enablement, migration, and infrastructure needs, to adopt and promote cloud best practices and economies of scale. Engage customers to understand their use cases and operational needs, as well as engage with cloud platform vendors to understand their technology roadmaps and evolution, and translate these into comprehensive technology guidance for various product teams. Design for cloud security and compliance; ensure solution and operations reliability. Drive and conduct audits on current cloud utilization and recommend cloud optimization and security changes. Act as a mentor and technical consultant for the various teams. Desired/Preferred Skills: Responsibilities: Platform Management : Oversee the architecture and management of GCP resources, particularly BigQuery, Databricks, and Kubernetes. Architecting Solutions : Design scalable and efficient data architectures tailored to specific business needs. Optimization of Workloads : Analyze and optimize data processing workloads to improve performance and reduce costs. Performance Measurement : Implement monitoring and performance metrics for data pipelines and applications. Cost Monitoring : Develop strategies for cost-effective resource usage, including budget tracking and optimization techniques. Collaboration : Work closely with data engineers, developers, and stakeholders to ensure seamless integration and functionality.
Required Skills: GCP Expertise : In-depth knowledge of Google Cloud Platform services, particularly BigQuery, Databricks, and Kubernetes. Data Engineering : Experience with data modeling, ETL processes, and data warehousing. Performance Tuning : Proven track record in optimizing data processing and query performance in BigQuery. Kubernetes Management : Proficient in deploying and managing containerized applications using Kubernetes. Cost Management : Familiarity with GCP billing and cost management tools to monitor and optimize cloud expenditure. Monitoring Tools : Experience with GCP monitoring and logging tools (e.g., Stackdriver, Cloud Monitoring). Years of Experience: 11 to 16 years of experience, including 7+ years as a GCP Solution Architect. Education Qualification: B.E/B.Tech, BCA, or MCA equivalent. GCP - Professional Cloud Architect

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies