2.0 - 6.0 years
0 Lacs
maharashtra
On-site
The ideal candidate should possess a minimum of 2 years of experience in the field and be able to join immediately or within a maximum of 30 days. Your responsibilities will include:
- Hands-on experience with network, security, infrastructure, and cloud monitoring and observability tools such as NMS, ManageEngine, SIEM, SolarWinds, Motadata, etc.
- Understanding of network monitoring protocols such as SNMP, Syslog, NetFlow, etc.
- Knowledge of Microsoft authentication and monitoring protocols and methods, including WMI, WinRM, LDAP, Kerberos, NTLM, Basic, etc.
- Understanding of Windows and Linux operating systems, infrastructure monitoring (e.g., SCCM), and web server performance monitoring.
- Familiarity with network security products such as firewalls, proxies, load balancers, and WAFs.
- Understanding of SAN, NAS, and RAID technologies.
- Knowledge of SQL and MySQL deployment and of monitoring performance statistics.
- Understanding of infrastructure and application availability SLAs, with experience in troubleshooting performance challenges and suggesting best practices.
- Developing platform-specific alerts and reports based on customer requirements.
If you meet the above criteria and are looking for a challenging opportunity, we would like to hear from you.
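As an illustration of the Syslog monitoring listed above, here is a minimal sketch (not from the posting) of a UDP syslog receiver in Python that parses the RFC 3164 priority field and flags high-severity messages; the host, port, and severity threshold are assumptions.

```python
import re
import socketserver

SEVERITY_ALERT_THRESHOLD = 3  # assumption: alert on "error" (3) and worse

class SyslogHandler(socketserver.BaseRequestHandler):
    """Receive RFC 3164-style syslog messages over UDP and flag severe ones."""

    def handle(self):
        data = self.request[0].decode("utf-8", errors="replace").strip()
        match = re.match(r"<(\d{1,3})>(.*)", data)
        if not match:
            return  # not a syslog-framed message; ignore
        pri = int(match.group(1))
        severity = pri % 8   # per RFC 3164: PRI = facility * 8 + severity
        facility = pri // 8
        if severity <= SEVERITY_ALERT_THRESHOLD:
            print(f"ALERT fac={facility} sev={severity} "
                  f"from {self.client_address[0]}: {match.group(2)}")

if __name__ == "__main__":
    # 5140 avoids needing root for the privileged default port 514 (assumption)
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as server:
        server.serve_forever()
```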
Posted 1 day ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Job Description
Experience maintaining and supporting solutions in a cloud-based environment (GCP or AWS). Experience working with various monitoring tools (ELK, Dynatrace, CloudWatch, Cloud Logging, Cloud Monitoring, New Relic).
- Ensure monitoring and self-healing strategies are implemented and maintained to proactively prevent production incidents.
- Perform root cause analysis of production issues.
- Design and manage on-call and escalation processes.
- Participate in the design and implementation of serviceability solutions for monitoring and alerting.
- Debug production issues across services and all levels of the stack.
- Participate in the definition of SLIs and SLOs to demonstrate maturity, efficiency, and value to our business partners.
- Collaborate closely with the platform engineering team to establish and improve production support approaches.
- Participate in out-of-business-hours deployments and support (rotation with team members).
- Familiar and comfortable with agile development techniques.
- Experience interacting with and testing APIs.
Requirements
- Perform root cause analysis of production issues.
- Design and manage on-call and escalation processes.
- Participate in the design and implementation of observability solutions for monitoring and alerting.
- Debug production issues across services and all levels of the stack.
- Participate in the definition of SLIs and SLOs to demonstrate maturity, efficiency, and value to our business partners.
- Collaborate closely with the platform engineering team to establish and improve production support approaches.
- L3 support experience is an asset.
What We Offer
- Competitive salaries and comprehensive health benefits.
- Flexible work hours and remote work options.
- Professional development and training opportunities.
- A supportive and inclusive work environment.
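The SLI/SLO work described above usually boils down to comparing measured availability against a target and tracking the remaining error budget. A minimal sketch in Python follows; the target, window, and request counts are made-up illustration values, not from the posting.

```python
def error_budget_report(total_requests: int, failed_requests: int, slo_target: float) -> dict:
    """Compare a measured availability SLI against an SLO and report the error budget."""
    sli = 1 - failed_requests / total_requests            # availability SLI for the window
    allowed_failures = total_requests * (1 - slo_target)  # total error budget in requests
    budget_remaining = 1 - failed_requests / allowed_failures if allowed_failures else 0.0
    return {
        "sli": round(sli, 5),
        "slo": slo_target,
        "budget_remaining_pct": round(budget_remaining * 100, 1),
        "breached": sli < slo_target,
    }

# Example: 30-day window, 99.9% availability SLO (illustrative numbers)
print(error_budget_report(total_requests=12_500_000, failed_requests=6_200, slo_target=0.999))
```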
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
At NiCE, we embrace challenges by pushing our limits and striving for excellence. We are a team of ambitious individuals who are dedicated to being game changers in our industry. If you share our passion for setting high standards and exceeding them, we have the perfect career opportunity to ignite your professional growth.
As a Robotic Process Automation Engineer at NiCE, you will collaborate with Professional Services teams, Solution Architects, and Engineering teams to oversee the onboarding of on-prem workloads to Azure Cloud and the automation of customer data ingestion solutions. Working closely with the US and Pune Cloud Services and Operations teams, as well as support teams worldwide, you will play a key role in designing and implementing Robotic Process Automation workflows for both attended and unattended processes. Your responsibilities will include enhancing cloud automation workflows, improving cloud monitoring and self-healing capabilities, and ensuring the reliability, scalability, and security of our infrastructure. We value innovative ideas, flexible work methods, knowledge collaboration, and positive vibes within our team culture.
Key Responsibilities:
- Implement custom deployments and data migration to Azure for NICE Public Safety product suites.
- Develop and maintain Robotic Process Automation for customer onboarding, deployment, and testing processes.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools.
- Track effort on tasks accurately and collaborate effectively with cross-functional teams.
- Adhere to best practices, quality standards, and guidelines throughout all project phases.
- Travel to customer sites when necessary and conduct work professionally and efficiently.
Qualifications:
- College degree in Computer Science preferred.
- Strong English verbal and written communication skills.
- Proficiency in Java, C#, SQL, Linux, and Microsoft Server.
- Experience with enterprise software integration.
- Excellent organizational and analytical skills.
- Ability to work well in a team environment and prioritize tasks effectively.
- Fast learner with a proactive approach to learning new technologies.
- Capacity to multitask and remain focused under pressure.
Join NiCE, a global company that is reshaping the market with a team of top talents who thrive in a fast-paced, collaborative, and innovative environment. As a NiCEr, you will have endless opportunities for career growth and development across various roles and locations. If you are driven by passion, innovation, and continuous improvement, NiCE is the place for you!
NiCE-FLEX Hybrid Model: NiCE operates on the NiCE-FLEX hybrid model, offering maximum flexibility with 2 days of office work and 3 days of remote work each week. Office days focus on face-to-face interactions, fostering teamwork, creativity, and innovation.
Requisition ID: 8072
Reporting into: Director
Role Type: Individual Contributor
About NiCE: NICE Ltd. (NASDAQ: NICE) is a global leader in software products used by over 25,000 businesses worldwide. Our solutions are trusted by 85 of the Fortune 100 corporations to deliver exceptional customer experiences, combat financial crime, and ensure public safety. With over 8,500 employees across 30+ countries, NiCE is known for its innovation in AI, cloud, and digital technologies.
Posted 1 week ago
10.0 - 15.0 years
8 - 12 Lacs
Greater Noida
Work from Office
Job Description:
- Build, provision, and maintain the Linux operating system on bare-metal Cisco UCS blades for Oracle RAC infrastructure.
- Deploy and manage Linux and Windows operating system hosts on VMware and RHV/OVM/oVirt/KVM infrastructure.
- Deploy VMs on cloud infrastructure such as OCI/Azure and apply SEI compliance and hardening customizations.
- Create Terraform code to deploy VMs and applications in OCI/Azure/AWS cloud and in on-prem VMware/KVM environments.
- Design, implement, and manage the lifecycle of automation/orchestration software such as Ansible, Terraform, Jenkins, and GitLab.
- Strong automation expertise with scripting languages and tools such as Bash, Perl, Ansible, and Python.
- Act as the last level of escalation on the Technical Services team for all system-related issues.
- Work with other infrastructure engineers to resolve networking and storage subsystem related issues.
- Analyze the production environment to detect critical deficiencies and recommend solutions for improvement.
- Ensure that the operating system adheres to SEI security and compliance requirements.
- Monitor the infrastructure, respond to alerts, and automate monitoring tools.
Skills
- Bachelor's degree in Computer Science or a related discipline, or equivalent work experience.
- A minimum of 5 years supporting multiple environments such as production, pre-production, and development.
- This is a very hands-on role involving shell and Python scripting and automation of infrastructure using Ansible and Terraform.
- Working experience with Git for code management, following DevOps principles.
- Follow the infrastructure-as-code principle and develop automation for daily and repeatable setup and configuration.
- Install and configure Linux on UCS blades and on VMware and KVM/oVirt based virtualization platforms.
- Bare-metal installation of UCS nodes, with a good understanding of diagnosing hardware related issues and working with the datacenter team to address them.
- Good understanding of the TCP/IP stack, network VLANs, and subnets; troubleshoot firewall and network issues that occur in the environment and work with the network team to resolve them.
- Good understanding of SAN and NAS environments with multipath/PowerPath support.
- Support Oracle Database RAC environments.
- Work with L1 and L2 teams to troubleshoot.
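A common way to tie the Python scripting and Terraform requirements above together is a small wrapper that runs the Terraform CLI and summarizes the plan. This is a minimal sketch, assuming Terraform is on PATH and that the configuration directory path shown is a placeholder.

```python
import json
import subprocess
from pathlib import Path

def terraform_plan_summary(workdir: str) -> dict:
    """Run `terraform init` and `terraform plan`, then summarize planned resource changes."""
    wd = Path(workdir)
    subprocess.run(["terraform", "init", "-input=false"], cwd=wd, check=True)
    subprocess.run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=wd, check=True)
    # `terraform show -json` renders the saved plan as machine-readable JSON
    show = subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        cwd=wd, check=True, capture_output=True, text=True,
    )
    plan = json.loads(show.stdout)
    summary: dict = {}
    for change in plan.get("resource_changes", []):
        for action in change["change"]["actions"]:
            summary[action] = summary.get(action, 0) + 1
    return summary  # e.g. {"create": 3, "update": 1, "no-op": 12}

if __name__ == "__main__":
    print(terraform_plan_summary("./envs/dev"))  # hypothetical configuration directory
```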
Posted 1 week ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate,
We're hiring a Cloud Network Engineer to design and manage secure, performant cloud networks.
Key Responsibilities:
- Design VPCs, subnets, and routing policies.
- Configure load balancers, firewalls, and VPNs.
- Optimize traffic flow and network security.
Required Skills & Qualifications:
- Experience with cloud networking in AWS/Azure/GCP.
- Understanding of TCP/IP, DNS, VPNs.
- Familiarity with tools like Palo Alto, Cisco, or Fortinet.
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
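The VPC and subnet design work above can be prototyped with the AWS SDK. A minimal sketch, assuming boto3 credentials are configured; the region, CIDR ranges, and tags are illustrative, not from the posting.

```python
import boto3

# Assumptions: credentials/region come from the environment; CIDRs are illustrative.
ec2 = boto3.client("ec2", region_name="ap-south-1")

# Create a VPC and two subnets in separate Availability Zones.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "demo-vpc"}])

subnets = []
for az_suffix, cidr in [("a", "10.20.1.0/24"), ("b", "10.20.2.0/24")]:
    resp = ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock=cidr,
        AvailabilityZone=f"ap-south-1{az_suffix}",
    )
    subnets.append(resp["Subnet"]["SubnetId"])

# Route table with only the implicit local route; an IGW/NAT route would be added for public subnets.
route_table = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
for subnet_id in subnets:
    ec2.associate_route_table(RouteTableId=route_table, SubnetId=subnet_id)

print(f"VPC {vpc_id} with subnets {subnets} and route table {route_table}")
```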
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
As an experienced Cloud Monitoring & SOC Specialist, you will be leading the optimization and integration of the monitoring ecosystem. Your passion for transforming data into actionable insights and reducing alert fatigue will be instrumental in this role. Your responsibilities will include consolidating and integrating various tools such as SolarWinds, Instana, Google Cloud Operations, VMware Log Insight, and Rapid7 into a unified monitoring ecosystem. You will architect clear and efficient monitoring and incident-response workflows, implementing centralized AI-driven alerting to minimize noise and accelerate detection. In addition, you will be responsible for developing methods for proactive monitoring and continuous improvement by learning from incidents and iterating on processes. Configuring and maintaining essential NOC/SOC dashboards and monthly capacity reports for leadership visibility will also be part of your role. To qualify for this position, you should have deep technical expertise with 8-10 years of experience in monitoring architecture, tool integration, and SOC operations. Hands-on experience with infrastructure monitoring, APM, cloud (GCP), centralized logging, and SIEM solutions is required. Familiarity with tools such as SolarWinds, Instana, Google Cloud Operations, VMware Log Insight, and Rapid7 is considered a strong advantage. A proven track record of designing effective alert rules, incident-response playbooks, and automated workflows is essential. Experience in writing and refining monitoring procedures, SLAs, runbooks, and regular capacity/performance reports is also required. Strong communication skills and the ability to collaborate with DevOps, SecOps, and IT teams to drive continuous improvement are key attributes for success in this role.,
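Centralized alerting of the kind described above typically starts with deduplication and suppression logic before any AI-driven correlation. A minimal illustrative sketch in Python; the field names, severity scale, and thresholds are assumptions and are not tied to the SolarWinds, Instana, or Rapid7 APIs.

```python
from collections import defaultdict
from datetime import datetime, timedelta

SUPPRESSION_WINDOW = timedelta(minutes=10)  # assumption: repeats within 10 min are noise
MIN_SEVERITY = 3                            # assumption: 1=info ... 5=critical; forward >= 3

def correlate(alerts: list[dict]) -> list[dict]:
    """Deduplicate alerts by (source, rule) and drop low-severity noise."""
    last_seen: dict[tuple, datetime] = {}
    grouped = defaultdict(int)
    forwarded = []
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = (alert["source"], alert["rule"])
        grouped[key] += 1
        if alert["severity"] < MIN_SEVERITY:
            continue
        previous = last_seen.get(key)
        if previous and alert["timestamp"] - previous < SUPPRESSION_WINDOW:
            continue  # suppressed: the same source/rule fired recently
        last_seen[key] = alert["timestamp"]
        alert["occurrences_in_batch"] = grouped[key]
        forwarded.append(alert)
    return forwarded
```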
Posted 2 weeks ago
10.0 - 12.0 years
30 - 35 Lacs
Noida
Work from Office
About the Role: We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions.
Key Responsibilities:
- Design & Implementation: Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design.
- AWS Services Expertise: Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito.
- Infrastructure as Code (IaC): Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD.
- Serverless & Event-Driven Design: Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems.
- Cloud Monitoring & Observability: Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health.
- Security & Compliance: Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms.
- Cost Optimization: Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness).
- Scalability & Resilience: Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance.
- CI/CD Pipeline: Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery.
- Documentation & Workflow Design: Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures.
- Cross-Functional Collaboration: Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions.
- AWS Best Practices: Advocate for and ensure adherence to AWS best practices across all development and operational activities.
Required Skills & Experience:
- 10+ years of hands-on experience as an AWS Engineer or in a similar role.
- Deep expertise in AWS services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS.
- Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus.
- Extensive experience with serverless architecture and event-driven design.
- Strong understanding of cloud monitoring and observability tools: CloudWatch Logs, X-Ray, custom metrics.
- Proven ability to implement and enforce security and compliance measures, including IAM role boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, isolation patterns, and access control.
- Demonstrated experience with cost optimization techniques (S3 lifecycle policies, serverless tiers, service selection).
- Expertise in designing and implementing scalability and resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers).
- Familiarity with CI/CD pipeline concepts.
- Excellent documentation and workflow design skills.
- Exceptional cross-functional collaboration abilities.
- Commitment to implementing AWS best practices.
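The retry/backoff and DLQ pattern mentioned above can be illustrated in a few lines of Python. This is a minimal sketch; the attempt limits and delays are assumptions, and the handler and dead-letter callbacks are placeholders rather than any specific SQS integration.

```python
import random
import time

MAX_ATTEMPTS = 5
BASE_DELAY_S = 0.5   # assumption: first retry after ~0.5 s
MAX_DELAY_S = 30.0

def process_with_retry(message: dict, handler, send_to_dlq) -> bool:
    """Retry `handler(message)` with exponential backoff + full jitter, then dead-letter it."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            handler(message)
            return True
        except Exception as exc:  # in real code, catch only retryable errors
            if attempt == MAX_ATTEMPTS:
                send_to_dlq({"message": message, "error": str(exc), "attempts": attempt})
                return False
            delay = min(MAX_DELAY_S, BASE_DELAY_S * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, delay))  # full jitter avoids synchronized retries
    return False
```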
Posted 2 weeks ago
10.0 - 15.0 years
35 - 40 Lacs
Bengaluru
Work from Office
- Proficiency in Google Cloud Platform (GCP) services, including Dataflow, Datastream, Dataproc, BigQuery, and Cloud Storage.
- Strong experience with Apache Spark and Apache Flink for distributed data processing.
- Knowledge of real-time data streaming technologies (e.g., Apache Kafka, Pub/Sub).
- Familiarity with data orchestration tools like Apache Airflow or Cloud Composer.
- Expertise in Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager.
- Experience with CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Strong scripting skills for automation (e.g., Bash, Python).
- Experience with monitoring tools like Cloud Monitoring, Prometheus, and Grafana.
- Familiarity with logging tools like Cloud Logging or ELK Stack.
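As one concrete illustration of the BigQuery skills listed above, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical placeholders.

```python
from google.cloud import bigquery

# Assumptions: application-default credentials are configured and the
# project/dataset/table names below are placeholders for illustration.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-analytics-project.app_events.raw_events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

job = client.query(query)          # starts the query job
for row in job.result():           # blocks until the job finishes
    print(f"{row.event_date}: {row.events} events")
```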
Posted 2 weeks ago
10.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
About the Role: We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions.
Key Responsibilities:
- Design & Implementation: Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design.
- AWS Services Expertise: Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito.
- Infrastructure as Code (IaC): Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD.
- Serverless & Event-Driven Design: Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems.
- Cloud Monitoring & Observability: Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health.
- Security & Compliance: Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms.
- Cost Optimization: Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness).
- Scalability & Resilience: Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance.
- CI/CD Pipeline: Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery.
- Documentation & Workflow Design: Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures.
- Cross-Functional Collaboration: Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions.
- AWS Best Practices: Advocate for and ensure adherence to AWS best practices across all development and operational activities.
Required Skills & Experience:
- 10+ years of hands-on experience as an AWS Engineer or in a similar role.
- Deep expertise in AWS services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS.
- Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus.
- Extensive experience with serverless architecture and event-driven design.
- Strong understanding of cloud monitoring and observability tools: CloudWatch Logs, X-Ray, custom metrics.
- Proven ability to implement and enforce security and compliance measures, including IAM role boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, isolation patterns, and access control.
- Demonstrated experience with cost optimization techniques (S3 lifecycle policies, serverless tiers, service selection).
- Expertise in designing and implementing scalability and resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers).
- Familiarity with CI/CD pipeline concepts.
- Excellent documentation and workflow design skills.
- Exceptional cross-functional collaboration abilities.
- Commitment to implementing AWS best practices.
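The S3 lifecycle policies listed under cost optimization above can be expressed with a short boto3 call. A minimal sketch; the bucket name, prefix, and transition/expiry days are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Assumption: the bucket and prefix are placeholders; tune the day thresholds to policy.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs-bucket",
    LifecycleConfiguration=lifecycle,
)
print("Lifecycle rules applied")
```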
Posted 2 weeks ago
6.0 - 11.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: Java AWS Developer
Experience: 6-12 Years
Location: Bangalore
- Experience in Java, J2EE, Spring Boot.
- Experience in design, Kubernetes, and AWS (EKS, EC2) is needed.
- Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Experience with Web Services and SOA (SOAP as well as RESTful with JSON formats), and with messaging (Kafka).
- Hands-on with development and test automation tools/frameworks (e.g., BDD and Cucumber).
Posted 2 weeks ago
4.0 - 7.0 years
22 - 25 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Sr. Development Engineer - Cloud Backend
Company: Bluecopa
Location: Onsite - Hyderabad, India
Industry: Financial Services
Function: Information Technology
Experience Required: 4-7 years
Salary Range: ₹22 - ₹25 LPA
Working Days: 5 days/week
Education: Bachelor's Degree (Graduation mandatory)
Age Limit: Up to 35 years
About the Company
Bluecopa is a fast-growing financial operations automation platform helping finance teams eliminate spreadsheet chaos. The platform integrates real-time data, automates workflows, and delivers insights for smarter business decisions. Backed by top-tier investors and driven by a leadership team with deep enterprise software expertise.
Role Overview
As a Sr. Development Engineer - Cloud Backend, you'll play a pivotal role in designing and developing scalable backend services and deploying them on cloud-native platforms. You'll be part of a highly skilled team working with Kubernetes and major cloud platforms like GCP (preferred), AWS, Azure, or OCI.
Key Responsibilities
- Develop scalable and reliable backend systems and cloud-native applications.
- Build and maintain RESTful APIs, microservices, and asynchronous systems.
- Manage deployments and operations on Kubernetes.
- Implement CI/CD pipelines and infrastructure automation.
- Collaborate closely with DevOps, frontend, and product teams.
- Ensure clean, maintainable code through testing and documentation.
Mandatory Skills
- Kubernetes: minimum 2 years of hands-on production experience.
- Cloud platforms: deep expertise in one of GCP (preferred), AWS, Azure, or OCI.
- Backend programming: proficient in Python, Java, or Kotlin (at least one).
- Strong backend architecture and microservices experience.
- Docker, containerization, and cloud-native deployments.
Preferred Skills
- Experience with multiple cloud platforms.
- Infrastructure as Code: Terraform, CloudFormation.
- Observability: Prometheus, Grafana, Cloud Monitoring.
- Experience with data platforms like BigQuery or Snowflake.
Nice-to-Have
- Exposure to NoSQL, event-driven architectures, or serverless frameworks.
- Experience with managed data pipelines or data lake technologies.
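A minimal sketch of the kind of containerizable REST microservice described above, written with FastAPI (an assumption; the posting only asks for Python, Java, or Kotlin backend experience). The service name, routes, and in-memory store are placeholders; the health endpoints are the sort a Kubernetes deployment would probe.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="orders-service")  # hypothetical service name

# In-memory store stands in for a real database, for illustration only.
ORDERS: dict[int, dict] = {}

@app.get("/healthz")
def liveness() -> dict:
    """Liveness probe target for the Kubernetes deployment."""
    return {"status": "ok"}

@app.get("/readyz")
def readiness() -> dict:
    """Readiness probe target; a real service would check its dependencies here."""
    return {"status": "ready"}

@app.post("/orders/{order_id}")
def create_order(order_id: int, payload: dict) -> dict:
    if order_id in ORDERS:
        raise HTTPException(status_code=409, detail="order already exists")
    ORDERS[order_id] = payload
    return {"order_id": order_id, "stored": True}

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```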
Posted 3 weeks ago
0.0 years
9 - 14 Lacs
Noida
Work from Office
Required Skills:
- GCP Proficiency: Strong expertise in Google Cloud Platform (GCP) services and tools, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, Cloud Load Balancing, IAM, Google Workflows, Google Cloud Pub/Sub, App Engine, Cloud Functions, Cloud Run, API Gateway, Cloud Build, Cloud Source Repositories, Artifact Registry, Google Cloud Monitoring, Logging, and Error Reporting.
- Cloud-Native Applications: Experience in designing and implementing cloud-native applications, preferably on GCP.
- Workload Migration: Proven expertise in migrating workloads to GCP.
- CI/CD Tools and Practices: Experience with CI/CD tools and practices.
- Python and IaC: Proficiency in Python and Infrastructure as Code (IaC) tools such as Terraform.
Responsibilities:
- Cloud Architecture and Design: Design and implement scalable, secure, and highly available cloud infrastructure solutions using Google Cloud Platform (GCP) services and tools such as Compute Engine, Kubernetes Engine, Cloud Storage, Cloud SQL, and Cloud Load Balancing.
- Cloud-Native Applications Design: Develop high-level architecture designs and guidelines for the development, deployment, and lifecycle management of cloud-native applications on GCP, ensuring they are optimized for security, performance, and scalability using services like App Engine, Cloud Functions, and Cloud Run.
- API Management: Develop and implement guidelines for securely exposing interfaces exposed by the workloads running on GCP, along with granular access control using the IAM platform, RBAC platforms, and API Gateway.
- Workload Migration: Lead the design and migration of on-premises workloads to GCP, ensuring minimal downtime and data integrity.
Skills (competencies)
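For the Google Cloud Pub/Sub piece listed above, here is a minimal publisher sketch with the google-cloud-pubsub client; the project ID, topic ID, and event payload are placeholders, not from the posting.

```python
import json
from google.cloud import pubsub_v1

# Assumptions: application-default credentials are set up; project/topic are placeholders.
PROJECT_ID = "my-gcp-project"
TOPIC_ID = "workload-events"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

event = {"workload": "billing-batch", "status": "migrated", "region": "asia-south1"}
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),  # payload must be bytes
    source="migration-pipeline",             # attributes are plain string key/values
)
print(f"Published message id: {future.result()}")  # blocks until the publish is acknowledged
```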
Posted 1 month ago
8.0 - 10.0 years
45 - 55 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Navplus)
(Note: This is a requirement for one of Uplers' clients - Emedgene - An illumina company)
What do you need for this opportunity?
Must-have skills required: Cloud monitoring, DSL, CI/CD, pytest, REST API, Docker, Kubernetes, MySQL, Python, SaaS
Emedgene - An illumina company is looking for: Automation Software Engineer Lead
Emedgene utilizes artificial intelligence and genomic data science to accelerate medical research and guide healthcare decisions at an unprecedented scale. Our technology is rapidly being adopted by leading medical centers, research institutes, and clinical laboratories and is helping to save and improve lives every day. We are looking for the best and the brightest to share our innovative technology with the world.
Position Summary
This is not a traditional QA role. We are seeking a highly skilled software engineer with a strong foundation in Python and advanced software engineering concepts to design and build a domain-specific language (DSL) for automating complex testing scenarios. This role focuses on engineering solutions, not just writing test scripts, and requires a deep understanding of Python's advanced features and modern software design.
Responsibilities
- Architect and implement a custom automation framework that extends beyond traditional test scripts, including designing a DSL for automating manual test workflows.
- Drive the development of advanced testing solutions leveraging Python's core features such as metaprogramming, decorators, hooks, and concurrency.
- Develop scalable and maintainable test frameworks and integrate them into a robust CI/CD pipeline.
- Collaborate with development and product teams to review specifications (SRS) and ensure test automation aligns with system design and product goals.
- Optimize performance and reliability of test execution across APIs, databases, and microservices.
- Develop strategies to increase automation coverage across multiple layers, including integration and system-level testing.
- Mentor team members in advanced Python techniques, best practices, and automation design patterns.
- Continuously analyze and improve testing workflows and processes for greater efficiency and scalability.
Qualifications
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of software engineering experience with at least 4+ years of advanced Python development, including experience with metaprogramming, concurrency (e.g., asyncio, threading), and Python internals.
- Expertise in building frameworks with pytest, including advanced use of hooks, fixtures, and plugins.
- Strong understanding of REST API testing, including schema validation, HTTP protocols, and error handling.
- Proficient in RDBMS concepts, preferably MySQL, including schema design, query optimization, and performance tuning.
- Hands-on experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitHub Actions).
- Familiarity with cloud platforms such as AWS and cloud monitoring tools like CloudWatch.
- Strong understanding of Agile methodologies and experience working in a fast-paced, iterative development environment.
- Exceptional problem-solving and analytical skills with a focus on system-wide impact and performance optimization.
Preferred Skills
- Experience with designing domain-specific languages (DSLs) or other advanced automation frameworks.
- Familiarity with containerized environments (e.g., Docker, Kubernetes) and distributed systems testing.
- Experience with SaaS-based testing solutions and large-scale data processing systems.
Why Join Us
- Be part of a forward-thinking team developing industry-leading healthcare solutions.
- Work on challenging projects that directly impact lives.
- Collaborate with talented individuals in a dynamic and innovative environment.
- Familiarity with Agile development methodologies and a track record of delivering high-quality software in a fast-paced environment.
- Familiarity with SaaS and cloud platform tools such as AWS CloudWatch.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!
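The pytest hooks and fixtures called out above look roughly like this in practice. A minimal illustrative conftest.py sketch; the fixture contents, marker name, and slow-test threshold are assumptions.

```python
# conftest.py -- illustrative pytest plugin code: one fixture and two hooks
import time
import pytest

@pytest.fixture
def api_client():
    """Yield-style fixture: set up a (stub) API client, tear it down afterwards."""
    client = {"base_url": "https://api.example.test", "session": object()}  # placeholder client
    yield client
    # teardown would close sessions / clean up test data here

def pytest_collection_modifyitems(config, items):
    """Collection hook: auto-mark tests whose node id mentions 'integration'."""
    for item in items:
        if "integration" in item.nodeid:
            # register the 'integration' marker in pytest.ini to silence unknown-mark warnings
            item.add_marker(pytest.mark.integration)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    """Runtest hook wrapper: time each test and report those slower than 2 seconds."""
    start = time.monotonic()
    outcome = yield
    duration = time.monotonic() - start
    if duration > 2.0:
        print(f"\nSLOW TEST ({duration:.2f}s): {item.nodeid}")
    outcome.get_result()  # re-raises any exception from the test body
```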
Posted 1 month ago
7.0 - 10.0 years
16 - 27 Lacs
Bengaluru
Work from Office
Cloud Monitoring and Compliance Engineer - AM - BLR - J49183
You will have a wide range of responsibilities, which will include:
- Working alongside the content management team to provide visibility of compliance to security guardrails.
- Customizing and enhancing Cloud Security Posture Management and Cloud Workload Protection (Microsoft Defender for Cloud features) to meet KPMG-specific requirements.
- Onboarding additional tenants and cloud hosting providers to the service.
- Planning and implementation of automated remediation activities.
- Liaising with vendors to fully realize investment in their products and influence future roadmaps.
- Day-to-day management, troubleshooting, and housekeeping of the toolsets.
- Collaborating with other GISG teams to understand their requirements and look for new opportunities for the service.
- Ensuring work is completed in such a way as to comply with established compliance and other internal control requirements.
- Using DevOps to record all project tasks.
Requirements:
- Minimum of 7 years in IT, with 4+ years of experience working with a major cloud service provider.
- Bachelor's degree from an accredited college or university, or equivalent work experience, preferably in Computer Science or a related field.
- Robust technical and implementation knowledge of Cloud Security Posture Management technologies (Microsoft MDC, Twistlock, RedLock).
- Experienced in securing cloud environments and cloud systems, including topics around certification and compliance.
- Good understanding of API-based security and compliance standards.
- Understanding of exploits, malware, ransomware, etc., their creation, activation, and detection methods.
- Knowledge of web application architecture and system administration.
- Experienced in building complex custom RQL, KQL, or SQL queries.
- Experienced with Microsoft Azure, AWS, or GCP installation, configuration, and administration of security features and services.
- Programming experience with Python or PowerShell.
Qualification: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA
Posted 1 month ago
11.0 - 16.0 years
11 - 16 Lacs
Gurgaon, Haryana, India
On-site
How You Will Make an Impact / Job Responsibilities:
- Design, implement, and maintain highly available, scalable, and reliable infrastructure.
- Work closely with the client and deliver outcomes per the SoW and SLA.
- Architect, design, develop, deploy, and document new systems, or maintain existing systems, on cloud platforms based on specifications.
- Collaborate with, provide continuous guidance to, and influence design decisions of various Agile development teams for existing and new products and solutions with respect to cloud enablement, migration, and infrastructure needs, to adopt and promote cloud best practices and economies of scale.
- Engage customers to understand their use cases and operational needs, engage with cloud platform vendors to understand their technology roadmaps and evolution, and translate this into comprehensive technology guidance for the various product teams.
- Design for cloud security and compliance; ensure solution and operations reliability.
- Drive and conduct audits on current cloud utilization and recommend cloud optimization and security changes.
- Act as a mentor and technical consultant for the various teams.
Desired/Preferred Skills and Responsibilities:
- Platform Management: Oversee the architecture and management of GCP resources, particularly BigQuery, Databricks, and Kubernetes.
- Architecting Solutions: Design scalable and efficient data architectures tailored to specific business needs.
- Optimization of Workloads: Analyze and optimize data processing workloads to improve performance and reduce costs.
- Performance Measurement: Implement monitoring and performance metrics for data pipelines and applications.
- Cost Monitoring: Develop strategies for cost-effective resource usage, including budget tracking and optimization techniques.
- Collaboration: Work closely with data engineers, developers, and stakeholders to ensure seamless integration and functionality.
Required Skills:
- GCP Expertise: In-depth knowledge of Google Cloud Platform services, particularly BigQuery, Databricks, and Kubernetes.
- Data Engineering: Experience with data modeling, ETL processes, and data warehousing.
- Performance Tuning: Proven track record in optimizing data processing and query performance in BigQuery.
- Kubernetes Management: Proficient in deploying and managing containerized applications using Kubernetes.
- Cost Management: Familiarity with GCP billing and cost management tools to monitor and optimize cloud expenditure.
- Monitoring Tools: Experience with GCP monitoring and logging tools (e.g., Stackdriver, Cloud Monitoring).
Years of Experience: 11 to 16 years of experience, including 7+ years as a GCP Solution Architect.
Education Qualification: B.E/B.Tech, BCA, MCA or equivalent; GCP Professional Cloud Architect certification.
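For the GCP monitoring-tools requirement above, here is a minimal sketch that pulls an hour of CPU-utilization time series with the google-cloud-monitoring client; the project ID is a placeholder and the metric filter is just one illustrative choice.

```python
import time
from google.cloud import monitoring_v3

# Assumptions: application-default credentials are configured; the project ID is a placeholder.
PROJECT_ID = "my-gcp-project"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}  # last hour
)

series = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    instance = ts.resource.labels.get("instance_id", "unknown")
    latest = ts.points[0].value.double_value if ts.points else 0.0
    print(f"instance {instance}: latest CPU utilization {latest:.1%}")
```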
Posted 1 month ago
11.0 - 16.0 years
11 - 16 Lacs
Chennai, Tamil Nadu, India
On-site
How You Will Make an Impact / Job Responsibilities:
- Design, implement, and maintain highly available, scalable, and reliable infrastructure.
- Work closely with the client and deliver outcomes per the SoW and SLA.
- Architect, design, develop, deploy, and document new systems, or maintain existing systems, on cloud platforms based on specifications.
- Collaborate with, provide continuous guidance to, and influence design decisions of various Agile development teams for existing and new products and solutions with respect to cloud enablement, migration, and infrastructure needs, to adopt and promote cloud best practices and economies of scale.
- Engage customers to understand their use cases and operational needs, engage with cloud platform vendors to understand their technology roadmaps and evolution, and translate this into comprehensive technology guidance for the various product teams.
- Design for cloud security and compliance; ensure solution and operations reliability.
- Drive and conduct audits on current cloud utilization and recommend cloud optimization and security changes.
- Act as a mentor and technical consultant for the various teams.
Desired/Preferred Skills and Responsibilities:
- Platform Management: Oversee the architecture and management of GCP resources, particularly BigQuery, Databricks, and Kubernetes.
- Architecting Solutions: Design scalable and efficient data architectures tailored to specific business needs.
- Optimization of Workloads: Analyze and optimize data processing workloads to improve performance and reduce costs.
- Performance Measurement: Implement monitoring and performance metrics for data pipelines and applications.
- Cost Monitoring: Develop strategies for cost-effective resource usage, including budget tracking and optimization techniques.
- Collaboration: Work closely with data engineers, developers, and stakeholders to ensure seamless integration and functionality.
Required Skills:
- GCP Expertise: In-depth knowledge of Google Cloud Platform services, particularly BigQuery, Databricks, and Kubernetes.
- Data Engineering: Experience with data modeling, ETL processes, and data warehousing.
- Performance Tuning: Proven track record in optimizing data processing and query performance in BigQuery.
- Kubernetes Management: Proficient in deploying and managing containerized applications using Kubernetes.
- Cost Management: Familiarity with GCP billing and cost management tools to monitor and optimize cloud expenditure.
- Monitoring Tools: Experience with GCP monitoring and logging tools (e.g., Stackdriver, Cloud Monitoring).
Years of Experience: 11 to 16 years of experience, including 7+ years as a GCP Solution Architect.
Education Qualification: B.E/B.Tech, BCA, MCA or equivalent; GCP Professional Cloud Architect certification.
Posted 1 month ago
11.0 - 16.0 years
11 - 16 Lacs
Mumbai, Maharashtra, India
On-site
How You Will Make an Impact / Job Responsibilities:
- Design, implement, and maintain highly available, scalable, and reliable infrastructure.
- Work closely with the client and deliver outcomes per the SoW and SLA.
- Architect, design, develop, deploy, and document new systems, or maintain existing systems, on cloud platforms based on specifications.
- Collaborate with, provide continuous guidance to, and influence design decisions of various Agile development teams for existing and new products and solutions with respect to cloud enablement, migration, and infrastructure needs, to adopt and promote cloud best practices and economies of scale.
- Engage customers to understand their use cases and operational needs, engage with cloud platform vendors to understand their technology roadmaps and evolution, and translate this into comprehensive technology guidance for the various product teams.
- Design for cloud security and compliance; ensure solution and operations reliability.
- Drive and conduct audits on current cloud utilization and recommend cloud optimization and security changes.
- Act as a mentor and technical consultant for the various teams.
Desired/Preferred Skills and Responsibilities:
- Platform Management: Oversee the architecture and management of GCP resources, particularly BigQuery, Databricks, and Kubernetes.
- Architecting Solutions: Design scalable and efficient data architectures tailored to specific business needs.
- Optimization of Workloads: Analyze and optimize data processing workloads to improve performance and reduce costs.
- Performance Measurement: Implement monitoring and performance metrics for data pipelines and applications.
- Cost Monitoring: Develop strategies for cost-effective resource usage, including budget tracking and optimization techniques.
- Collaboration: Work closely with data engineers, developers, and stakeholders to ensure seamless integration and functionality.
Required Skills:
- GCP Expertise: In-depth knowledge of Google Cloud Platform services, particularly BigQuery, Databricks, and Kubernetes.
- Data Engineering: Experience with data modeling, ETL processes, and data warehousing.
- Performance Tuning: Proven track record in optimizing data processing and query performance in BigQuery.
- Kubernetes Management: Proficient in deploying and managing containerized applications using Kubernetes.
- Cost Management: Familiarity with GCP billing and cost management tools to monitor and optimize cloud expenditure.
- Monitoring Tools: Experience with GCP monitoring and logging tools (e.g., Stackdriver, Cloud Monitoring).
Years of Experience: 11 to 16 years of experience, including 7+ years as a GCP Solution Architect.
Education Qualification: B.E/B.Tech, BCA, MCA or equivalent; GCP Professional Cloud Architect certification.
Posted 1 month ago
4.0 - 9.0 years
4 - 9 Lacs
Bengaluru, Karnataka, India
On-site
Maintain the existing Azure infrastructure in a stable state and build automated monitoring and administration tooling, including:
- Automating VM and other service provisioning and other tasks in the cloud.
- Setting up auto-scaling for applications located in the cloud.
- Building one-off processes to manage infrastructure and storage.
- Monitoring automated backup and restore of cloud services.
- Securing and hardening VMs and networks.
- Monitoring cloud infrastructure and effectively utilizing, optimizing, and supporting it.
- Maintenance and support, including regular upgrades and housekeeping activities.
- Building effective infrastructure redundancy and failover processes.
- Identifying and implementing opportunities to automate and optimize the efforts, where feasible.
- Participating in infrastructure security reviews, access control reviews, and audit reviews.
Posted 1 month ago
7.0 - 12.0 years
3 - 7 Lacs
Pune
Work from Office
Key responsibilities
- Monitor cloud infrastructure: ensure the availability, performance, and security of applications.
- Identify and resolve issues: identify and resolve performance-related issues such as performance bottlenecks.
- Implement monitoring: configure and maintain cloud monitoring via tools such as CloudWatch.
- Develop dashboards and reports: create dashboards to monitor the systems and build customized reports.
- Optimize cloud resources: find opportunities to maximize resource utilization, cost savings, and performance improvement.
Key skills
- Cloud experience: experience with cloud platforms like AWS.
- Monitoring tools: knowledge of monitoring tools such as CloudWatch.
- Troubleshooting: strong troubleshooting skills to identify and resolve cloud-related issues.
- Communication skills: excellent communication skills.
- Certification: preferred, in a related field.
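In the spirit of the CloudWatch monitoring duties above, here is a minimal sketch that pulls a metric with boto3; the region, instance ID, period, and the 80% threshold are placeholders, not from the posting.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Assumption: the instance ID is a placeholder; average CPU over the last 3 hours, 5-minute periods.
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(hours=3),
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "HIGH" if point["Average"] > 80 else "ok"
    print(f'{point["Timestamp"]:%H:%M} avg CPU {point["Average"]:.1f}% {flag}')
```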
Posted 1 month ago
8.0 - 13.0 years
3 - 7 Lacs
Hyderabad
Work from Office
- Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services, with SOA experience (SOAP as well as RESTful with JSON formats) and messaging (Kafka).
- Working proficiency with enterprise development toolsets like Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc.
- Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Working with Git/Bitbucket, Maven, Gradle, and Jenkins tools to build and deploy code to production environments.
Posted 1 month ago
12.0 - 17.0 years
8 - 13 Lacs
Bengaluru
Work from Office
- Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services, with SOA experience (SOAP as well as RESTful with JSON formats) and messaging (Kafka).
- Working proficiency with enterprise development toolsets like Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc.
- Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Working with Git/Bitbucket, Maven, Gradle, and Jenkins tools to build and deploy code to production environments.
Posted 1 month ago
9.0 - 14.0 years
10 - 15 Lacs
Bengaluru
Work from Office
- Expertise in development using Core Java, J2EE, Spring Boot, Microservices, and Web Services, with SOA experience (SOAP as well as RESTful with JSON formats) and messaging (Kafka).
- Working proficiency with enterprise development toolsets like Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc.
- Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Working with Git/Bitbucket, Maven, Gradle, and Jenkins tools to build and deploy code to production environments.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: We are seeking a skilled and proactive GCP Cloud Engineer with 3-5 years of hands-on experience in managing and optimizing cloud infrastructure using Google Cloud Platform (GCP). The ideal candidate will be responsible for designing, deploying, and maintaining secure and scalable cloud environments, collaborating with cross-functional teams, and driving automation and reliability across our cloud infrastructure.
Key Responsibilities:
- Design and implement cloud-native solutions on Google Cloud Platform.
- Deploy and manage infrastructure using Terraform, Cloud Deployment Manager, or similar IaC tools.
- Manage GCP services such as Compute Engine, GKE (Kubernetes), Cloud Storage, Pub/Sub, Cloud Functions, BigQuery, etc.
- Optimize cloud performance, cost, and scalability.
- Ensure security best practices and compliance across the GCP environment.
- Monitor and troubleshoot issues using Stackdriver / Cloud Monitoring.
- Collaborate with development, DevOps, and security teams.
- Automate workflows and CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build.
Technical Requirements:
- 3-5 years of hands-on experience with GCP.
- Strong expertise in Terraform, GCP networking, and cloud security.
- Proficient in container orchestration using Kubernetes (GKE).
- Experience with CI/CD and DevOps practices, and shell scripting or Python.
- Good understanding of IAM, VPCs, firewall rules, and service accounts.
- Familiarity with monitoring and logging tools like Stackdriver or Prometheus.
- Strong problem-solving and troubleshooting skills.
Additional Responsibilities:
- GCP Professional certification (e.g., Professional Cloud Architect, Cloud Engineer).
- Experience with hybrid-cloud or multi-cloud architecture.
- Exposure to other cloud platforms (AWS, Azure) is a plus.
- Strong communication and teamwork skills.
Preferred Skills: Cloud Platform -> Google Cloud Platform Developer -> GCP/Google Cloud, Java, Java -> Spring Boot, .Net, Python
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
.Net Fullstack (Angular/React) + Azure
Over 6+ years of experience. Technology: .NET/C# experience, cloud (Azure); worked in app modernization projects.
- Design, develop, and maintain scalable and secure full-stack applications leveraging cloud-native technologies and services.
- Implement frontend interfaces using modern frameworks and libraries, ensuring responsive and intuitive user experiences.
- Develop robust backend services and APIs, integrating cloud data storage and processing solutions for high performance and reliability.
- Automate infrastructure provisioning and application deployment using infrastructure as code (IaC) and continuous integration/continuous deployment (CI/CD) pipelines.
- Ensure application and data security by implementing best practices in authentication, authorization, encryption, and compliance standards.
- Optimize application performance across both the frontend and backend, utilizing cloud monitoring and optimization tools.
- Collaborate with cross-functional teams to define requirements, design solutions, and deliver features that meet user and business needs.
Mandatory Skills: Full Stack, Microsoft Cloud Azure.
Experience: 5-8 Years.
Posted 1 month ago
8.0 - 10.0 years
7 - 11 Lacs
Pune
Work from Office
Contract Job Title: Incident & Operations Manager (DevOps Background)
Responsibilities:
1. Incident Response & Triage
- Develop and maintain incident response plans for various types of incidents.
- Ensure timely resolution of tickets by the Operations team within agreed SLAs.
- Assess and prioritize incidents based on severity and initiate response actions accordingly.
2. Incident Documentation & Analysis
- Maintain detailed incident records including impact, nature, and resolution.
- Perform post-incident analysis to identify root causes and preventive actions.
3. Customer Satisfaction & Reporting
- Monitor and improve customer satisfaction through efficient operations.
- Address customer complaints and conduct monthly SLA review meetings.
4. DevOps Background (Prior Hands-on Experience)
- Experience with Linux administration, shell scripting, and troubleshooting.
- Managed CI/CD pipelines, Kubernetes clusters, and cloud infrastructure.
- Knowledge of virtualization (VMware/KVM) and private cloud environments.
- Familiarity with tools like Docker, Jenkins, Ansible, and cloud monitoring.
5. Leadership & Coordination
- Lead cross-functional teams and bridge the gap between DevOps and Operations.
- Ensure clear communication with stakeholders and technical teams.
Location: Pune, Maharashtra, India
24/7 Operations (Yes/No): Yes
General Shift (Yes/No): As per the client's requirement
All Shift Timings: As per roster
Rates including mark-up: 130K/M
RESPONSIBILITIES
- Ensure each and every change is recorded and approved before implementation.
- Ensure changes are categorized and approved as per the defined process, based on the change category: Standard, Normal, Expedited, Emergency.
- Convene and chair CAB meetings; circulate the MOM of the CABs.
- Ensure no unauthorized change is implemented which may potentially impact production.
- Ensure periodical audits are in place as per Wipro and customer processes, and close the audit gaps in the agreed timelines.
- Report to management on the agreed change KPIs and ensure effective change communication is in place.
- Ensure change implementation is done as per the implementation plan with no manual errors, by setting up a 4-eye review for each of the changes.
- Ensure each change is assigned the risk involved, and ensure Wipro and customer processes are followed in case of high-risk / high-impact changes.
- Conduct post-implementation reviews and validate the change status against the defined change success criteria.
- Bring in service improvements to improve the overall process maturity.
KEY SKILLS AND COMPETENCIES
- 8-10 years of ITSM experience in Change and other processes.
- ITIL V3 / 2011 Foundation or Intermediate certification.
- Capable of collaborating with multiple technical towers, facing the customer, and coordinating with vendors.
- Effective communication skills.
Posted 1 month ago