
1098 Monitoring Tools Jobs - Page 26

JobPe aggregates results for easy access; you apply directly on the original job portal.

15.0 - 20.0 years

50 - 60 Lacs

Bahraich, Shrawasti

Work from Office

ABOUT THE HANS FOUNDATION
Over its 15 years of existence, THF has reached more than 35 million beneficiaries through its programs. THF works with communities through direct implementation of projects on the ground, in addition to providing local management and monitoring support to not-for-profit organisations in India funded through THF USA and RIST.

GENERAL
Job Title: Tutor
Location of Job: Bahraich / Shravasti
No. of Positions: 1
Job Type: Part-time, on a 1-year consultant contract basis
Department: Programme
Project: Hans Education Programme-UP
Reporting to: LSE Mentors / Project Coordinator

Position Overview: The Tutor will provide academic support to students both within school hours and outside school settings through Community-Based Learning Centres (CBLs). The role focuses on implementing remedial education programs, fostering academic improvement, and addressing educational gaps. The incumbent will work closely with students, parents, and community stakeholders to promote retention in schools and enhance academic outcomes.

Key Responsibilities
A. Academic Support Within Schools
Targeted Remedial Learning: Conduct remedial classes focusing on Science (Mathematics, Physics, Chemistry) and Language skills (English). Design and implement personalized learning plans for students based on their academic needs. Assist students with test preparation, including reviewing content, administering practice tests, and teaching study strategies.
Confidence Building: Develop and facilitate activities aimed at enhancing students' confidence and academic performance. Provide consistent and constructive feedback to foster motivation and engagement.
Classroom Support: Collaborate with schoolteachers to align remedial teaching strategies with regular classroom instruction. Monitor and assess students' academic progress during school hours to ensure learning objectives are met.

B. Community-Based Learning Centres (CBLs)
Remedial Education: Deliver targeted remedial classes to address academic gaps in Mathematics, Physics, Chemistry, and English. Conduct spoken English and communication skill sessions to enhance students' oral and verbal abilities. Incorporate career preparation and life skills training into education sessions to support holistic development.
Community Engagement Activities: Conduct door-to-door surveys to identify and enroll out-of-school children. Build community awareness by organizing sensitization sessions with parents to emphasize the importance of education. Actively engage with the community to ensure every out-of-school child is enrolled in suitable educational programs.
Empowering Students: Provide personalized guidance and mentorship to support students' academic and personal growth. Collaborate with mentors to ensure all enrolled students receive comprehensive support.

C. Monitoring and Reporting
Progress Tracking: Maintain accurate academic records for each student, documenting their progress and challenges. Use monitoring tools to assess the quality of remedial education sessions and identify areas for improvement.
Reporting: Submit attendance records for students in remedial classes and CBLs. Prepare and share reports on home visits, parental meetings, and CBL activities with relevant stakeholders.
Quality Assurance: Collaborate with mentors and coordinators to ensure adherence to program quality standards. Participate in regular evaluations and feedback sessions to improve program delivery.

Qualifications
Bachelor's degree in Education, Science, English, Social Work, or a related field. A Master's degree in a relevant subject will be preferred.
Experience: 1-2 years of teaching experience, preferably in remedial education or community engagement. Prior experience working with schoolchildren, particularly in underserved communities.
Skills: Proficiency in Mathematics, Science, and English. Strong communication and interpersonal skills, especially in mentoring students. Familiarity with MS Word and Excel for maintaining academic records and preparing reports. Ability to conduct community outreach and build relationships with diverse stakeholders.

Posted 1 month ago


5.0 - 10.0 years

6 - 10 Lacs

Bengaluru

Work from Office

The principal objective of this position is to provide expert-level support for the APAC & EMEA region's virtualization environment. Provide 1st, 2nd, and 3rd line support, ensuring full availability of the environment during production hours. The engineer will play an active role in all Safety & Trust initiatives as well as day-to-day support activities, administration, and all projects associated with virtualized infrastructures and their associated applications. By leveraging technologies such as Kubernetes, Ansible, and Python, the team plays a pivotal role in enabling seamless virtualization services that are essential for our internal operations. The team's work directly impacts the efficiency and effectiveness of our cloud-based solutions, ensuring that our banking and financial services remain at the forefront of technological innovation.

Responsibilities

Direct Responsibilities
Maintain top-notch technical skills and business knowledge through proactive self-learning, in addition to any formal instructor-led training provided by BNP Paribas.
Incident Management: Review and investigate open incidents to identify root causes. Diagnose issues stemming from software bugs, Kubernetes environment problems, or virtualization infrastructure issues. Document incident resolutions and create detailed resolution sheets.
Development and Maintenance: Develop new features and functionalities for REST microservices. Improve and optimize existing features to enhance performance and reliability. Ensure the smooth operation of virtualization APIs that enable the creation, modification, and destruction of virtual resources.
Documentation: Thoroughly document all developments, including code and design decisions. Maintain up-to-date documentation for incident resolutions and best practices.
Collaboration and Communication: Work closely with the sub-team of developers and the broader virtualization administration team. Participate in daily meetings to provide updates on progress, discuss challenges, and plan next steps. Foster a collaborative environment by actively listening, being available, and showing respect to team members.
Continuous Improvement: Stay updated with the latest trends and technologies in Python, Kubernetes, containerization, and virtualization. Contribute to the continuous improvement of our virtualization services and infrastructure.
These responsibilities are essential for maintaining the high standards of our private cloud environment and ensuring that our virtualization services meet the demands of our global operations.

Contributing Responsibilities
Degree in Computer Science, Information Technology, or a related field.
5-10 years of experience in a development role, with a focus on Python and virtualization technologies. Experience working in the financial services industry is a plus.
Contribute to improving IT standards and policies.
Proficient in English. Good interpersonal and communication skills.

Technical & Behavioral Competencies
Required Technical Knowledge/Skills (at least 5+ years of technical experience in as many of these tools as possible):
- Programming: Basic knowledge of Python.
- Containerization: Basic understanding of Kubernetes and containerization technologies.
- Automation: Basic skills in Ansible.
- Kubernetes: Manage Kubernetes resources such as pods, services, deployments, and others.
- Cloud Services: Basic understanding of REST microservices and their role in cloud infrastructure.
- Monitoring Tools: Familiarity with system performance and monitoring tools.
- Virtualization: Familiarity with virtualization concepts and technologies; knowledge of governance, server build, and administration of IT services in cloud-based environments.
Develop and maintain system documentation, including configuration guides and standard operating procedures.
Direct and be responsible for the implementation effort. Provide technical guidance and mentorship to team members.
Assess demand for their service or technology area, develop plans to meet future capacity needs, and make recommendations to the manager.
Aware of all critical changes to infrastructure and applications that could impact service delivery to their business customers.
Able to work autonomously and as part of a team, using strong analytical skills.
Be service-oriented, customer-focused, positive, committed, and have an enthusiastic can-do attitude.
Demonstrate a systematic and logical approach to problem-solving.
Able to follow the bank's standards, processes, and procedures.
Escalate incidents internally or to 3rd-party partners when required.
ITIL 4 Foundation qualification.

Requirements
English (mandatory)
Expertise in high-availability environments
DevOps methodologies
Cloud experience
Eager to learn
Analytical mindset
Ability to work well under pressure, both autonomously and in a team environment
Good interpersonal and communication skills
Experience delivering strategic priorities within strict timelines
Experience with production environments and support

Specific Qualifications (if required)
CKAD (Certified Kubernetes Application Developer)

Skills Referential
Behavioural Skills: Ability to collaborate / Teamwork; Communication skills - oral & written; Client focused; Ability to deliver / Results driven
Transversal Skills: Ability to understand, explain and support change; Ability to develop and adapt a process; Ability to manage a project; Analytical ability; Ability to develop others & improve their skills
Education Level: Bachelor Degree or equivalent
Experience Level: At least 15 years
Other/Specific Qualifications (if required): Participating in 7x24 standby (1 week per month). Evaluate emerging system technologies and align their benefits with the business strategy of BNPP.
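For context on the kind of work this role describes (Python tooling around Kubernetes resources), a minimal sketch is shown below; the pod names and phases are hypothetical, and a real implementation would use the official Kubernetes Python client rather than plain dicts:

```python
# Illustrative sketch only: flag pods whose phase indicates a problem.
# Pod data here is hypothetical; a real tool would query the cluster API.
HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods(pods):
    """Return names of pods not in a healthy phase."""
    return [p["name"] for p in pods if p.get("phase") not in HEALTHY_PHASES]

pods = [
    {"name": "api-7f9c", "phase": "Running"},
    {"name": "worker-2b1d", "phase": "CrashLoopBackOff"},
    {"name": "batch-9e0a", "phase": "Succeeded"},
]
print(unhealthy_pods(pods))  # ['worker-2b1d']
```

The same filter-and-report shape underlies most of the incident-triage tooling the posting alludes to.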

Posted 1 month ago


5.0 - 10.0 years

3 - 6 Lacs

Mumbai

Work from Office

Responsible for L2 support activities for applications used as part of a global payment solution. This is an extended team that works along with the team located in Paris. Shift working to support an application implemented globally. Shift timings: 07:00 AM-03:30 PM / 09:30 AM-06:00 PM / 01:30 PM-10:00 PM. Also required to provide on-call support during weekends or weekdays on a rotation basis, with flexibility to support the application on Mumbai bank holidays on a rotation basis.

Responsibilities

Direct Responsibilities
L2 production support activity using Unix, SQL, and Dynatrace.
Understanding of application architecture.
Proactive monitoring using tools such as Dynatrace and Splunk.
Deployments of the application to the production environment.
Daily health-check reporting and active monitoring.
Knowledge of monitoring tools such as Autosys will be an added advantage.
Shell scripting (mandatory).
Develop APS jobs on Ansible Tower.
Implementation of improvements to prevent incidents, and maintenance of accurate documentation.
Effective problem and change management.
Automation of tasks and ongoing continuous improvement.

Contributing Responsibilities
Responsible for Incident/Change/Problem Management.
Responsible for driving meetings for support-related activities.

Technical & Behavioral Competencies
Mandatory Skills:
Linux - certified; Ansible - very good level required, certification nice to have
Oracle, SQL
Managing Java applications
Good to Have:
Autosys/MQ
Kubernetes
DevOps / ServiceNow / Dynatrace
Strong written and verbal communication skills
Good knowledge of Unix and Oracle PL/SQL
ITIL process knowledge
Ability to work in shifts and flexible hours on holidays and weekends in exigency situations
Self-motivated, with a strong ability to work both independently and with the team
Strong analytical skills
ITIL certification preferred
Prior knowledge of application production support
Knowledge of payment and finance domain applications

Specific Qualifications (if required)
Graduate in any discipline or Master's in Information Technology. Overall 3-5 years of IT experience, of which a minimum of 3 years should be in application production support in the banking domain.

Skills Referential
Behavioural Skills: Ability to collaborate / Teamwork; Ability to deliver / Results driven; Creativity & Innovation / Problem solving
Transversal Skills: Ability to understand, explain and support change; Analytical ability; Ability to develop and adapt a process; Ability to set up relevant performance indicators; Ability to manage / facilitate a meeting, seminar, committee, training
Education Level: Bachelor Degree or equivalent
Experience Level: At least 3 years
Other/Specific Qualifications (if required): -
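As an illustration of the daily health-check and triage work this posting describes, a minimal Python sketch is shown below; the log format and component names are hypothetical, not taken from the employer's systems:

```python
# Illustrative sketch: summarize ERROR entries per component from lines
# shaped like "LEVEL component: message" (a hypothetical log format).
from collections import Counter

def error_summary(log_lines):
    """Count ERROR entries per component."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "ERROR":
            counts[parts[1].rstrip(":")] += 1
    return dict(counts)

log = [
    "INFO payments: batch started",
    "ERROR payments: timeout contacting gateway",
    "ERROR swift: malformed message",
    "ERROR payments: timeout contacting gateway",
]
print(error_summary(log))  # {'payments': 2, 'swift': 1}
```

In practice the same summary would typically come from a shell one-liner or a Splunk/Dynatrace query; the sketch just shows the shape of the report.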

Posted 1 month ago


8.0 - 14.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Assume a vital position as a key member of a high-performing team that delivers infrastructure and performance excellence. Your role will be instrumental in shaping the future at one of the world's largest and most influential companies. As a Lead Infrastructure Engineer at JPMorgan Chase within Consumer and Community Banking, you apply deep knowledge of software, applications, and technical processes within the infrastructure engineering discipline, and continue to evolve your technical and cross-functional knowledge outside of your aligned domain of expertise.

Job responsibilities
Applies technical expertise and problem-solving methodologies to projects of moderate scope
Drives a workstream or project consisting of one or more infrastructure engineering technologies
Works with other platforms to architect and implement changes required to resolve issues and modernize the organization and its technology processes
Executes creative solutions for the design, development, and technical troubleshooting of problems of moderate complexity
Strongly considers upstream/downstream data and systems or technical implications and advises on mitigation actions
Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills
Formal training or certification on infrastructure disciplines and 5+ years applied experience
Deep knowledge of one or more areas of infrastructure engineering such as hardware, networking terminology, databases, storage engineering, deployment practices, integration, automation, scaling, resilience, or performance assessments
Deep knowledge of one specific infrastructure technology and scripting languages (e.g., Python)
Drive to continue developing technical and cross-functional knowledge outside of the product
Deep knowledge of cloud infrastructure and multiple cloud technologies, with the ability to operate in and migrate across public and private clouds
Experience with at least one of the following cloud platforms for deployment: AWS, Cloud Foundry, or Kubernetes
Proficiency in leveraging a programming language (Python or Java) for automation and data preprocessing techniques, including data pipelines, normalization, and streaming to data stores
Experience with at least one of the following log aggregation tools: Splunk, Datadog, Cloud Stream, or the ELK Stack
Knowledge of Active Directory Federation Services solutions: OIDC, SAML, or x509

Preferred qualifications, capabilities, and skills
AWS cloud-native application deployment, including infrastructure as code using Terraform, and Python or Java for Lambda and automation
Robust understanding of AWS infrastructure tools and resources such as EC2, Lambda, EKS/ECS, databases, and data warehouses
Understanding of 3rd-party vendor software installation, configuration, and integration
Familiarity with monitoring tools (Dynatrace, Splunk)
Certifications in Kubernetes or AWS
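The required qualifications above mention data preprocessing techniques such as normalization; as a minimal, self-contained illustration (not code from the employer), min-max normalization can be sketched as:

```python
# Illustrative sketch: min-max normalization, a common preprocessing step
# before streaming values into a data store or model.
def min_max_normalize(values):
    """Scale a list of numbers into [0, 1]; a constant series maps to 0.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([10, 20, 30]))  # [0.0, 0.5, 1.0]
```

In a real pipeline this would run per-feature over streaming batches, with the observed min/max persisted so new data is scaled consistently.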

Posted 1 month ago


9.0 - 13.0 years

13 - 14 Lacs

Bengaluru

Work from Office

Strong hands-on experience with AWS services (EC2, VPC, RDS, S3, IAM, Lambda, etc.) Proficiency in CloudFormation Experience with Linux , scripting (Bash/Python), and monitoring tools Knowledge of Docker and container orchestration

Posted 1 month ago


6.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

">MLOps Engineer 6-10 Years Bengaluru ML About the Role We are seeking a highly experienced and innovative Senior Machine Learning Engineer to join our AI/ML team. In this role, you will lead the design, development, deployment, and monitoring of scalable machine learning solutions using GCP Vertex AI , MLflow , and other modern ML tools. You ll work closely with data scientists, engineers, and product teams to bring intelligent systems into production that drive real business impact. Key Responsibilities Design, develop, and deploy end-to-end machine learning models in production environments using GCP Vertex AI Manage the full ML lifecycle including data preprocessing, model training, evaluation, deployment, and monitoring Implement and maintain MLflow pipelines for experiment tracking, model versioning, and reproducibility Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions Optimize model performance and scalability using best practices in MLOps and cloud-native architecture Develop reusable components and frameworks to accelerate ML development and deployment Monitor deployed models for drift, performance degradation, and retraining needs Ensure compliance with data governance, security, and privacy standards Required Skills & Qualifications 6+ years of experience in machine learning engineering or applied data science Strong proficiency in Python , SQL , and ML libraries such as scikit-learn , TensorFlow , or PyTorch Hands-on experience with GCP Vertex AI for model training, deployment, and pipeline orchestration Deep understanding of MLflow for experiment tracking, model registry, and lifecycle management Solid grasp of MLOps principles and tools (e.g., CI/CD for ML, Docker, Kubernetes) Experience with cloud data platforms (e.g., BigQuery, Cloud Storage) and distributed computing Strong problem-solving skills and ability to work independently in a fast-paced environment Excellent communication 
skills and ability to explain complex ML concepts to non-technical stakeholders Preferred Qualifications Experience with other cloud platforms (AWS SageMaker, Azure ML) is a plus Familiarity with feature stores, model monitoring tools, and data versioning systems Contributions to open-source ML projects or publications in ML conferences
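The role above centres on experiment tracking and model registry with MLflow; since MLflow itself is not assumed installed here, a toy stand-in illustrating the tracking idea (log parameters and metrics per run, then select the best run) might look like:

```python
# Toy stand-in for experiment tracking (MLflow-style idea, NOT the MLflow API).
import uuid

class ToyTracker:
    def __init__(self):
        self.runs = {}

    def log_run(self, params, metrics):
        """Record one training run's params and metrics; return its run id."""
        run_id = uuid.uuid4().hex[:8]
        self.runs[run_id] = {"params": params, "metrics": metrics}
        return run_id

    def best_run(self, metric, maximize=True):
        """Pick the run id with the best value of the given metric."""
        pick = max if maximize else min
        return pick(self.runs, key=lambda r: self.runs[r]["metrics"][metric])

tracker = ToyTracker()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81})
best = tracker.log_run({"lr": 0.01}, {"accuracy": 0.87})
assert tracker.best_run("accuracy") == best
```

Real MLflow adds persistence, artifact storage, and a model registry on top of exactly this log-then-compare loop.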

Posted 1 month ago


3.0 - 8.0 years

7 - 17 Lacs

Coimbatore

Remote

Role Overview
As an AWS DevOps Engineer, you'll own the end-to-end infrastructure lifecycle, from design and provisioning through deployment, monitoring, and optimization. You'll collaborate closely with development teams to implement Infrastructure as Code, build robust CI/CD pipelines, enforce security and compliance guardrails, and integrate next-gen tools like Google Gemini for automated code-quality and security checks.

Summary
DevOps Engineer with 3+ years of experience in AWS infrastructure, CI/CD, and IaC, capable of designing secure, production-grade systems with zero-downtime deployments. The ideal candidate excels in automation, observability, and compliance within a collaborative engineering environment.

Top Preferred Technologies:
Terraform – core IaC tool for modular infrastructure design
Amazon ECS/EKS (Fargate) – container orchestration and deployment
GitHub Actions / AWS CodePipeline + CodeBuild – modern CI/CD pipelines
Amazon CloudWatch – observability, custom metrics, and centralized logging
IAM, KMS & GuardDuty – access control, encryption, and threat detection
SSM Parameter Store – secure config and secret management
Python / Bash / Node.js – scripting, automation, and Lambda integration

Key Responsibilities
Infrastructure as Code (IaC): Design, build, and maintain Terraform (or CloudFormation) modules for VPCs, ECS/EKS clusters, RDS, ElastiCache, S3, IAM, KMS, and networking across multiple Availability Zones. Produce clear architecture diagrams (Mermaid or draw.io) and documentation.
CI/CD Pipeline Development: Implement GitHub Actions or AWS CodePipeline/CodeBuild workflows to run linting, unit tests, Terraform validation, Docker builds, and automated deployments (zero-downtime rolling updates) to ECS/EKS. Integrate unit tests (Jest, pytest) and configuration-driven services (SSM Parameter Store).
Monitoring & Alerting: Define custom CloudWatch metrics (latency, error rates), create dashboards, and centralize application logs in CloudWatch Logs with structured outputs and PII filtration. Implement CloudWatch Alarms with SNS notifications for key thresholds (CPU, replica lag, 5xx errors). Security & Compliance: Enable and configure GuardDuty and AWS Config rules (e.g., public-CIDR security groups, unencrypted S3 or RDS). Enforce least-privilege IAM policies, key-management with KMS, and secure secret storage in SSM Parameter Store. Innovative Tooling Integration: Integrate Google Gemini (or similar) into the CI pipeline for automated Terraform security scans and generation of actionable “security reports” as PR comments. Documentation & Collaboration: Maintain clear README files, module documentation, and step-by-step deployment guides. Participate in code reviews, design discussions, and post-mortems to continuously improve our DevOps practices. Required Qualifications Experience: 3+ years in AWS DevOps or Site Reliability Engineering roles, designing and operating production-grade cloud infrastructure. Technical Skills: Terraform (preferred) or CloudFormation for IaC. Container orchestration: ECS/Fargate or EKS with zero-downtime deployments. CI/CD: GitHub Actions, AWS CodePipeline, and CodeBuild (linting, testing, Docker, Terraform). Monitoring: CloudWatch Dashboards, custom metrics, log centralization, and alarm configurations. Security & Compliance: IAM policy design, KMS, GuardDuty, AWS Config, SSM Parameter Store. Scripting: Python, Bash, or Node.js for automation and Lambda functions. Soft Skills: Strong problem-solving mindset and attention to detail. Excellent written and verbal communication for documentation and cross-team collaboration. Ability to own projects end-to-end and deliver under tight timelines. 
Will be required to attend the Coimbatore office on request (Hybrid). Preferred Qualifications Hands-on experience integrating third-party security or code-analysis APIs (e.g., Google Gemini, Prisma Cloud). Familiarity with monitoring and observability best practices, including custom metric creation. Exposure to multi-cloud environments or hybrid cloud architectures. Certification: AWS Certified DevOps Engineer – Professional or AWS Certified Solutions Architect – Associate.
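The posting repeatedly mentions zero-downtime rolling updates. ECS/EKS handle this natively through deployment configuration, but the core batching idea can be sketched as follows (illustrative only; instance IDs are hypothetical):

```python
# Illustrative sketch of rolling-update batching: replace instances a few at a
# time so some capacity always stays in service.
def rolling_batches(instances, batch_size):
    """Split instances into sequential update batches."""
    if batch_size >= len(instances):
        raise ValueError("batch must leave at least one instance serving")
    return [instances[i:i + batch_size] for i in range(0, len(instances), batch_size)]

print(rolling_batches(["i-1", "i-2", "i-3", "i-4"], 2))
# [['i-1', 'i-2'], ['i-3', 'i-4']]
```

A real orchestrator would additionally wait for each batch to pass health checks before draining the next one, which is what makes the rollout zero-downtime.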

Posted 1 month ago


2.0 - 5.0 years

5 - 11 Lacs

Chennai

Hybrid

Job Title: Desktop Engineer (L1/L2 Support)
Work Location: Chennai & Bangalore
Experience: 2+ Years
Shift: Night Shift (Mandatory)
Work Schedule: Rotational Week Off
Interview Process: L1: Virtual Technical Interview; L2: Face-to-Face Technical Interview

Job Description:
We are seeking a skilled and proactive Desktop Engineer with over 2 years of experience to join our IT infrastructure support team. The ideal candidate will have strong experience in system monitoring, incident handling, and ticket management, with exposure to Linux/Unix environments and enterprise tools like Autosys and ServiceNow.

Key Responsibilities:
Provide Level 1 and Level 2 desktop support for end-users during night shifts.
Monitor batch jobs and system health using Autosys and other enterprise monitoring tools.
Handle incident and change management processes in compliance with ITIL practices.
Troubleshoot and resolve issues in Linux/Unix environments (mandatory).
Perform basic SQL queries for system diagnostics and reporting.
Create, update, and close tickets using ServiceNow or similar ticketing systems.
Escalate unresolved issues to higher support levels while ensuring proper documentation.
Participate in shift handovers and maintain a knowledge base for recurring issues.

Required Skills:
2+ years of experience in Desktop Support or IT Infrastructure Support roles
Strong hands-on experience with Autosys job scheduling and monitoring
Proficiency in at least one monitoring tool (e.g., Nagios, Zabbix, SolarWinds)
Unix/Linux troubleshooting skills (mandatory)
Working knowledge of SQL for basic data retrieval and troubleshooting
Experience with ServiceNow or similar ITSM ticketing tools
Knowledge of Incident Management and Change Management processes
Strong communication and coordination skills to support global teams

Nice to Have:
ITIL Foundation Certification
Exposure to scripting (Shell, Python) for automation
Experience working in a 24x7 environment

Work Environment:
Night shift (mandatory)
Rotational week offs
Dynamic team with growth opportunities in IT Infrastructure and Monitoring

If interested, please share your updated resume at the email below: muthukrishnan.saminathan@kiya.ai | 6369929072
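The responsibilities above include running basic SQL queries for diagnostics and reporting; a minimal sketch using Python's stdlib sqlite3 with a hypothetical tickets table (real ITSM data would live in ServiceNow or a production database):

```python
import sqlite3

# Hypothetical ticket table, just to illustrate a basic diagnostic query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, status TEXT, priority TEXT)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [(1, "open", "P1"), (2, "closed", "P3"), (3, "open", "P2")],
)

# How many tickets are still open, broken down by priority?
open_by_priority = conn.execute(
    "SELECT priority, COUNT(*) FROM tickets "
    "WHERE status = 'open' GROUP BY priority ORDER BY priority"
).fetchall()
print(open_by_priority)  # [('P1', 1), ('P2', 1)]
```

The GROUP BY / COUNT pattern here is the workhorse of most shift-handover and ticket-backlog reports.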

Posted 1 month ago


3.0 - 7.0 years

0 Lacs

udaipur, rajasthan

On-site

As a Backend Engineer (SDE-II) at YoCharge, a leading Electric Vehicle Charging & Energy Management SaaS startup, you will play a crucial role in scaling YoCharge's back-end platform and services to facilitate smart charging for electric vehicles. Based in Udaipur, this full-time on-site position offers an exciting opportunity to contribute to the advancement of the EV and Energy domain on a global scale. With 3-5 years of backend development experience using Python, Django, and FastAPI, you will leverage your expertise in WebSockets, async programming, and real-time APIs to enhance the efficiency and effectiveness of YoCharge's operations. Experience in scaling high-traffic distributed systems and familiarity with OCPP protocols and EV charging infrastructure will be beneficial. Your proficiency in SQL & NoSQL databases, such as PostgreSQL, Redis, and time-series databases, will be essential for optimizing performance, while your knowledge of DevOps practices, CI/CD pipelines, containerization (Docker, Kubernetes), and cloud services (AWS/Azure/GCP) will ensure seamless operations. Additionally, experience with monitoring and logging tools like Prometheus, Grafana, and ELK Stack will be advantageous. If you have prior exposure to IoT, energy management systems, or smart grid technologies, it will be considered a valuable asset. A Bachelor's degree in Computer Science or a related field is required, along with excellent communication skills, strong teamwork abilities, and the capacity to thrive in a dynamic and fast-paced environment while meeting deadlines. If you have a passion for Electric Vehicles & Energy, enjoy building innovative products, thrive in startup environments, and have experience in developing solutions at scale, you are the ideal candidate for this role at YoCharge. 
In return, you will have the opportunity to work on cutting-edge EV and clean energy solutions that are shaping the future of mobility, tackle real-world scalability and ML challenges, and collaborate with a diverse team of engineers, data scientists, and industry experts. Furthermore, competitive salary, performance-driven incentives, and ample growth opportunities await you as part of the YoCharge team.
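The role above calls for async programming and real-time APIs; a minimal asyncio sketch of concurrently polling charger status is shown below (the charger IDs and poll function are hypothetical stand-ins for real OCPP calls):

```python
import asyncio

async def poll_charger(charger_id, delay):
    """Pretend to poll one charger's status (stand-in for a real OCPP call)."""
    await asyncio.sleep(delay)
    return {"charger": charger_id, "status": "Available"}

async def poll_all(chargers):
    # Fan out to all chargers concurrently instead of polling them one by one.
    return await asyncio.gather(*(poll_charger(c, 0.01) for c in chargers))

results = asyncio.run(poll_all(["CP-1", "CP-2"]))
print([r["charger"] for r in results])  # ['CP-1', 'CP-2']
```

asyncio.gather preserves input order, which keeps per-charger results easy to correlate even though the polls run concurrently.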

Posted 1 month ago


5.0 - 12.0 years

0 Lacs

karnataka

On-site

Job Description: As an Engineering Manager, you will lead a high-performing team of 8-12 engineers and engineering leads in the end-to-end delivery of software applications through sophisticated CI/CD pipelines. Your role involves mentoring engineers to build scalable, resilient, and robust cloud-based solutions for Walmart's suite of products, contributing to quality and agility. Within Enterprise Business Services, the Risk Tech/Financial Services Compliance team focuses on designing, developing, and operating large-scale data systems and real-time applications. The team works on creating pipelines, aggregating data on Google Cloud Platform, and collaborating with various teams to provide technical solutions. Key Responsibilities: - Manage a team of engineers and engineering leads across multiple technology stacks, including Java, NodeJS, and Spark with Scala on GCP. - Drive design, development, and documentation processes. - Establish best engineering and operational practices based on product and scrum metrics. - Interact with Walmart engineering teams globally, contribute to the tech community, and collaborate with product and business stakeholders. - Work with senior leadership to plan the future roadmap of products, participate in hiring and mentoring, and lead technical vision and roadmap development. - Prioritize feature development aligned with strategic objectives, establish clear expectations with team members, and engage in organizational events. - Collaborate with business owners and technical teams globally, and develop mid-term technical vision and roadmap to meet future requirements. Qualifications: - Bachelor's/Master's degree in Computer Science or related field with a minimum of 12+ years of software development experience and at least 5+ years of managing engineering teams. - Experience in managing agile technology teams, building Java, Scala-Spark backend systems, and working in cloud-based solutions. 
- Proficiency in JavaScript, NodeJS, ReactJS, NextJS, CS Fundamentals, Microservices, Data Structures, and Algorithms. - Strong skills in CI/CD development environments/tools, writing modular and testable code, microservices architecture, and working with relational and NoSQL databases. - Hands-on experience with technologies like Spring Boot, concurrency, RESTful services, and cloud platforms such as Azure and GCP. - Knowledge of containerization tools like Docker and Kubernetes, and monitoring/alerting tools like Prometheus and Splunk. - Ability to lead a team, contribute to technical design, and collaborate across geographies. About Walmart Global Tech: Walmart Global Tech is a team of software engineers, data scientists, and service professionals at the forefront of retail disruption. We innovate to impact millions and reimagine the future of retail, offering opportunities for personal growth, skill development, and innovation at scale. Flexible Work Approach: Our hybrid work model combines in-office and virtual presence, ensuring collaboration, flexibility, and personal development opportunities across our global team. Benefits: In addition to competitive compensation, we offer incentive awards, best-in-class benefits, maternity/paternity leave, health benefits, and more. Equal Opportunity Employer: Walmart, Inc. is committed to diversity, inclusivity, and valuing unique identities, experiences, and opinions. We strive to create an inclusive environment where all individuals are respected and valued. Minimum Qualifications: - Bachelor's degree in computer science or related field with 5 years of experience in software engineering, or 7 years of experience in software engineering with 2 years of supervisory experience. Preferred Qualifications: - Master's degree in computer science or related field with 3 years of experience in software engineering. Location: Pardhanani Wilshire II, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India. R-1998235.

Posted 1 month ago

Apply

2.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

We're looking for a Staff DevOps Engineer to join Procore's Field DevOps Team. In this role, you'll be a key contributor in architecting and maintaining our CI/CD pipelines and infrastructure for native Android, iOS, and cross-platform mobile applications. Your primary goal will be to ensure the efficient, reliable, and secure delivery of our mobile products. As a Staff DevOps Engineer - Mobile Platform, you'll partner with mobile developers, QA engineers, and security teams to automate and optimize our build, test, and deployment processes. Use your deep understanding of delivery systems, strong infrastructure automation skills, and proficiency in cloud platforms to significantly improve our release velocity, enhance system stability, and strengthen our security posture. Join a team that is passionate about innovation and making a real impact on how the construction industry builds. Apply today. This position will be based in our Bangalore office. We're looking for someone to join us immediately. What you'll do: Architect and implement advanced CI/CD pipelines for mobile applications. Design, implement, and manage our AWS infrastructure using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Optimize and scale our cloud infrastructure to ensure high performance, reliability, and cost-effectiveness. Automate manual processes across the development and deployment lifecycle using Python and other scripting languages. Establish and maintain comprehensive monitoring and alerting systems to proactively identify and resolve issues. Collaborate with cross-functional teams including developers, product managers, and QA engineers to identify delivery bottlenecks. Integrate security best practices and compliance requirements into our CI/CD pipelines and infrastructure configurations. Collaborate closely with mobile development teams to understand their needs and provide effective DevOps solutions and support. 
Mentor junior engineers on DevOps best practices, tools, and technologies. Manage access to our systems and developers' accounts. What we're looking for: Bachelor's degree in Computer Science, Engineering, or a related field. 8+ years of experience in DevOps or a related engineering role with a focus on CI/CD and infrastructure. Deep expertise with at least one major CI/CD platform (CircleCI, Jenkins, GitLab CI) and GitHub Actions. Strong understanding of mobile build and release processes for native Android (Gradle) and iOS (Xcode, Fastlane). Proven proficiency in at least one scripting language, preferably Python, for automation. Experience with monitoring tools like DataDog, Honeycomb, or New Relic. Experience with AWS. Solid understanding of Infrastructure as Code (IaC) principles and experience with tools like Terraform or CloudFormation. Excellent communication and collaboration skills.
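As a flavor of the release automation this role describes, here is a minimal, hedged sketch of one common mobile-CI task: bumping the build number in a Gradle-style properties file before a pipeline build. The `versionName`/`versionCode` key layout is an illustrative assumption, not Procore's actual setup.

```python
# Hypothetical sketch: increment the versionCode in gradle.properties-style
# text as a pre-build CI step. Keys shown are assumptions for illustration.

def bump_build_number(properties_text: str) -> str:
    """Increment the versionCode line, leaving all other lines untouched."""
    lines = []
    for line in properties_text.splitlines():
        if line.startswith("versionCode="):
            code = int(line.split("=", 1)[1])
            line = f"versionCode={code + 1}"
        lines.append(line)
    return "\n".join(lines)

original = "versionName=2.4.1\nversionCode=187"
print(bump_build_number(original))  # versionCode becomes 188
```

In a real pipeline this would read and rewrite the file on disk and commit or tag the result; the string-in/string-out form here just keeps the sketch testable.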

Posted 1 month ago

Apply

5.0 - 6.0 years

7 - 8 Lacs

Pune

Work from Office

Role: DevOps Engineer Experience: 3+ Years Shift Timing - Rotational Company Overview We are a global empathy-led technology services company where software and people transformations go hand-in-hand. Product innovation and mature software engineering are part of our core DNA. Our mission is to help our customers accelerate their digital journeys through a global, diverse, and empathetic talent pool following outcome-driven agile execution. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to invest in our digital strategy, design, cloud engineering, data, and enterprise AI capabilities required to bring a truly integrated approach to solving our clients' most ambitious digital journey challenges. About We are seeking a highly skilled and motivated Junior DevOps Engineer to join our team and oversee our 24/7 DevOps support operations. In this role, you will lead a team of DevOps engineers responsible for ensuring the seamless operation of our platforms and applications. You will play a key role in implementing and maintaining our infrastructure, automating deployments, and optimizing our systems for performance and reliability. You will also be a champion for DevOps best practices and drive continuous improvement within the team. Requirements: 1. 3+ years of experience in DevOps and cloud environments. 2. Expertise in containerization (Docker, Kubernetes), CI/CD pipelines, infrastructure-as-code (Terraform), and monitoring tools (Grafana, Prometheus). 3. Strong understanding of cloud platforms (AWS, Google Cloud, IBM Cloud) and networking concepts. 4. Proficiency in scripting languages (Python, Bash) and configuration management tools (Ansible, Puppet, Chef). 5. Excellent communication and collaboration skills. Technology and Tools 1. Monitoring: Grafana, Prometheus, Alertmanager 2. Incident Management: Squadcast 3. Infrastructure as Code: Terraform 4. 
Configuration Management: Ansible, Puppet, or Chef (based on Zello's preference) 5. CI/CD: Jenkins, GitLab CI, or CircleCI (based on Zello's preference) 6. Logging: ELK stack, Splunk 7. Cloud Platforms: AWS, Google Cloud, IBM Cloud
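To illustrate the monitoring side of this stack, here is a small sketch of the idea behind a Prometheus alerting rule with a `for:` clause: an alert fires only after its condition has held continuously across several scrapes, which filters out one-off spikes. This is an illustrative model of the behavior, not Prometheus code.

```python
# Illustrative sketch (not Prometheus itself) of `for:`-style alert logic:
# only fire after the condition has held for N consecutive scrapes.

def should_fire(samples, threshold, for_scrapes):
    """True if the last `for_scrapes` samples all exceed `threshold`."""
    if len(samples) < for_scrapes:
        return False
    return all(s > threshold for s in samples[-for_scrapes:])

cpu = [0.42, 0.91, 0.95, 0.97]  # fraction of CPU used, one value per scrape
print(should_fire(cpu, threshold=0.9, for_scrapes=3))  # prints True
```

The equivalent Prometheus rule would pair an expression like `cpu_usage > 0.9` with `for: 3m`; Alertmanager then handles routing and deduplication of whatever fires.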

Posted 1 month ago

Apply

1.0 - 3.0 years

10 - 11 Lacs

Bengaluru

Work from Office

As a DevOps Engineer II, you will contribute actively to developing, implementing, and maintaining automated cloud infrastructures on AWS. This role requires a hands-on approach, collaborating with teams, and assisting in implementing DevOps best practices to streamline system operations. Responsibilities: Infrastructure Support: Assist in maintaining AWS-based infrastructure under supervision using CloudFormation or similar IaC tools. CI/CD Assistance: Contribute to the maintenance and enhancement of CI/CD pipelines for software deployment and delivery. Monitoring Assistance: Support setting up and managing monitoring tools such as CloudWatch, DataDog, or equivalent for system monitoring. Scripting Support: Assist in scripting tasks using Python, Bash, or similar languages for automation. Collaboration: Work closely with senior team members and peers to implement infrastructure changes and provide support as needed. Issue Identification: Identify and report infrastructure issues to ensure system stability and reliability. Qualifications: Experience: 1 to 3 years in DevOps or related roles focusing on cloud infrastructure. CloudFormation Exposure: Basic exposure to CloudFormation or equivalent tools for provisioning and managing AWS resources. CI/CD Understanding: Basic understanding of CI/CD concepts and an eagerness to contribute to their improvement. Monitoring Tools Familiarity: Basic knowledge of monitoring tools like CloudWatch, DataDog, or similar for system monitoring. Scripting Awareness: Basic scripting skills in languages like Python, Bash, etc., with a willingness to expand knowledge. AWS Services Familiarity: Basic understanding of various AWS services and their functionalities. Problem-solving Aptitude: Willingness to learn and solve technical challenges collaboratively. Team Collaboration: Ability to work within a team, learn from experienced members, and effectively communicate technical concepts. 
Continuous Learning Attitude: Candidates should showcase enthusiasm for learning and growing their skills within a dynamic environment, embracing opportunities to deepen their knowledge in AWS and DevOps practices.

Posted 1 month ago

Apply

1.0 - 3.0 years

5 - 6 Lacs

Mumbai, Navi Mumbai

Work from Office

Profile - L1 Application & Production Support Engineer Exp - 2+ yrs Location - Navi Mumbai Budget Details - 5-6 LPA Notice Period - Immediate or 15 days Key Responsibilities: Provide L1/L2 application and production support for enterprise systems. Perform patch deployments, version upgrades, and coordinate application releases. Analyze application and system logs to troubleshoot and resolve incidents. Monitor performance and health of applications using AppDynamics, VuNet, etc. Manage and track issues using Jira or other ticketing tools. Support and troubleshoot on Windows Server, Linux, IIS, and Apache Tomcat. Execute basic database queries and issue handling in MSSQL and Oracle environments. Follow up with internal teams for timely VAPT issue closure. Coordinate with network teams for resolving issues related to: Firewall rules Load balancer configuration DNS resolutions Required Technical Skills: OS: Windows Server and Linux (basic command line & troubleshooting) Web Servers: IIS and Apache Tomcat Databases: MSSQL, Oracle (basic support and query execution) Monitoring Tools: AppDynamics, VuNet (or similar APM tools) Ticketing Tools: Jira (incident, change, and problem tracking) Log Analysis: Ability to analyze logs from web servers, app servers, DBs Networking: Basic understanding of firewall, load balancer, DNS Communication: Clear and professional interaction with cross-functional teams Nice to Have: ITIL awareness (incident, change, and problem management) Scripting knowledge (Shell, PowerShell) Experience in 24x7 production support environment Familiarity with change/request/release processes.
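The log-analysis skill this role asks for often starts with something as simple as tallying errors by component to decide where to escalate. A minimal sketch, assuming a hypothetical space-delimited log format (timestamp, level, component, message), not any specific product's:

```python
# Sketch of L1-style log triage: count ERROR entries per component.
# The log line format is an assumption for illustration.

from collections import Counter

def tally_errors(log_lines):
    """Count ERROR entries per component from space-delimited log lines."""
    errors = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=3)
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2]] += 1
    return errors

logs = [
    "2024-05-01T10:00:01 INFO  tomcat Started server",
    "2024-05-01T10:00:05 ERROR tomcat Connection refused",
    "2024-05-01T10:00:09 ERROR mssql Login timeout",
    "2024-05-01T10:00:12 ERROR tomcat OutOfMemoryError",
]
print(tally_errors(logs).most_common())  # tomcat first with 2 errors
```

Real IIS, Tomcat, or database logs each have their own formats, so in practice the parsing line changes per source while the tallying idea stays the same.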

Posted 1 month ago

Apply

2.0 - 7.0 years

8 - 12 Lacs

Bengaluru

Work from Office

ResMed has always applied the best of technology to improve people's lives. Now our SaaS technology is fueling a new era in the healthcare industry, with dynamic systems that change the way people receive care in settings outside of the hospital, and tools that work every day to help people stay well, longer. We have one of the largest actionable datasets in the industry, creating a complete view of people as they move between care settings. This is how we empower providers, with vital insight to deliver the care people need, right when they need it. As a DevOps engineer, you will be responsible for: Building DevOps engineering capabilities across the ResMed SaaS organization. Enabling application teams to adopt a DevOps culture by setting up end-to-end CI/CD pipelines for infrastructure and application deployments. Supporting application teams in migrating applications from on-prem to cloud environments. Providing SRE support for all DevOps tools owned by the ResMed SaaS organization. Adopting new tools and technologies and making recommendations for continuous improvement of DevOps processes. Working closely with Enterprise Platform, Networking, and Security teams to follow best practices and compliance standards across the infrastructure and deployment pipelines. Working independently with minimum guidance. Required Qualifications: Bachelor's degree in a technical field with 2+ years of experience, or equivalent experience. Strong in CI/CD tools like GitHub Actions, Azure DevOps, or GitLab CI. Experience provisioning infrastructure using Terraform on AWS. Worked with Docker and container platforms (Kubernetes/EKS/AKS). Working experience in configuration management tools like Ansible/Chef. Good in logging and monitoring tools like ELK/Splunk/Datadog. Knowledgeable in security scanning tools for SAST, SCA, IaC, and container scanning. We commit to respond to every applicant.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

Job Title : Kafka Integration Specialist We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
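The partitioning responsibility mentioned above rests on one core idea: records with the same key always hash to the same partition, which is what preserves per-key ordering within a topic. An illustrative stdlib-only sketch of that mapping (Kafka's Java client actually uses murmur2 hashing; md5 here is just a stand-in):

```python
# Illustrative sketch of Kafka-style key partitioning (not the real client):
# same key -> same partition, so per-key ordering is preserved.

import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a record key to a partition (md5-based sketch;
    Kafka's Java producer uses murmur2, but the principle is identical)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

orders = ["customer-17", "customer-42", "customer-17"]
print([partition_for(k, num_partitions=6) for k in orders])
```

The first and third values are always equal: both "customer-17" records land on the same partition, so one consumer sees that customer's events in order. This is also why changing a topic's partition count after the fact breaks key-to-partition stability.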

Posted 1 month ago

Apply

7.0 - 8.0 years

19 - 22 Lacs

Mumbai

Work from Office

Role Responsibilities : - Design and implement scalable Azure DevOps solutions. - Develop Continuous Integration and Continuous Deployment (CI/CD) pipelines. - Automate infrastructure provisioning using Infrastructure as Code (IaC) practices. - Collaborate with software development teams to enhance product delivery. - Monitor system performance and optimize resource utilization. - Ensure application security and compliance with industry standards. - Lead DevOps transformations and best practices implementation. - Provide technical guidance and support to cross-functional teams. - Identify and resolve technical issues and bottlenecks. - Document and maintain architecture designs and deployment procedures. - Stay updated with the latest technologies and advancements in Azure. - Facilitate training sessions for team members on DevOps tools. - Engage with stakeholders to gather requirements and feedback. - Participate in planning and estimation activities for projects. - Contribute to a culture of continuous improvement and innovation. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - Minimum of 7 years of experience in DevOps engineering. - Proven experience with Azure DevOps tools and services. - Strong knowledge of CI/CD tools such as Azure Pipelines, Jenkins, or GitLab CI. - Experience with Infrastructure as Code tools such as Terraform or ARM Templates. - Hands-on experience with containerization technologies like Docker and Kubernetes. - Solid understanding of cloud architecture and deployment strategies. - Proficiency in scripting languages such as PowerShell, Bash, or Python. - Familiarity with Agile methodologies and practices. - Experience with monitoring tools like Azure Monitor or Grafana. - Excellent communication and collaboration skills. - Strong analytical and problem-solving abilities. - Ability to work independently in a remote team environment. 
- Certifications in Azure (e.g., Azure Solutions Architect Expert) are a plus. - A background in software development is advantageous.

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Mumbai

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 1 month ago

Apply

7.0 - 8.0 years

9 - 10 Lacs

Bengaluru

Work from Office

Job Title : Azure DevOps Architect (7+ years) Role Responsibilities : - Design and implement scalable Azure DevOps solutions. - Develop Continuous Integration and Continuous Deployment (CI/CD) pipelines. - Automate infrastructure provisioning using Infrastructure as Code (IaC) practices. - Collaborate with software development teams to enhance product delivery. - Monitor system performance and optimize resource utilization. - Ensure application security and compliance with industry standards. - Lead DevOps transformations and best practices implementation. - Provide technical guidance and support to cross-functional teams. - Identify and resolve technical issues and bottlenecks. - Document and maintain architecture designs and deployment procedures. - Stay updated with the latest technologies and advancements in Azure. - Facilitate training sessions for team members on DevOps tools. - Engage with stakeholders to gather requirements and feedback. - Participate in planning and estimation activities for projects. - Contribute to a culture of continuous improvement and innovation. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - Minimum of 7 years of experience in DevOps engineering. - Proven experience with Azure DevOps tools and services. - Strong knowledge of CI/CD tools such as Azure Pipelines, Jenkins, or GitLab CI. - Experience with Infrastructure as Code tools such as Terraform or ARM Templates. - Hands-on experience with containerization technologies like Docker and Kubernetes. - Solid understanding of cloud architecture and deployment strategies. - Proficiency in scripting languages such as PowerShell, Bash, or Python. - Familiarity with Agile methodologies and practices. - Experience with monitoring tools like Azure Monitor or Grafana. - Excellent communication and collaboration skills. - Strong analytical and problem-solving abilities. - Ability to work independently in a remote team environment. 
- Certifications in Azure (e.g., Azure Solutions Architect Expert) are a plus. - A background in software development is advantageous.

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Kolkata

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office

So, what's the role all about? The Prompt Engineer optimizes prompts to generative AI models across NiCE's Illuminate applications. As part of the Illuminate Research team, the Prompt Engineer works with several groups in the business to help our applications deliver the highest quality customer experience. The Prompt Engineer partners with global development teams to help diagnose and resolve prompt-based issues. This includes helping to define and execute tests for LLM-based systems that are difficult to evaluate with traditional test automation tools. The Prompt Engineer also helps educate the development teams on advances in prompt engineering and helps update production prompts to evolving industry best practices. How will you make an impact? Regularly review production metrics and specific problem cases to find opportunities for improvement. Help diagnose and resolve issues with production prompts in English. Refine prompts to generative AI systems to achieve customer goals. Collect and present quantitative analysis on solution success. Work with application developers to implement new production monitoring tools and metrics. Work with architects and Product Managers to implement prompts to support new features. Meet regularly with teams working in United States Mountain and Pacific time zones (UTC-7:00 and UTC-8:00). Review new prompts and prompt changes with Machine Learning Engineers. Consult with Machine Learning Engineers for more challenging problems. Stay informed about new advances in prompt engineering. Have you got what it takes? Fluent in written and spoken English. BS in a technology-related field such as computer science, business intelligence/analytics, or finance. 5-7 years of work experience in a technology-related industry or position. Familiarity with best practices in prompt engineering, including differences in prompts between major LLM vendors. Ability to develop and maintain good working relationships with cross-functional teams. 
Ability to clearly communicate and present to internal and external stakeholders. Experience with Python and at least one web app framework for prototyping, e.g., Streamlit or Flask. What's in it for you? Enjoy NiCE-FLEX! Requisition ID: 7815 Reporting into: Tech Manager Role Type: Individual Contributor About NiCE
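One way to test LLM systems that resist traditional test automation, as the role describes, is to keep prompts as versioned templates and assert cheap structural properties of the output instead of exact matches. A hedged sketch; the template, the stand-in model, and the checks below are all illustrative assumptions, not NiCE's actual tooling:

```python
# Sketch: versioned prompt template plus property-based output checks.
# PROMPT_V2, fake_model, and passes_checks are hypothetical examples.

from string import Template

PROMPT_V2 = Template(
    "You are a support summarizer. Summarize the call below in one sentence.\n"
    "Call transcript:\n$transcript"
)

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM API call; a real test would hit a model endpoint.
    return "Customer reported a billing error and was issued a refund."

def passes_checks(output: str) -> bool:
    """Structural checks usable where exact-match assertions would be brittle."""
    return output.endswith(".") and len(output.split()) <= 30

prompt = PROMPT_V2.substitute(transcript="Caller: my bill is wrong ...")
print(passes_checks(fake_model(prompt)))  # prints True
```

Checks like these (length bounds, required punctuation or fields, banned phrases) give a regression signal when a prompt or model version changes, complementing human review of sampled outputs.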

Posted 1 month ago

Apply

10.0 - 12.0 years

11 - 15 Lacs

Bengaluru

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. 
- Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : - 10+ years of hands-on experience as an AWS Engineer or a similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. - Proven ability to implement and enforce Security & Compliance measures, including IAM roles and boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection). 
- Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.
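The retry/backoff pattern named in the resilience requirements is commonly implemented as exponential backoff with "full jitter": each retry waits a random delay drawn from a window that doubles per attempt, up to a cap, so retrying clients don't stampede in lockstep. A sketch under those assumptions (the AWS SDKs ship their own retry modes; delays here are computed rather than slept so the example runs instantly):

```python
# Sketch of exponential backoff with full jitter, as used between retries
# of a failed remote call. Illustrative; not an AWS SDK retry implementation.

import random

def backoff_delays(max_attempts=5, base=0.5, cap=8.0, seed=0):
    """Delay before attempt n is drawn from U(0, min(cap, base * 2**n))."""
    rng = random.Random(seed)  # seeded only to make the illustration reproducible
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(max_attempts)]

delays = backoff_delays()
print([round(d, 2) for d in delays])  # windows grow: 0.5, 1, 2, 4, then capped at 8
```

After `max_attempts` failures a circuit breaker would open (fail fast instead of retrying), and with SQS the poisoned message would move to a DLQ for inspection; both are the complementary halves of the same fault-tolerance story this listing describes.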

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Location: Bangalore Experience: 3 - 5 Years The Skills that are Key to this role Azure Virtual Machines, Azure Virtual Desktop (AVD) hosting and support working experience. Expertise in Citrix technologies (Web Interface, StoreFront, Director, VDA, Receiver), Citrix XenDesktop, VMware, NetScaler, AppSense, Windows endpoint (thin & thick client) technologies, and PowerShell. Excellent troubleshooting skills on infrastructure-related issues. Fundamentals Ability to handle both infrastructures with minimal training. Knowledge of App-V, AppDNA, and other associated technologies. Basic knowledge of performance monitoring tools like Splunk and Kibana will be a value-add. Knowledge of networking and storage is an added advantage. Analyze problems within the environment and drive to root-cause resolution. Coordinate with various infra and app support groups to mitigate issues. Managing virtual machines hosted in a VMware ESX environment. Managing ICA policies in XenApp and XenDesktop. Technical / Behavioral Technical Skills Azure Cloud - Desktop and AVD technologies Working with Citrix technologies (Web Interface, StoreFront, Director, VDA, Receiver), Citrix XenApp and XenDesktop, CVAD Working with VMware, NetScaler, AppSense, Windows endpoint (thin & thick client) technologies, and PowerShell The Skills that are Good To Have for this role - Performance monitoring tools like Splunk, Kibana Citrix Cloud, AWS Workspaces Knowledge of networking and storage is an added advantage

Posted 1 month ago

Apply

6.0 - 12.0 years

8 - 14 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Description: RPA (Blue Prism) Developer Location - Gurugram Work Model - Hybrid (3 days mandatory work from home) Overview We are seeking an experienced Blue Prism Developer to join our dynamic team. The ideal candidate will have a strong background in Blue Prism, .NET programming, and PowerShell, and a solid understanding of reusability concepts and error handling. This role is perfect for someone who is innovative, eager to learn new technologies quickly, and can effectively apply their technical knowledge and experience. Key Responsibilities Design, build, and test applications using Blue Prism and one other programming language. Manage process scheduling and monitor processes via the control room. Utilize excellent debugging skills to troubleshoot and resolve issues efficiently. Implement and manage data gateways. Provide guidance and support to junior team members. Ensure adherence to best practices in error handling and reusability concepts. Work with monitoring tools like Sumo and Apps for optimal performance. Support and maintain applications, including Prod Support and on-call support activities. Required Qualifications 6 - 12 years of strong technical expertise in Blue Prism and one programming language. Certified in Blue Prism. Strong understanding of PowerShell capabilities. Proficient in SOAP and REST APIs. Good knowledge of database management (SQL Server) and ability to write DB queries. Experience with DevOps and agile ecosystems. Basic programming skills and a good understanding of advanced Blue Prism functions. Preferred Qualifications Exposure to monitoring and alerting tools like Sumo/Dynatrace. Domain/Functional knowledge in BFSI. Personal Attributes Innovative thinker with the ability to learn and use new technologies effectively. Strong problem-solving skills and excellent debugging capabilities. Ability to work independently and as part of a team. Excellent communication and interpersonal skills. 
Important Details for the Role:
Location: The selected candidate will work from the client's office in Cybercity, Gurugram, three days a week.
Shift Timings: The regular shift is from 6:30 am to 3:30 pm. Note that Genpact and the client will not provide transport or parking facilities.
Alternate Week Schedule: Every alternate week, candidates must log in at 3:30 am for two days; they can choose to work from home on these days.
On-call Support: One week every month, candidates will be required to provide on-call support and should be flexible to do this 24/7. They can work from home during this week.
Additional Skills:
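As a rough illustration of the REST API, error-handling, and reusability skills this role asks for, here is a minimal Python sketch of a reusable API wrapper with retry-based error handling. The function name, endpoint, and retry policy are illustrative assumptions, not part of the job description:

```python
import json
import urllib.request
import urllib.error


def call_rest_api(url, retries=3):
    """GET a JSON resource with simple retry-based error handling.

    Mirrors the reusability pattern the role describes: one shared
    wrapper with consistent error handling, instead of ad-hoc calls
    scattered across each automated process.
    """
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.loads(resp.read().decode("utf-8"))
        except (urllib.error.URLError, ValueError) as exc:
            # Record the failure and retry rather than aborting the
            # whole process on a transient error or bad payload.
            last_error = exc
    raise RuntimeError(f"API call failed after {retries} attempts: {last_error}")
```

In a Blue Prism context the same idea would typically live in a shared reusable object invoked from multiple processes, so scheduling and the control room see one consistent failure behaviour.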

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Pune

Work from Office

Essential Duties and Responsibilities
- Manage and support IAM systems, including user provisioning, role management, and access reviews.
- Ensure adherence to security policies and compliance requirements.
- Monitor the performance, availability, and capacity of systems and infrastructure, including SAP and Salesforce platforms.
- Resolve escalated technical issues and coordinate with internal teams and vendors as needed.
- Design, develop, and implement automation scripts using tools such as PowerShell, Python, or JavaScript to streamline processes, reduce manual tasks, and improve overall efficiency.
- Lead the investigation and resolution of complex incidents, including those related to SAP and Salesforce.
- Conduct root cause analysis and provide recommendations for permanent fixes and process improvements.
- Develop and maintain operational procedures and documentation for IT systems, including SAP and Salesforce platforms.
- Identify opportunities for process automation and service improvements to achieve greater reliability and efficiency.
- Work closely with security, development, and IT support teams to ensure the effective delivery of services and solutions across SAP, Salesforce, and other critical business platforms.
- Communicate technical information clearly and effectively to both technical and non-technical stakeholders.
- Participate in change management activities, ensuring that all changes to IT systems, infrastructure, and SAP/Salesforce environments are properly reviewed, tested, and documented.
- Evaluate and recommend new tools and technologies to improve operational performance.
- Lead or participate in the implementation of new systems or upgrades, especially those involving SAP and Salesforce integration.

Knowledge, Skills, and/or Abilities
- Strong understanding of IAM concepts, cloud services, and IT infrastructure (e.g., servers, databases, networking).
- Proficiency in automation and scripting using languages such as PowerShell, Python, or JavaScript.
- Experience developing complex automation solutions for system and application management.
- Proven ability to diagnose and troubleshoot complex technical issues in a multi-tiered environment, including issues related to SAP and Salesforce.
- Experience identifying process inefficiencies and implementing solutions that enhance reliability and productivity.
- Ability to manage multiple tasks and projects simultaneously while maintaining attention to detail and meeting deadlines.
- Excellent verbal and written communication skills, with the ability to translate technical information into business language.
- Knowledge of security best practices and compliance requirements (e.g., SOX, GDPR, HIPAA) as they relate to IAM, SAP, Salesforce, and IT operations.
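To give a concrete flavour of the access-review automation this role describes, here is a minimal Python sketch that flags accounts idle beyond a policy threshold. The record layout, field names, and 90-day cutoff are illustrative assumptions; a real IAM deployment would read this data from its own directory or API:

```python
from datetime import datetime, timedelta


def flag_stale_accounts(accounts, max_idle_days=90, now=None):
    """Return usernames whose last login is older than max_idle_days.

    A sketch of one small piece of an access review: the account dicts
    ("user", "last_login") are hypothetical, not a real IAM schema.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts if a["last_login"] < cutoff]
```

Output from a script like this would typically feed a ticketing or certification workflow, so de-provisioning decisions stay reviewed and documented rather than fully automatic.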

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies