Home
Jobs

2965 Provisioning Jobs - Page 29

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Key Responsibilities
- Monitor network performance and health using NMS and other monitoring tools.
- Provide L1 support for customer and internal network issues (via phone, email, or ticketing system).
- Escalate unresolved issues to L2/L3 engineers in a timely manner with detailed diagnostics.
- Perform basic troubleshooting of routers, switches, wireless links, and fiber networks.
- Coordinate with field teams and vendors for issue resolution and service restoration.
- Maintain detailed logs of incidents, escalations, and actions taken.
- Follow standard operating procedures (SOPs) and ensure SLA adherence.
- Assist in provisioning and configuration of new customer connections (under supervision).
- Provide updates and reports to clients and internal stakeholders on incident status.
- Work in shifts to ensure 24x7 support availability.

About the Company: Our dedicated team at Optimal Telemedia is committed to your success, enhanced productivity, and customer satisfaction. We serve businesses seeking guidance and support with enthusiasm and diligence. Your satisfaction is our priority, so we deliver unique solutions that meet your business needs and high standards in telecommunications and technology. Our systems integrate seamlessly with your operations, regardless of your location, and ensure reliable service delivery that follows strict protocols. Whether you require consultation, design, support, implementation or maintenance services, we have you covered. Your business deserves the best, and at Optimal, we're here to provide it.
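For context on the basic network-monitoring scripting an L1 NOC role like this touches, here is a minimal, illustrative Python reachability probe. The target addresses and ports are placeholders (RFC 5737 documentation addresses), not real customer endpoints, and this is only a sketch, not any specific NMS tooling.

```python
# Minimal TCP reachability/latency probe, illustrative only.
# Hosts and ports below are placeholders, not real customer endpoints.
import socket
import time

TARGETS = [("192.0.2.10", 22), ("192.0.2.20", 443)]  # RFC 5737 example addresses

def probe(host: str, port: int, timeout: float = 2.0) -> tuple[bool, float]:
    """Attempt a TCP connection and return (reachable, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

if __name__ == "__main__":
    for host, port in TARGETS:
        ok, elapsed = probe(host, port)
        status = "UP" if ok else "DOWN"
        print(f"{host}:{port} {status} ({elapsed * 1000:.0f} ms)")
```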

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site


Skills: MIS preparation, AutoCAD, Asset Management, Budget & Cost Control, coordination, Purchase Requisitions & Purchase Orders, Expense Reporting.

Mandatory Skills
- Proficiency in MS Word/PowerPoint/Excel/Project/Visio.
- Excellent verbal and written communication skills.

Soft Skills
- Interpersonal relationship management, time management, etc.

Responsibilities
Asset Management
- Updating the Asset Master and O&M history for assets.
Contracts Management
- Coordination with vendors for AMCs/ARCs.
- Coordination with vendors for HR/IR compliances.
- Tracking expiry of contracts/warranties, etc.
Budget & Cost Control (Capex & Opex)
- Preparation, monitoring & control of Opex & Capex proposals and budgets.
- Preparation of MIS (daily/weekly/monthly).
General
- Receive specs from the project team to compile and forward to vendors for quotations.
- Obtain quotations from various vendors.
- Coordination with the Procurement and Commercial teams for releasing Purchase Requisitions and Purchase Orders.
- Coordination with Warehouse/Stores for availability & delivery of material to various locations.
- Release of work orders to respective teams to ensure completion of provisioning activities.
- Updating/modification of all associated records (documents/drawings/tracking sheets).
- Maintain the Annual Maintenance Contract (AMC) equipment tracking sheet for all IDCs to monitor equipment warranties and AMC renewals.
- Maintain records and release reports related to budget/procurement/material availability (Purchase Requisitions & Purchase Orders, etc.).
- Track the budget month-wise/discipline-wise/location-wise.

Who can apply?
- Graduate in any stream, preferably in Commerce.
- 3 to 4 years of experience in MIS preparation & AutoCAD.
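As an illustration of the month-wise/discipline-wise budget MIS mentioned above, here is a small, hypothetical pandas sketch; the column names and figures are invented for the example and do not come from the posting.

```python
# Illustrative month-wise / discipline-wise budget MIS using pandas.
# Column names and figures are made up for the example.
import pandas as pd

records = [
    {"month": "2025-04", "discipline": "Electrical", "location": "Nagpur", "budget": 500000, "actual": 420000},
    {"month": "2025-04", "discipline": "HVAC",       "location": "Nagpur", "budget": 300000, "actual": 310000},
    {"month": "2025-05", "discipline": "Electrical", "location": "Pune",   "budget": 450000, "actual": 400000},
]
df = pd.DataFrame(records)
df["variance"] = df["budget"] - df["actual"]

# Month-wise x discipline-wise summary of spend against budget.
mis = pd.pivot_table(df, index="month", columns="discipline",
                     values=["budget", "actual", "variance"],
                     aggfunc="sum", fill_value=0)
print(mis)
```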

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a highly skilled and hands-on Data Engineer to join Controls Technology to support the design, development, and implementation of our next-generation Data Mesh and Hybrid Cloud architecture. This role is critical in building scalable, resilient, and future-proof data pipelines and infrastructure that enable the seamless integration of Controls Technology data within a unified platform. The Data Engineer will work closely with the Data Mesh and Cloud Architect Lead to implement data products, ETL/ELT pipelines, hybrid cloud integrations, and governance frameworks that support data-driven decision-making across the enterprise.

Key Responsibilities:
Data Pipeline Development:
- Design, build, and optimize ETL/ELT pipelines for structured and unstructured data.
- Develop real-time and batch data ingestion pipelines using distributed data processing frameworks.
- Ensure pipelines are highly performant, cost-efficient, and secure.
Apache Iceberg & Starburst Integration:
- Work extensively with Apache Iceberg for data lake storage optimization and schema evolution.
- Manage Iceberg catalogs and ensure seamless integration with query engines.
- Configure and maintain Hive Metastore (HMS) for Iceberg-backed tables and ensure proper metadata management.
- Utilize Starburst and Stargate to enable distributed SQL-based analytics and seamless data federation.
- Optimize performance tuning for large-scale querying and federated access to structured and semi-structured data.
Data Mesh Implementation:
- Implement Data Mesh principles by developing domain-specific data products that are discoverable, interoperable, and governed.
- Collaborate with data domain owners to enable self-service data access while ensuring consistency and quality.
Hybrid Cloud Data Integration:
- Develop and manage data storage, processing, and retrieval solutions across AWS and on-premise environments.
- Work with cloud-native tools such as AWS S3, RDS, Lambda, Glue, Redshift, and Athena to support scalable data architectures.
- Ensure hybrid cloud data flows are optimized, secure, and compliant with organizational standards.
Data Governance & Security:
- Implement data governance, lineage tracking, and metadata management solutions.
- Enforce security best practices for data encryption, role-based access control (RBAC), and compliance with policies such as GDPR and CCPA.
Performance Optimization & Monitoring:
- Monitor and optimize data workflows, performance tuning of queries, and resource utilization.
- Implement logging, alerting, and monitoring solutions using CloudWatch, Prometheus, or Grafana to ensure system health.
Collaboration & Documentation:
- Work closely with data architects, application teams, and business units to ensure seamless integration of data solutions.
- Maintain clear documentation of data models, transformations, and architecture for internal reference and governance.

Required Technical Skills:
Programming & Scripting:
- Strong proficiency in Python, SQL, and shell scripting. Experience with Scala or Java is a plus.
Data Processing & Storage:
- Hands-on experience with Apache Spark, Kafka, Flink, or similar distributed processing frameworks.
- Strong knowledge of relational (PostgreSQL, MySQL, Oracle) and NoSQL (DynamoDB, MongoDB) databases.
- Expertise in Apache Iceberg for managing large-scale data lakes, schema evolution, and ACID transactions.
- Experience working with Iceberg catalogs, Hive Metastore (HMS), and integrating Iceberg-backed tables with query engines.
- Familiarity with Starburst and Stargate for federated querying and cross-platform data access.
Cloud & Hybrid Architecture:
- Experience working with AWS data services (S3, Redshift, Glue, Athena, EMR, RDS).
- Understanding of hybrid data storage and integration between on-prem and cloud environments.
Infrastructure as Code (IaC) & DevOps:
- Experience with Terraform, AWS CloudFormation, or Kubernetes for provisioning infrastructure.
- CI/CD pipeline experience using GitHub Actions, Jenkins, or GitLab CI/CD.
Data Governance & Security:
- Familiarity with data cataloging, lineage tracking, and metadata management.
- Understanding of RBAC, IAM roles, encryption, and compliance frameworks (GDPR, SOC 2, etc.).

Required Soft Skills:
- Problem-Solving & Analytical Thinking: ability to troubleshoot complex data issues and optimize workflows.
- Collaboration & Communication: comfortable working with cross-functional teams and articulating technical concepts to non-technical stakeholders.
- Ownership & Proactiveness: self-driven, detail-oriented, and able to take ownership of tasks with minimal supervision.
- Continuous Learning: eager to explore new technologies, improve skill sets, and stay ahead of industry trends.

Qualifications:
- 4-6 years of experience in data engineering, cloud infrastructure, or distributed data processing.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Hands-on experience with data pipelines, cloud services, and large-scale data platforms.
- Strong foundation in SQL, Python, Apache Iceberg, Starburst, cloud-based data solutions (AWS preferred), and Apache Airflow orchestration.

Job Family Group: Technology
Job Family: Data Architecture
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
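To make the Iceberg/Spark portion of this role concrete, here is a minimal PySpark sketch that writes a small DataFrame to an Apache Iceberg table. The catalog name, warehouse path, and table name are placeholders, and it assumes a matching iceberg-spark-runtime package is on the Spark classpath; it is an illustrative sketch, not the employer's actual pipeline.

```python
# Sketch: writing batch ETL output to an Apache Iceberg table from PySpark.
# Catalog, warehouse path, and table names are placeholders; assumes the
# iceberg-spark-runtime package is available to Spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-etl-sketch")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "file:///tmp/iceberg-warehouse")
    .getOrCreate()
)

# A tiny stand-in for a real extracted/transformed DataFrame.
df = spark.createDataFrame(
    [(1, "ctrl-001", "OPEN"), (2, "ctrl-002", "CLOSED")],
    ["id", "control_id", "status"],
)

# DataFrameWriterV2: creates or replaces an Iceberg table in the catalog.
df.writeTo("demo.controls.control_status").using("iceberg").createOrReplace()

# Iceberg-backed tables stay queryable through SQL (and, with the right
# catalog wiring, through engines such as Starburst).
spark.sql(
    "SELECT status, COUNT(*) AS n FROM demo.controls.control_status GROUP BY status"
).show()
```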

Posted 1 week ago

Apply

2.0 years

0 Lacs

Desuri, Rajasthan, India

On-site


DevOps Engineer (AWS Solution Architect)
Job Category: Infrastructure Engineering
Job Type: Full Time
Job Location: India & Nepal
Reports To: Director of Cloud Infrastructure

We are looking for a DevOps Engineer to be responsible for our infrastructure and deployments in our multi-cloud environments. As a member of our engineering team, you will be involved in all things DevOps/SysOps/MLOps. You'll be responsible for planning and building tools for system configuration and provisioning. This role will also be responsible for maintaining any required infrastructure SLAs, both internal and external to the business. Our team is extremely collaborative. Interested candidates must be self-motivated, willing to learn, and willing to share new ideas to improve our team and process.

Responsibilities
- Perform technical maintenance of configuration management tools and release engineering practices to ensure technical changes are documented, comply with standard configurations, and are sustainable.
- Design, develop, automate, and maintain tools with an automate-first mindset to improve the quality and repeatability of software and infrastructure configuration development and deployment.
- Train software developers and system administrators in the use of pipeline tools and the implementation of quality standards.
- Oversee integration work and provide automated solutions in support of multiple products.
- Provide technical leadership, lead code reviews, and mentor other developers.
- Build systems that dynamically scale.
- Plan deployments.

Requirements
- Experience with AWS and GCP.
- Hands-on experience in Kubernetes (at least 2 years of K8s experience).
- Minimum 3+ years of experience with Unix-based systems.
- Working knowledge of Ansible or other configuration management tools.
- Experience in leading scripting tools (Python/Ruby/Bash, etc.).
- Experience with Jenkins or cloud-native CI/CD.
- Strong scripting and automation skills.
- Solid understanding of web applications.
- Experience in Windows and Linux automation using Ansible or similar.
- Excellent hands-on skills in Terraform and CloudFormation.

Great to Have
- Experience with Terraform
- Experience with Azure
- AWS Solution Architect (Pro) or DevOps Engineer (Pro)
- Experience with continuous deployments (CD)
- Experience with cloud-based autoscaling and elastic sizing
- Experience with relational database administration and SQL
- Experience with Redis, MongoDB, Memcached, Cassandra, or other non-relational storage
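As a small example of the automate-first tooling this posting describes, here is an illustrative boto3 sketch that flags EC2 instances missing a required tag. It assumes AWS credentials and a region are already configured, and the tag key is an arbitrary example, not a BerryBytes convention.

```python
# Sketch: flag EC2 instances missing an "owner" tag, the kind of small
# infrastructure-hygiene automation this role describes. Assumes AWS
# credentials are configured; the required tag key is an arbitrary example.
import boto3

REQUIRED_TAG = "owner"

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing the required tag:", untagged_instances())
```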

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site


Experience: 8+ years

Job Purpose
Seeking a Senior Cloud/DevOps Engineer with deep expertise in Microsoft Azure to design, implement, and optimize scalable, secure cloud-based solutions. The role drives continuous delivery, improves deployment frequency, and enhances platform reliability through infrastructure automation and DevSecOps practices. Requires collaboration across teams to ensure best practices in IaC, monitoring, security compliance, and cost optimization.

Job Description / Duties and Responsibilities
- Collaborate with development teams to establish and enhance continuous integration and delivery (CI/CD) pipelines, including source code management, build processes, and deployment automation.
- Publish and disseminate CI/CD best practices, patterns, and solutions.
- Design, configure, and maintain cloud infrastructure components using platforms such as Azure, AWS, GCP, and other cloud providers.
- Implement and manage infrastructure as code (IaC) using tools like Terraform, Bicep, or CloudFormation to ensure consistency, scalability, and repeatability of deployments.
- Monitor and optimize cloud-based systems, addressing performance, availability, and scalability issues.
- Implement and maintain containerization and orchestration technologies like Docker and Kubernetes to enable efficient deployment and management of applications.
- Collaborate with cross-functional teams to identify and resolve operational issues, troubleshoot incidents, and improve system reliability.
- Establish and enforce security best practices and compliance standards for cloud infrastructure and applications.
- Automate infrastructure provisioning, configuration management, and monitoring tasks using tools like Ansible, Puppet, or Chef.
- Ensure that the service’s uptime and response-time SLAs/OLAs are met or surpassed.
- Proactively build and maintain CI/CD building blocks and shared libraries for application and development teams to enable quicker builds and deployments.
- Actively participate in bridge calls with team members and contractors/vendors to prevent or quickly address problems.
- Troubleshoot, identify, and fix problems in the DevSecOps domain.
- Ensure incident tracking tools are updated in accordance with established norms and processes; gather all essential data and document any discoveries and concerns.
- Align with technological Systems/Software Development Life Cycle (SDLC) processes and industry-standard service management principles (such as ITIL).
- Create and publish engineering platforms and solutions.

Job Specification / Skills and Competencies
- Expertise in any one of Azure/AWS/GCP; Azure is a mandatory requirement.
- 8+ years of related job experience.
- Strong experience in containerization and container orchestration technologies (Docker, Kubernetes, etc.).
- Strong experience with infrastructure automation tools like Terraform/Bicep/CloudFormation, etc.
- Knowledge of DevOps automation (Terraform, GitHub, GitHub Actions).
- Good knowledge of monitoring/observability tools and processes, including CloudWatch, the ELK stack, CloudTrail, Kibana, Grafana, and Prometheus; infra monitoring using Nagios or Zabbix.
- Experience working with an Operations team in an Agile development model and across all SDLC phases.
- Comprehensive technical expertise in a variety of DevSecOps toolkits, including Ansible, Jenkins, Artifactory, Jira, Black Duck, Terraform, Git/version control software, or comparable technologies.
- Familiarity with information security frameworks and standards.
- Exposure to the Render cloud platform is desirable and considered a plus.
- Familiarity with API security, container security, and Azure cloud security.
- Excellent analytical and interpersonal skills.
- Strong debugging/troubleshooting skills.

Any Additional Information/Specifics
- Adhere to the Information Security Management policies and procedures.

Job Location: Kochi / Trivandrum
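To illustrate the kind of Azure inventory/monitoring automation this role references, here is a minimal sketch using the Azure SDK for Python. The subscription ID is a placeholder, and it assumes the azure-identity and azure-mgmt-compute packages are installed with credentials that DefaultAzureCredential can resolve; it is not the employer's actual tooling.

```python
# Sketch: enumerate VMs in an Azure subscription as a starting point for
# inventory/monitoring automation. Assumes azure-identity and
# azure-mgmt-compute are installed and DefaultAzureCredential can
# authenticate; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def list_vms() -> None:
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list_all():
        print(f"{vm.name}  {vm.location}  {vm.hardware_profile.vm_size}")

if __name__ == "__main__":
    list_vms()
```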

Posted 1 week ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Your IT Future, Delivered.
DevOps Engineer

With a global team of 5600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the biggest logistics company in the world. All our locations have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered.

Join us at the DHL Group Digital Platforms department, where innovation thrives and technology evolves. With a presence across three countries and two continents, we're united by the drive to create a one-stop shop for all DHL Group APIs. Our startup spirit within a stable, large-scale company framework propels us to integrate Agile and DevSecOps into our secure, efficient, and flexible delivery and support operations.

About the Project: Embark on an exciting journey with the DHL Developer Portal, a pivotal platform for API system integration. This front-end solution is designed to enhance the logistics experience for both internal and external customers, making API documentation and integration more accessible and user-friendly. We are leveraging cutting-edge technologies, including React for dynamic user interfaces, Node.js for efficient server-side operations, and PHP for robust back-end support. Additionally, we utilize Google PaaS, Google Apigee for API management, and Google Kubernetes Engine for scalable deployment. Our development process follows the Scrum methodology, ensuring agility and continuous improvement as we strive to deliver an exceptional user experience.

#DHL #DHLITServices #GreatPlace #digitalplatforms #api #APIPlatform

Grow together. Your Role as Software Engineer:
- Design, develop, and maintain high-quality front-end components for the DHL Developer Portal, ensuring an intuitive and responsive user interface that effectively showcases our API documentation.
- Work closely with the development team, Product Owner, and UX/UI designers to translate business and technical requirements into user-friendly web applications.
- Ensure great code quality and best practices in front-end development, including code reviews, testing, and optimization for performance and accessibility.
- Collaborate with the API platform and developers to integrate APIs seamlessly into the front end, ensuring smooth data flow and functionality.
- Participate in the setup and management of CI/CD pipelines specifically for front-end deployments, ensuring efficient and automated delivery workflows.
- Engage in troubleshooting and debugging, taking ownership of delivering high-quality code and solutions while actively seeking opportunities to enhance user experience.
- Embrace continuous learning by exploring new front-end technologies and frameworks to improve our development processes and user experience.
- Provide support and guidance to users of the Developer Portal, addressing any front-end related issues and gathering feedback for future enhancements.

Ready to embark on the journey? Here’s what we are looking for:
- Strong knowledge of front-end development with a focus on JavaScript, React, and Node.js web apps, with at least 5+ years of experience.
- Understanding of JS automated testing (e.g. Jest, Cypress, Playwright).
- Scripting skills in languages like Python, Bash, or similar to automate routine tasks.
- Strong communication skills to effectively collaborate with technical teams and other stakeholders.
- Proven experience or knowledge in cloud platforms (such as Google Cloud Platform) and DevOps tools (e.g., Jenkins, Kubernetes, Docker).
- Familiarity with Agile methodologies and willingness to work in an Agile team environment.
- A minimum of 3-5 years’ experience in IT, with a focus on cloud infrastructure or DevOps.

Nice to have:
- Experience in Java, PHP and Drupal.
- Experience of development for DMS (Document Management Systems).
- Advanced cloud certifications: certifications in cloud platforms such as AWS Certified Solutions Architect, Google Cloud Associate Engineer, or Azure Administrator are beneficial.
- Infrastructure as Code (IaC): experience with tools like Terraform or CloudFormation to automate infrastructure provisioning and management.
- Monitoring and logging: familiarity with monitoring tools (e.g., Prometheus, Grafana, CloudWatch) and logging tools (e.g., ELK Stack, Splunk) to ensure system health and quick incident response.
- Security best practices: knowledge of security principles, including IAM (Identity and Access Management), data encryption, and vulnerability scanning, to maintain a secure infrastructure.
- Networking fundamentals: basic understanding of networking concepts, such as VPCs, subnetting, firewalls, and load balancing, to design reliable and efficient cloud architectures.
- Containerization and orchestration: experience with Docker and Kubernetes to manage and scale containerized applications.
- Scripting and automation: advanced skills in scripting languages (e.g., Python, Bash) to automate workflows and infrastructure management.
- Continuous Integration/Continuous Deployment (CI/CD): familiarity with creating and maintaining CI/CD pipelines, enhancing development speed and release reliability.
- Problem-solving mindset: demonstrated ability to troubleshoot and resolve complex infrastructure issues quickly and efficiently.

An array of benefits for you:
- Hybrid work arrangements to balance in-office collaboration and home flexibility.
- Annual leave: 42 days off apart from public/national holidays.
- Medical insurance: self + spouse + 2 children, with an option to opt for voluntary parental insurance (parents/parents-in-law) at a nominal premium covering pre-existing diseases.
- In-house training programs: professional and technical training and certifications.
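As an example of the routine-task scripting this posting asks for, here is a small, hypothetical smoke test of a few portal endpoints. The URLs are placeholders, not real DHL endpoints, and the sketch only illustrates the general pattern.

```python
# Sketch: smoke-test a few (placeholder) portal endpoints and report
# failures -- the kind of routine-task automation mentioned above.
import requests

ENDPOINTS = [
    "https://developer.example.com/healthz",
    "https://developer.example.com/api/v1/catalog",
]

def smoke_test() -> int:
    failures = 0
    for url in ENDPOINTS:
        try:
            ok = requests.get(url, timeout=10).status_code == 200
        except requests.RequestException:
            ok = False
        print(f"{'OK  ' if ok else 'FAIL'} {url}")
        failures += 0 if ok else 1
    return failures

if __name__ == "__main__":
    # Non-zero exit code makes this usable as a CI/CD gate.
    raise SystemExit(smoke_test())
```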

Posted 1 week ago

Apply

7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications
Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
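To ground the security-group and access-review responsibilities this posting lists, here is an illustrative boto3 sketch that finds rules open to 0.0.0.0/0. It assumes configured AWS credentials, the region is a placeholder, and it is only a sketch of the general technique, not EY tooling.

```python
# Sketch: audit AWS security groups for ingress rules open to the world.
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

def world_open_rules(region: str = "ap-south-1") -> list[tuple[str, object]]:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings

if __name__ == "__main__":
    for group_id, port in world_open_rules():
        print(f"{group_id} allows 0.0.0.0/0 on port {port}")
```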

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications
Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
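As a small example of the CloudWatch-based log monitoring this listing mentions, here is a hypothetical boto3 sketch that pulls recent ERROR events from a log group. The log group name is a placeholder and the snippet assumes configured AWS credentials; it illustrates the technique only.

```python
# Sketch: pull recent ERROR lines from a CloudWatch Logs group.
# The log group name is a placeholder; assumes AWS credentials are configured.
import time
import boto3

LOG_GROUP = "/example/app/prod"  # placeholder

def recent_errors(minutes: int = 15) -> None:
    logs = boto3.client("logs")
    start_ms = int((time.time() - minutes * 60) * 1000)
    resp = logs.filter_log_events(
        logGroupName=LOG_GROUP,
        startTime=start_ms,
        filterPattern="ERROR",
        limit=50,
    )
    for event in resp.get("events", []):
        print(event["timestamp"], event["message"].rstrip())

if __name__ == "__main__":
    recent_errors()
```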

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Kolkata, West Bengal, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY - Cyber Security - IAM - Consulting - Risk
As part of our EY cyber security team, you shall engage in Identity & Access Management projects in the capacity of execution of deliverables. An important part of your role will be to actively establish, maintain and strengthen internal and external relationships. You’ll also identify potential business opportunities for EY and GTH within existing engagements and escalate these as appropriate. Similarly, you’ll anticipate and identify risks within engagements and share any issues with senior members of the team.

The opportunity
We’re looking for a Security Analyst / Consultant in the Risk Consulting team to work on various Identity and Access Management projects for our customers across the globe. The professional shall also need to report any identified risks within engagements and share any issues and updates with senior members of the team. In line with EY’s commitment to quality, you’ll confirm that work is of the highest quality as per EY’s quality standards and is reviewed by the next-level reviewer. As an influential member of the team, you’ll help to create a positive learning culture, and coach and counsel junior team members to help them develop.

Your Key Responsibilities
- Engage in and contribute to Identity & Access Management projects.
- Work effectively as a team member: share responsibility, provide support, maintain communication and update senior team members on progress.
- Execute the engagement requirements, along with review of work by junior team members.
- Help prepare reports and schedules that will be delivered to clients and other interested parties.
- Develop and maintain productive working relationships with client personnel.
- Build strong internal relationships within EY Consulting Services and with other services across the organization.
- Help senior team members in performance reviews and contribute to performance feedback for staff/junior-level team members.
- Contribute to people-related initiatives, including recruiting and retaining IAM professionals.
- Maintain an educational program to continually develop personal skills.
- Understand and follow workplace policies and procedures.
- Build a quality culture at GTH.
- Manage performance management for direct reportees, as per the organization’s policies.
- Foster teamwork and lead by example.
- Train and mentor project resources.
- Participate in organization-wide people initiatives.

Skills and Attributes for Success
- Hands-on experience in end-to-end implementation of an Identity and Access Management tool; completed at least 2-5 implementations.
- Good understanding of Identity and Access Management solutions.
- Hands-on Java development and debugging experience.
- Strong understanding of Java APIs, libraries and methods, and a good understanding of XML.
- Capable of dissecting large problems and designing modular, scalable solutions.
- Familiarity with any Java framework (Struts/Spring) is an additional advantage.
- Familiarity with application servers such as Tomcat and WebLogic.
- Good understanding of RDBMS and SQL queries.
- Hands-on experience in setting up Identity and Access Management environments in standalone and cluster configurations.
- Hands-on development experience with provisioning workflows, triggers, rules and customization of the tool as per requirements.
- Strong understanding of LDAP (Lightweight Directory Access Protocol).
- Capability to understand business requirements and convert them into design.
- Good knowledge of information security, standards and regulations.
- Flexibility to work on new technologies in the IAM domain.
- Experience in a techno-functional role on Identity and Access Management implementations.
- Experience in a client-facing role.
- Thorough hands-on experience in the respective tool, involving configuration, implementation & customization.
- Deployment of web applications & basic troubleshooting of web application issues.
- Ability to liaise with business stakeholders, seek requirement clarification, and map business requirements to technical specifications.
- Use case design, solution requirements specification, and mapping business requirements to technical requirements (traceability matrix).
- Architecture design (optimising the resources made available – servers and load sharing, etc.).
- Involvement in a successful pursuit of a potential client by being part of the RFP response team.

To qualify for the role, you must have
- Bachelor’s or master’s degree in a related field, or equivalent work experience.
- Strong command of verbal and written English.
- Experience in HTML, JSP and JavaScript.
- Strong interpersonal and presentation skills.
- 5-7 years’ work experience.

Saviynt - Senior Security Consultant - IAM
- 6 years of experience in the field of IT services, with over 2 years of experience in Identity and Access Management.
- Saviynt implementation experience across various projects.
- Engineer, develop and maintain enterprise IAM solutions using the Saviynt IGA tool.
- Develop and build new application integrations, account and entitlement connectors, and perform periodic certification reviews on the Saviynt platform.
- Design and develop new access request forms in Saviynt based on business needs.
- Enhance review definitions and generation of reviews for quarterly audits.
- Support new application onboarding with Saviynt Security Manager (SSM).
- Experience in the development phase for one or more Saviynt components: Build Warehouse, Access Request System (ARS), Rule Management, User Provisioning, Access Certification, Identity Analytics, Segregation of Duties.
- Good knowledge of one or more of the following modules in the Saviynt IGA tool: Application Onboarding (provisioning/de-provisioning), birthright provisioning, application workflows, analytics/reporting services and delegation.
- Good knowledge of configuring workflows in the Saviynt IGA tool.
- Good knowledge of configuring birthright rules for user onboarding workflows.
- Involvement in the creation of XML jobs in the Saviynt IGA tool.
- Verify and ensure that a user’s entitlements in an application/platform are appropriate based on the individual’s business role and job function; remediate access that is no longer required.
- Collaborate with other IAM engineers and architects on major initiatives.
- Be a strong individual contributor who improves IAM service offerings.
- Develop and maintain IAM technical documentation, code repositories, and development environments.
- Provide guidance to the IAM operations team and serve as an escalation point for resolving operational incidents.
- Operate as a technical subject matter expert and advise project teams regarding integration with IAM technologies.

Skills Expertise
- Saviynt IGA v5.0 or later.
- Knowledge of MySQL.
- Scripting knowledge such as Shell, PowerShell, Perl, etc.
- Good soft skills, i.e. verbal & written communication, technical document writing, etc.
- Exposure to global security standards, e.g. PCI, SOX, HIPAA, etc.
- Experience in managing small to large-sized organizations.
- Prior experience working in remote teams on a global scale.
- Customer orientation skills.

Certification
- Saviynt L100/L200 certification (good to have).
- ITIL or equivalent (good to have).

Work Requirements
- Willingness to travel as required.
- Willingness to be an on-call support engineer and work occasional overtime as required.
- Willingness to work in shifts as required.

What We Look For
- Hands-on experience in setting up Identity and Access Management environments in standalone and cluster configurations.
- Hands-on development experience with provisioning workflows, triggers, rules and customization of the tool as per requirements.

What Working at EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
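To illustrate the LDAP fundamentals this IAM role expects, here is a minimal sketch with the ldap3 library. The host, bind DN, base DN, and search filter are placeholders for a fictional directory, not any real environment or Saviynt configuration.

```python
# Sketch: a basic LDAP lookup with ldap3, the kind of directory interaction
# an IAM engineer works with. Host, bind DN, and base DN are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://ldap.example.com")
conn = Connection(
    server,
    user="cn=svc-iam,ou=service,dc=example,dc=com",
    password="change-me",          # placeholder credential
    auto_bind=True,
)

# Find accounts in a (hypothetical) department and read the attributes an
# IGA tool would typically reconcile.
conn.search(
    search_base="ou=people,dc=example,dc=com",
    search_filter="(&(objectClass=inetOrgPerson)(departmentNumber=4100))",
    search_scope=SUBTREE,
    attributes=["uid", "mail", "title", "manager"],
)
for entry in conn.entries:
    print(entry.uid, entry.mail)

conn.unbind()
```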

Posted 1 week ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications
Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
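As an illustration of the Terraform/IaC review work described in this listing, here is a hypothetical Python wrapper that summarizes a saved plan via `terraform show -json`. It assumes the Terraform CLI is installed and that a plan file named tfplan already exists in the working directory; it is a sketch of the technique, not a specific team's tooling.

```python
# Sketch: summarize a saved Terraform plan by parsing `terraform show -json`.
# Assumes the Terraform CLI is installed and a plan file named "tfplan"
# has already been produced with `terraform plan -out=tfplan`.
import json
import subprocess
from collections import Counter

def summarize_plan(plan_file: str = "tfplan") -> None:
    out = subprocess.run(
        ["terraform", "show", "-json", plan_file],
        check=True, capture_output=True, text=True,
    ).stdout
    plan = json.loads(out)
    actions = Counter()
    for change in plan.get("resource_changes", []):
        acts = tuple(change["change"]["actions"])
        actions[acts] += 1
        print(f"{list(acts)}  {change['address']}")
    print("Summary:", dict(actions))

if __name__ == "__main__":
    summarize_plan()
```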

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications
Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
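As a sketch of how the Terraform-based provisioning work described above might be checked for drift in automation, the snippet below shells out to the Terraform CLI; it assumes `terraform` is on the PATH and that the working directory has already been initialised with `terraform init`.

```python
import subprocess


def plan_has_changes(workdir: str) -> bool:
    """Run `terraform plan` and report whether changes are pending.

    With -detailed-exitcode, Terraform exits 0 for no changes,
    2 when changes are present, and 1 on error.
    """
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2


if __name__ == "__main__":
    # "./infra" is a placeholder module directory.
    print("Drift detected:", plan_has_changes("./infra"))
```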

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

A career in our Advisory Service Delivery Centre is the natural extension of PwC’s leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability for client engagements.

Responsibilities
As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to:
Proactively assist in the management of several clients, while reporting to Managers and above
Train and lead staff
Establish effective working relationships directly with clients
Contribute to the development of your own and the team’s technical acumen
Keep up to date with local and national business and economic issues
Be actively involved in business development activities to help identify and research opportunities on new/existing clients
Continue to develop internal relationships and your PwC brand

Job Description: SAP ABAP with either BODS/HANA/PI/UI5-Fiori

Roles/Responsibilities
Understand client requirements, provide solutions, functional specifications and implement technical components accordingly.
Ability to create Technical Design Documents (TDD) and Unit Test documents for the technical solutions being implemented.
Excellent communication, analytical and interpersonal skills as a consultant, playing a key role in implementations from Blueprint to Go-Live.
In addition, the candidate should have been involved in the following during the life cycle of an SAP implementation:
Unit Testing, Integration Testing
User support activities
Exposure to ASAP and other structured implementation methodologies
Regularly interact with the onsite team/client
Provide status updates in daily/weekly conference calls
Maintain a cordial relationship with the onsite team/client

Required Experience
5 to 9 years of hands-on experience in ABAP development
2 years in OData development using SAP Gateway
Strong knowledge in Forms (SAP Scripts / Smart Forms / Adobe Forms), Reports (ALV / Classical), Interfaces (ALE/IDoc, BAPI), Conversions (LSMW/BDC), Enhancements (User Exits, BADI, Enhancement Spots), Object-Oriented ABAP, Workflows (development, configuration)
OData (SAP OData framework, Eclipse IDE and SAP Web IDE, OData service creation and implementation)
Good experience in building OData services using NetWeaver Gateway and ABAP
Should have done at least 2 SAP implementation/rollout projects
Familiarity with the basic business processes in any of the following functional areas: SAP Financials (FI/CO/PS), SAP Logistics (SD/MM/PP/PM), SAP HR
Should have at least 1 year of working experience in one of the following skills: SAP BODS, SAP HANA, SAP PI/PO, SAP UI5/Fiori

Details of the combination skills above

BODS
Strong hands-on SAP BODS resource with 4+ years of experience.
ETL design and implementation involving extraction and provisioning of data from a variety of legacy systems.
Should be well versed in design, development and implementation with SAP and non-SAP data sources.
End-to-end implementation experience with at least two full life cycle implementations is a must.
At least one SAP BODS 4.1 project implementation experience
Experience in data migration projects between various application databases
Expertise to handle data provisioning and error handling from various sources including Protean, SAP ECC, MS Dynamics and Platinum systems
Strong SQL/PL SQL programming skills
Performance tuning and optimization experience
Experience with the admin console, designer and server manager tools

PI/PO
Strong hands-on experience in PI/PO/HCI development
Should have at least 4 years of hands-on experience using PI/PO to design and build A2A and B2B integrations
Should be proficient in developing ESR and IR objects, graphical and Java mapping, and proficient in XML technologies

UI5/Fiori
Strong SAP UI5 developer with real-time working experience of 3+ years and a minimum of 3 end-to-end SAP UI5 implementations
SAP UI5 development experience in developing/enhancing SAPUI5 and SAP Fiori apps
Understanding of the web development framework, including HTML5, CSS, JavaScript and jQuery
Experience in developing SAPUI5 solutions using Eclipse and SAP WebIDE

Nice To Have
Good experience in SAP UI5/Fiori app development, implementation and configuration
Good experience in SAP HANA CDS views
Good experience in using the SAP BOPF framework

Education: B.Tech, M.Tech, MBA, M.Com, B.E, B.A, B.Com
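To give a flavour of the OData side of this role, here is a minimal Python sketch that reads an entity set from an SAP Gateway OData service; the host, service and entity names are hypothetical, basic authentication is assumed, and the payload shape shown is the standard OData V2 JSON format.

```python
import requests

# Hypothetical SAP Gateway host and OData service/entity set.
BASE_URL = "https://sap-gw.example.com/sap/opu/odata/sap/ZSALES_ORDER_SRV"
ENTITY_SET = "SalesOrderSet"

session = requests.Session()
session.auth = ("SERVICE_USER", "SECRET")       # assumption: basic auth
session.headers.update({"Accept": "application/json"})

resp = session.get(
    f"{BASE_URL}/{ENTITY_SET}",
    params={"$top": "5", "$format": "json"},    # standard OData query options
    timeout=30,
)
resp.raise_for_status()

for record in resp.json()["d"]["results"]:      # OData V2 payload shape
    print(record.get("SalesOrderID"), record.get("NetAmount"))
```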

Posted 1 week ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
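As an illustration of the access-management duties listed in this role, here is a small boto3 sketch that flags IAM users without an MFA device; it assumes the caller has `iam:ListUsers` and `iam:ListMFADevices` permissions.

```python
import boto3

iam = boto3.client("iam")


def users_without_mfa():
    """Yield IAM user names that have no MFA device attached."""
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                yield user["UserName"]


if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"MFA missing for IAM user: {name}")
```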

Posted 1 week ago

Apply

0 years

0 Lacs

Guntur West, Andhra Pradesh, India

On-site

Linkedin logo

Risk Management Group works closely with our business partners to manage the bank’s risk exposure by balancing its objective to maximise returns against an acceptable risk profile. We partner with origination teams to provide financing, investments and hedging opportunities to our customers. To manage risk effectively and run a successful business, we invest significantly in our people and infrastructure.

Job Purpose
To give effective credit support for meeting the business targets
To frame the lending policy framework, processes and workflow, and assist in setting up the mortgages business in India
To approve credit applications as per DOA judiciously
Ensuring that the quality of the Bank’s exposure in India is acceptable at all times
Provide guidance to the team of relationship managers on areas of policy and processes

Key Accountabilities
Underwriting files as per the laid-down policy, process and other controls for the C-LAP business
To recommend/approve credit applications in line with agreed process workflows and policies after highlighting/considering all significant risks
To keep track of delinquency levels and ensure that requisite follow-up is done so that overdues are regularised in a time-bound manner
To have a sound knowledge of the respective market, especially the C-LAP customer segment, in terms of risks associated with this product
Vendor management (legal, technical and others if applicable)
To apply knowledge of RBI Regulations/MAS Guidelines that govern credit dispensation, including the Loan Grading, Provisioning and Asset Classification regulations
To have a working knowledge of the general legal framework in which the bank operates in India and apply the same

Job Duties And Responsibilities
Set up policy/processes for C-LAP
Approve mortgage applications for his/her respective branch/location/region
Portfolio monitoring/tracking and escalation of adverse new events in the portfolio
Ensure meticulous compliance with the Bank’s internal credit policy as well as regulatory guidelines
Ensure compliance with the benchmark turnaround time
Ensure proper guidance/support to the team of relationship managers
Ensure processing of files within agreed timelines

Requirements
Six to ten years of experience in mortgages in Consumer Banking (preferably lending to MSME customers)
Professional qualification, graduate or postgraduate degree, preferably in business, accountancy, economics or finance, along with sound domain knowledge of the mortgages business and market

Core Competencies
Good analytical skills
Good presentation skills
Good interpersonal skills
Good knowledge of the industry

Technical Competencies
Sound knowledge of policies/lending frameworks followed for this product
Good knowledge of credit evaluation methods, tools and techniques
Sound understanding of regulatory guidelines on credit issued by RBI (local regulations in India) and MAS, and of local laws and regulations that impact businesses in general
Knowledge of various banking products and risks associated with them
Adequate local knowledge of properties for the cities in which he/she operates

Work Relationship
Good working relationship with the Consumer Banking relationship team, Consumer Operations, other product teams, RBI inspectors, regulators, and other external agencies.

Primary Location: India-Andhra Pradesh-Lakshmipuram, Guntur
Job: Risk Management
Schedule: Regular
Job Type: Full-time
Job Posting: Jun 12, 2025, 9:08:39 PM

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

We are seeking a highly experienced Azure Engineer with a strong foundation in Python scripting, test-driven development (TDD) using PyTest, and end-to-end cloud automation. A key requirement for this role is hands-on experience with Zerto, specifically in the context of cloud migrations and disaster recovery planning. The ideal candidate will also be well versed in Infrastructure as Code (IaC) using Terraform and Ansible, and have deep operational knowledge of Microsoft Azure services across compute, networking, containers, and monitoring.

Roles And Responsibilities
Azure Infrastructure Engineering: Architect, deploy, and manage robust Azure environments using services including:
Networking: VNet, Subnet, Private Endpoints, VPN Gateway, ExpressRoute, Route Tables, Azure Firewall
Compute & Containers: Azure VMs, Azure Kubernetes Service (AKS), Azure Container Apps, Azure Container Registry (ACR)
Platform Services: Azure Web Apps, Azure Functions, Azure Automation
Monitoring & Logging: Azure Monitor, Application Insights
Python Automation & Testing: Develop scalable, testable Python scripts for cloud automation and integrations. Implement test-driven development (TDD) using PyTest to validate automation workflows, infrastructure logic, and monitoring pipelines.
Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform and Ansible. Build reusable, parameterized modules aligned with best practices for repeatable, secure deployments.
Zerto Implementation & DR Strategy: Lead Zerto-based migration and disaster recovery implementations between on-premises and Azure. Optimize replication, orchestration, and failover strategies using Zerto in hybrid or multi-cloud environments.
CI/CD & DevOps Integration: Integrate IaC and automation into Git-based pipelines. Design and support efficient CI/CD workflows that promote velocity, compliance, and observability.

Mandatory Skills
Deep hands-on expertise with Microsoft Azure cloud services
Proficiency in Python with real-world experience in test-driven development using PyTest
Strong experience with Zerto for cloud migration, backup, and DR orchestration
Infrastructure automation using Terraform and Ansible
Solid understanding of Git, version control workflows, and DevOps tooling
Strong grasp of Azure networking, compute, and container-based architectures

Qualifications
Bachelor’s degree in Computer Science, Information Technology, or equivalent
Microsoft Azure certifications (e.g., AZ-104, AZ-400, AZ-305)
Familiarity with Agile methodologies and enterprise IT operations
Experience with cloud security, RBAC, policies, and compliance frameworks
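Because this role stresses test-driven development with PyTest, here is a minimal, hypothetical example of writing the tests first for a small tagging helper that provisioning automation might use; the function and tag policy are illustrative only, not part of the posting.

```python
# test_tagging.py - run with `pytest`
import pytest


# Hypothetical helper under test: merges mandatory tags into user-supplied tags.
def merge_required_tags(tags: dict, environment: str) -> dict:
    if environment not in {"dev", "test", "prod"}:
        raise ValueError(f"unknown environment: {environment}")
    required = {"environment": environment, "managed-by": "terraform"}
    return {**tags, **required}


def test_required_tags_are_added():
    result = merge_required_tags({"owner": "platform-team"}, "prod")
    assert result["environment"] == "prod"
    assert result["managed-by"] == "terraform"
    assert result["owner"] == "platform-team"


def test_required_tags_override_user_values():
    result = merge_required_tags({"environment": "sandbox"}, "dev")
    assert result["environment"] == "dev"


def test_unknown_environment_is_rejected():
    with pytest.raises(ValueError):
        merge_required_tags({}, "staging")
```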

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Summary Position Summary Job title: Data Analytics & Visualization- Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, implement & support risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. As part of Digital Internal Audit team, you will be part of our USI Internal Audit practice and will be responsible for scaling digital capabilities for our IA clients. Responsibilities will include helping our clients adopt digital through various stages of their Internal Audit lifecycle, from planning till reporting. Work you’ll do The key job responsibilities will be to: Analyze client requirements, perform gap analysis, design and develop digital solutions using data analytics tools and technologies to solve client needs Utilize critical and analytical thinking and understanding of business issues to solve complex technical and business-related problems Design and develop ETL processes, dynamic and customizable visualization dashboards to suit client requirements on incorporating digital into their day-to-day processes Execute on data management and data analytics engagements to help clients monitor, mitigate, and sustain risks Work with team members to build digital solutions and present them to senior management and/or clients Review deliverables for accuracy and quality Interact with clients and engagement teams to coordinate project work and ensure client satisfaction Provide input with respect to firm's internal technology initiatives and investments Manage personal and professional development including expanding consulting skills, expanding depth and breadth of technical knowledge related to data analytics and visualization Build business and industry knowledge in one or more industries and build team capability in technical / business / industry area Actively participate in practice development activities, such as new project proposals and firm eminence activities. 
Required Skills 3-5 Years of working experience in data analytics, automation and visualization with expertise in performing ETL operations and competency in a variety of analytics tools and programming languages, such as, but not limited to: Programming/ETL: SQL, Base SAS, Python, R Data Visualization: Tableau, Power BI, QlikView Automation/RPA: UiPath, Power Automate, BluePrism Data Wrangling: Alteryx, Trifacta Good knowledge in data modelling, data ETL, data quality assessment and data profiling is a must Strong data warehousing skills with ability to design robust data models and perform data analytic operations Demonstrate analytical and critical thinking, problem solving and proven troubleshooting skills Experience in data architecture, designing and developing data analytics solutions on cloud platforms such as AWS, Google Cloud, Azure is a plus Understanding of how Internal Audit function works with specific focus on knowledge of Business Processes (Accounts Payable, Accounts Receivable, Procurement Card, Travel and Entertainment, General Ledger, Payroll) and Internal Audit controls (Common BP controls and ITGC Controls such as Terminations, Change Management, User Access Provisioning etc.) Knowledge in other areas such as Cyber, Anti-Money laundering, Fraud, Forensic, Cyber Security or Accounting is a plus Strong communication and interpersonal skills, with the ability to interact at all levels of the organization. Proficient in Microsoft tools such as Excel and Powerpoint, including the ability to create visually compelling and professional presentations, and the ability to effectively present findings /information to diverse audience Preferred Skills Certification in Cloud (AWS/ Azure / GCP) Data analytics or AI/ML Data Analytics related certifications Qualification B. Tech/B.E. and/or MBA Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304568 Show more Show less
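To give a flavour of the ETL and data-profiling work described above, here is a short pandas sketch that profiles a hypothetical accounts-payable extract for potential duplicate invoices; the file path and column names are assumptions for illustration.

```python
import pandas as pd

# Hypothetical AP extract; column names are assumptions for illustration.
df = pd.read_csv("ap_invoices.csv", parse_dates=["invoice_date"])

# Basic profiling: row count, null rates, and spend by vendor.
print("Rows:", len(df))
print(df.isna().mean().round(3))                       # null rate per column
print(df.groupby("vendor_id")["amount"].sum().nlargest(10))

# Simple duplicate-payment test: same vendor, invoice number and amount.
dupes = df[df.duplicated(subset=["vendor_id", "invoice_number", "amount"], keep=False)]
dupes.sort_values(["vendor_id", "invoice_number"]).to_csv(
    "potential_duplicates.csv", index=False
)
print("Potential duplicate invoice lines:", len(dupes))
```

In a real engagement the same logic would typically be parameterised and scheduled as part of a continuous-monitoring pipeline.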

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers Your Key Responsibilities Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements. Skills And Attributes For Success Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates. Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills. To qualify for the role, you must have 7+ years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate). 
Technologies and Tools Must haves Cloud Platforms: AWS, Azure CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline Infrastructure as Code: Terraform Containerization: Kubernetes (EKS/AKS), Docker, Helm Logging & Monitoring: AWS CloudWatch, Azure Monitor Configuration & Automation: Ansible, Bash Incident & ITSM: ServiceNow or equivalent Certification: AWS and Azure relevant certifications Good to have Cloud Infrastructure: CloudFormation, ARM Templates Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure) Scripting: Python/Bash Observability: OpenTelemetry, Datadog, Splunk Compliance: AWS Well-Architected Framework, Azure Security Center What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
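To illustrate the load-balancer troubleshooting mentioned in this posting, here is a minimal boto3 sketch that reports unhealthy targets behind an Application Load Balancer target group; the target group ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-south-1")

# Placeholder ARN; substitute a real target group.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:ap-south-1:123456789012:"
    "targetgroup/example-tg/0123456789abcdef"
)

health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    state = desc["TargetHealth"]["State"]
    if state != "healthy":
        target = desc["Target"]["Id"]
        reason = desc["TargetHealth"].get("Reason", "n/a")
        print(f"Target {target} is {state} (reason: {reason})")
```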

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

About the Open Position
Join us as a Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as a Cloud Engineer you should have experience with:
Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, access management
Deep expertise in one or more cloud platforms: AWS, Azure, GCP
Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
Proficiency in scripting languages (Python, Bash, PowerShell)
Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML
Strong understanding of DevOps principles applied to ML workflows

Key Responsibilities may include:
Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
Ensure version control, repeatability, and compliance across all infrastructure components.
Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.

Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
Understanding of model interpretability, responsible AI, and governance.
Contributions to open-source MLOps tools or communities.
Strong leadership, communication, and cross-functional collaboration skills.
Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
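As a small illustration of the MLOps tooling named above, the sketch below logs parameters and a metric to MLflow; the tracking URI, experiment name and metric value are hypothetical.

```python
import mlflow

# Hypothetical tracking server; omit set_tracking_uri to log to the local ./mlruns directory.
mlflow.set_tracking_uri("http://mlflow.internal.example.com:5000")
mlflow.set_experiment("credit-risk-scoring")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 6)

    # In a real pipeline this value would come from model evaluation.
    mlflow.log_metric("auc", 0.87)

    # Artifacts (plots, model files) can be attached to the same run, e.g.:
    # mlflow.log_artifact("roc_curve.png")
```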

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Designation: Cloud Administrator

Job Description: The Cloud Administrator (AWS) is responsible for installing, supporting and maintaining cloud / on-premise server infrastructure, and for planning for and responding to the performance and availability of services. The candidate should also have working knowledge of Kubernetes; the responsibility includes handling Kubernetes clusters of Linux on AWS. Other duties may include participating in calls, performing quality audits, building the knowledge database based on unique incidents, client engagement and imparting training to the team. A systems administrator (Cloud) must demonstrate a blend of technical and interpersonal skills.

Technical Exposure: Duties and Responsibilities
Answer technical queries (both initial and follow-up) via phone, the ticketing system, email or IM chat, on a rotational basis
Log all issues and resolutions, while working closely with senior desktop engineers and IT management on problem resolution
Communicate with users, explain issues and resolutions, update activity or train on new equipment or software
Perform Linux server administration and configuration, including software upgrades and patches
Maintain system security
Install, configure and fine-tune cloud infrastructure
Install and configure virtual cloud instances
Perform on-premise to cloud migration
Monitor performance and maintain systems according to requirements
Troubleshoot incidents, issues and outages
Cluster creation, networking and service account management in Kubernetes
Node and pod allocation (taints, tolerations and labels)
Role-Based Access Control in Kubernetes, Helm charts and HPA
Work on scheduled tasks / activities
Ensure security through access controls, backups and firewalls
Support cloud servers, including security configuration, patching and troubleshooting
Upgrade systems with new releases and models
Backup monitoring and reporting
Develop expertise to train staff on new technologies
Build an internal wiki with technical documentation, manuals and IT policies
Provide on-call, high-priority 24/7 technical support as necessary
Research questions utilizing a variety of resources, including the KB / Standard Operating Procedures, the senior-level desktop support group, supervisor or management
Assist in maintaining an inventory of IT hardware and software assets
Provide users with step-by-step instructions
Escalate IT issues to the next level where necessary and keep the user informed
Stay updated with system information, changes and updates
Adhere to the client's compliance and governance standards and report any non-compliance to the manager
Contribute to the technical knowledge base by adding or editing knowledge articles for consistency and sustainability
Participate and contribute in IT team meetings
Foster professional relationships with all colleagues by listening, understanding and responding to their needs
Promote a positive customer service attitude among peers

Skills And Experience
2 years of international experience in configuration, management and automation of cloud environments (AWS / Azure)
Additional 3 years of experience with Linux
Experience in configuration/management of Elastic Load Balancers, auto-scaling, Virtual Private Cloud, routing, cloud databases, IAM, ACM and SSM
Should have knowledge of virtualization administration/configuration, including HA/DR
Should apply networking principles: TCP/IP, routers, firewalls, switches, load balancers, VPNs
Managing the full cloud lifecycle: provisioning, automation and security
Scripting for administration and automation
Setting up and administering multi-tier computer system environments
Configure and fine-tune cloud infrastructure systems
Monitor availability and performance
Monitor billing and apply cost-optimization strategies
Manage disaster recovery and create backups
Maintain data integrity and access control
Debugging Kubernetes failures (pod logs, HPA errors, load balancer errors, deployment failures, updating Deployment/Service/HPA)
kubectl events and CLI
Understanding of ReplicaSets, DaemonSets and persistent volumes
Should follow the change management process
Willing to take ownership of tasks through completion
Outstanding analytical and technical problem-solving skills
Excellent interpersonal and communication skills (verbal and written)
Excellent organizational, time-management and prioritization skills
Should possess 3 years of experience supporting Linux servers in a cloud environment
Ability and eagerness to learn new products and technologies
Ability to follow established policies and procedures
Good written and verbal communication skills
Ability to multi-task efficiently

Qualifications And Certifications
Bachelor's degree in Computer Science, Information Technology or a related discipline
AWS Cloud Practitioner
AWS Solutions Architect Associate
Red Hat Certified System Administrator / Red Hat Certified Engineer (RHEL 6 or onwards)
ITIL knowledge

Experience: 3 - 5 years
Salary: 40 - 60 k/month
Qualification: Bachelor's degree in IT or similar
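For the Kubernetes debugging duties above, here is a minimal sketch with the official Python client that lists pods not in a Running or Succeeded phase; it assumes a kubeconfig is available locally (for example, for an EKS cluster).

```python
from kubernetes import client, config

# Loads ~/.kube/config; inside a cluster use config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
        # Container-level detail often explains the failure (e.g. CrashLoopBackOff).
        for cs in pod.status.container_statuses or []:
            waiting = cs.state.waiting
            if waiting:
                print(f"  {cs.name}: {waiting.reason}")
```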

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management. Key Responsibilities Infrastructure Management Design and manage Azure-based infrastructure for scalable and resilient applications. Implement and manage Azure Container Apps to support microservices-based architecture. CI/CD Pipelines Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools. Automate deployment workflows to ensure quick and reliable application delivery. Version Control and Collaboration Manage GitHub repositories, branching strategies, and pull request workflows. Ensure repository compliance and enforce best practices for source control. Platform Automation Develop scripts and tooling to automate repetitive tasks and improve efficiency. Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning. Monitoring and Optimization Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights. Analyze performance metrics and implement optimizations for cost and efficiency improvements. Collaboration and Support Work closely with development, DevOps, and QA teams to streamline deployment processes. Troubleshoot and resolve issues in production and non-production environments. GitHub Management Manage GitHub repositories, including permissions, branch policies, and pull request workflows. Implement GitHub Actions for automated testing, builds, and deployments. Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot). Design and implement branching strategies to support collaborative software development. Maintain GitHub templates for issues, pull requests, and contributing guidelines. Monitor repository usage, optimize workflows, and ensure scalability of GitHub services. Operational Support Maintain pipeline health and resolve incidents related to deployment and infrastructure. Address defects, validate certificates, and ensure platform consistency. Resolve issues with offline services, manage private runners, and apply security patches. Monitor page performance using tools like Lighthouse. Manage server maintenance, repository infrastructure, and access control. Pipeline Development Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting. Implement branching and versioning management strategies. Identify pipeline failures and develop automated recovery mechanisms. Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris). Testing Integration Implement automated testing, feedback loops, and quality gates. Manage SonarQube configurations, rulesets, and runner maintenance. Maintain SonarQube EE deployment in Azure Container Apps. Configure and integrate security tools like Dependabot and Snyk with Jira. Work Collaboration Integration Integrate JIRA for automatic ticket generation, story validation, and release management. Configure Teams for API management, channels, and chat management. Set up email alerting mechanisms. Support IFS/CR process integration. 
Required Skills & Qualifications Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights). CI/CD Tools: GitHub Actions, Terraform, Bicep. Version Control: GitHub repository management, branching strategies, pull request workflows. Security & Compliance: GitHub Advanced Security, Dependabot, Snyk. Automation & Scripting: Terraform, Bicep, Shell scripting. Monitoring & Performance: Azure Monitor, Lighthouse. Testing & Quality Assurance: SonarQube, Automated testing. Collaboration Tools: JIRA, Teams, Email Alerting. Preferred Qualifications Experience in microservices architecture and containerized applications. Strong understanding of DevOps methodologies and best practices. Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues. Skills: devops,dependabot,azure container apps,github actions,application insights,azure,jira,pull request workflows,branching strategies,github advanced security,teams,email alerting,shell scripting,ci,github repository management,terraform,docker,cd,snyk,sonarqube,bicep,azure monitor Show more Show less
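As an illustration of the pipeline-health monitoring described above, here is a small sketch that lists recent failed GitHub Actions runs through the REST API; the owner/repo values are placeholders and a token with repository access is assumed.

```python
import os

import requests

OWNER, REPO = "example-org", "example-repo"   # placeholders
TOKEN = os.environ["GITHUB_TOKEN"]            # assumes a PAT or Actions token

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"status": "failure", "per_page": 10},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json()["workflow_runs"]:
    print(run["name"], run["head_branch"], run["conclusion"], run["html_url"])
```

A check like this can feed alerting or an automated recovery mechanism rather than being run by hand.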

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

We are seeking a highly skilled AWS Architect to design, implement, and optimize secure, scalable, and compliant AWS infrastructure solutions across multiple regions. The ideal candidate will have hands-on experience with global and region-specific AWS environments, deep knowledge of cloud architecture best practices, and a strong understanding of infrastructure automation. Design and implement AWS architectures that meet performance, security, scalability, and compliance requirements. Architect and configure VPCs, subnets, routing, firewalls, and networking components in multi-region environments. Manage traffic routing using AWS Route 53, including geolocation and latency-based routing configurations. Implement high availability (HA), disaster recovery (DR), and fault-tolerant designs for business-critical applications. Ensure alignment with security best practices and compliance policies, including IAM roles, encryption, and access controls. Collaborate with cross-functional teams to support cloud adoption, application migration, and system integration. Develop infrastructure-as-code (IaC) templates for consistent and automated provisioning. Show more Show less
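To make the geolocation-routing requirement concrete, here is a hedged boto3 sketch that upserts a Route 53 geolocation record for users in India; the hosted zone ID, domain name and target IP are placeholders, and a separate default record would normally cover all other locations.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"   # placeholder
RECORD_NAME = "app.example.com."           # placeholder domain

# Route Indian users to a Mumbai (ap-south-1) endpoint.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Geolocation routing for ap-south-1",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "SetIdentifier": "india-users",
                "GeoLocation": {"CountryCode": "IN"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```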

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Automation Networking Expertise in working with Ansible or demonstrated ability to quickly learn new programming languages Documented skills in translating automation concepts to consistent, reliable scripts using an industry accepted scripting language (Ansible/Python preferred) History of projects involving engineering design work, especially projects involving tailored solutions to meet specific infrastructure automation process needs Able to recognize and identify enterprise affecting issues to inform both automation design and operations Skilled in conceptualizing script structure and design to meet operational needs Extensive development background, particularly for supporting, maintaining and engineering infrastructure technology adjacent solutions Familiarity with SCM and SDLC principles Experience with version control best practices Familiar with both Waterfall and Agile methodologies Experience With Containers And Virtual Environments Preferred Comfortable identifying, verifying, vetting and testing hardware/software requirements for proposed automation and orchestrations solutions Ability to guide and empower infrastructure engineers seeking to explore, identify and expand automated solutions developed within their team Possesses a broad understanding of infrastructure devices, protocols, functionalities and operations. Experience working with infrastructure standards at an enterprise level Ability to lead confidentially and effectively when faced with new or unknown automation challenges Frequently anticipates problems in both the automation and infrastructure spaces and analyzes ways to mitigate the risk Familiarity with risk management and MRA processes preferred Ability to write and execute test plans for Automation testing Quantifies and articulates the business value and impact of advanced technical and non-technical information Comfortable with conducting and understanding data analysis Provides effective production support including accurate problem identification, ticket documentation and customer/vendor dialogue Documents, presents, and executes medium-to large-scale projects with minimal supervision Effectively able to communicate technical data both granularly and at a higher level Previous experience in tracking work efforts through Kanban, Jira, ADO or other story tracking development software Is often consulted by peers and seen as the informal leader on tactical problems Able to self-direct individual work effectively while maintaining cooperative attitude in conjunction with team and organization goals Comfortable managing context switching between multiple efforts Perform orchestration platform provisioning, administration, monitoring and troubleshooting Maintains current professional and technical skills Must Haves Network Automation, Python, Ansible, Restful API's, GIT, ADO Nice to Haves: ServiceNow, Routing/Switching, network fundamentals, IP addressing Remains aware of current and emerging technologies, how they integrate and drive value for Client Show more Show less
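As a sketch of the scripting expected in this role, the snippet below pulls interface state from a hypothetical REST-enabled network controller using `requests`; the URL, credentials and payload shape are assumptions, since real controllers and RESTCONF implementations differ.

```python
import requests

# Hypothetical controller endpoint and credentials.
CONTROLLER = "https://netctl.example.com/api/v1"
AUTH = ("automation", "SECRET")


def get_down_interfaces(device: str) -> list[str]:
    """Return names of interfaces reported as operationally down."""
    resp = requests.get(
        f"{CONTROLLER}/devices/{device}/interfaces",
        auth=AUTH,
        timeout=15,
    )
    resp.raise_for_status()
    return [
        intf["name"]
        for intf in resp.json()["interfaces"]     # assumed payload shape
        if intf.get("oper_status") != "up"
    ]


if __name__ == "__main__":
    print(get_down_interfaces("core-sw-01"))      # device name is hypothetical
```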

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management. Key Responsibilities Infrastructure Management Design and manage Azure-based infrastructure for scalable and resilient applications. Implement and manage Azure Container Apps to support microservices-based architecture. CI/CD Pipelines Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools. Automate deployment workflows to ensure quick and reliable application delivery. Version Control and Collaboration Manage GitHub repositories, branching strategies, and pull request workflows. Ensure repository compliance and enforce best practices for source control. Platform Automation Develop scripts and tooling to automate repetitive tasks and improve efficiency. Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning. Monitoring and Optimization Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights. Analyze performance metrics and implement optimizations for cost and efficiency improvements. Collaboration and Support Work closely with development, DevOps, and QA teams to streamline deployment processes. Troubleshoot and resolve issues in production and non-production environments. GitHub Management Manage GitHub repositories, including permissions, branch policies, and pull request workflows. Implement GitHub Actions for automated testing, builds, and deployments. Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot). Design and implement branching strategies to support collaborative software development. Maintain GitHub templates for issues, pull requests, and contributing guidelines. Monitor repository usage, optimize workflows, and ensure scalability of GitHub services. Operational Support Maintain pipeline health and resolve incidents related to deployment and infrastructure. Address defects, validate certificates, and ensure platform consistency. Resolve issues with offline services, manage private runners, and apply security patches. Monitor page performance using tools like Lighthouse. Manage server maintenance, repository infrastructure, and access control. Pipeline Development Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting. Implement branching and versioning management strategies. Identify pipeline failures and develop automated recovery mechanisms. Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris). Testing Integration Implement automated testing, feedback loops, and quality gates. Manage SonarQube configurations, rulesets, and runner maintenance. Maintain SonarQube EE deployment in Azure Container Apps. Configure and integrate security tools like Dependabot and Snyk with Jira. Work Collaboration Integration Integrate JIRA for automatic ticket generation, story validation, and release management. Configure Teams for API management, channels, and chat management. Set up email alerting mechanisms. Support IFS/CR process integration. 
Required Skills & Qualifications Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights). CI/CD Tools: GitHub Actions, Terraform, Bicep. Version Control: GitHub repository management, branching strategies, pull request workflows. Security & Compliance: GitHub Advanced Security, Dependabot, Snyk. Automation & Scripting: Terraform, Bicep, Shell scripting. Monitoring & Performance: Azure Monitor, Lighthouse. Testing & Quality Assurance: SonarQube, Automated testing. Collaboration Tools: JIRA, Teams, Email Alerting. Preferred Qualifications Experience in microservices architecture and containerized applications. Strong understanding of DevOps methodologies and best practices. Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues. Skills: devops,dependabot,azure container apps,github actions,application insights,azure,jira,pull request workflows,branching strategies,github advanced security,teams,email alerting,shell scripting,ci,github repository management,terraform,docker,cd,snyk,sonarqube,bicep,azure monitor Show more Show less
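To illustrate the SonarQube integration work described above, here is a small sketch that polls the quality-gate status for a project through the SonarQube Web API; the server URL, token and project key are placeholders.

```python
import os

import requests

SONAR_URL = "https://sonarqube.example.com"   # placeholder
PROJECT_KEY = "mobile-app"                    # placeholder
TOKEN = os.environ["SONAR_TOKEN"]             # user token, passed as the basic-auth username

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),                         # SonarQube tokens use an empty password
    timeout=30,
)
resp.raise_for_status()

status = resp.json()["projectStatus"]["status"]   # e.g. "OK" or "ERROR"
print(f"Quality gate for {PROJECT_KEY}: {status}")
if status == "ERROR":
    raise SystemExit(1)   # fail the calling pipeline step
```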

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

An OpenStack Engineer is a SME level cloud infrastructure specialist responsible for designing, deploying, managing, and troubleshooting complex OpenStack cloud environments. Their role involves deep expertise in OpenStack components, virtualization technologies, and cloud infrastructure automation, ensuring high availability, performance, and security of private or public clouds. We are looking for seasoned operations professionals with a passion for running resilient, secure, and highly performance distributed systems. If you are passionate about the challenge of running infrastructure at a gigantic scale while delivering world-class service delivery for a hyperscaler, Airtel Cloud offers a unique opportunity. Come build and run the future with us! Key Responsibilities: Infrastructure Management Monitor and maintain OpenStack infrastructure including bare metal servers, virtual machines (VMs), and container environments. Assist in provisioning and managing compute resources using OpenStack Nova for virtual machines and bare metal provisioning tools. Support storage management using OpenStack Cinder (block storage), Swift (object storage), and Glance (image service). Maintain networking services with OpenStack Neutron, ensuring proper network connectivity and security. Experience in implementation and maintenance on Apache Tomcat, MySQL and JBoss web service environment OpenStack Component Operations Perform day-to-day administration and health checks of OpenStack core services: Nova: Manage compute instances lifecycle. Neutron: Oversee virtual networking, including subnets, routers, and security groups. Cinder: Handle block storage volumes attachment and detachment. Glance: Manage VM images and snapshots. Swift: Monitor object storage availability and performance. Horizon: Support users with the OpenStack dashboard interface. Ceilometer: Collect and report telemetry and usage data for monitoring. Heat: Assist in orchestration tasks for automated cloud resource deployment. Monitoring and Troubleshooting Monitor cloud infrastructure performance and resource utilization using OpenStack telemetry (Ceilometer). Identify and escalate issues related to compute, storage, or network failures. Provide Tier 1 support by resolving basic incidents and service requests. Collaborate with senior engineers to troubleshoot complex problems and implement fixes. Automation and Scripting Support Assist in developing and maintaining automation scripts for routine cloud management tasks. Support configuration management tools like Ansible, Puppet, or Chef for deployment consistency. Basic understanding of cloud software deployments, with a focus on continuous integration and deployment using GitHub, Jenkins, and Maven Basic knowledge in object Python and C++ object oriented programming (OOP) concepts Help automate provisioning and scaling of resources, including containers and virtual machines. Incident & Problem Management Log and track incidents and service requests using ITSM tools, ensuring timely resolution and communication with stakeholders Document troubleshooting steps and resolutions to contribute to the knowledge base. Participate in root cause analysis for recurring incidents, providing input to senior engineers Change and Configuration Management Follow established change management procedures for implementing approved changes to server configurations, ensuring minimal disruption to services. Document all configuration changes and updates accurately for compliance and audit purposes. 
Security & Compliance
Apply basic security best practices, including regular updates, antivirus checks, and adherence to company security policies.
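For the monitoring and health-check responsibilities listed above, the following sketch (same assumptions: openstacksdk installed, a hypothetical airtel-dev cloud entry) lists Nova services and Neutron agents and flags anything that is down, the kind of quick control-plane check an L1 engineer might run before escalating. Attribute names can vary slightly between SDK releases.

```python
# Quick control-plane health-check sketch (pip install openstacksdk).
# Assumes the same hypothetical "airtel-dev" clouds.yaml entry as above;
# attribute names may differ slightly between openstacksdk releases.
import openstack

conn = openstack.connect(cloud="airtel-dev")

# Nova: each service reports state ("up"/"down") and status ("enabled"/"disabled").
print("== Compute services ==")
for svc in conn.compute.services():
    flag = "" if svc.state == "up" else "  <-- investigate"
    print(f"{svc.binary:<22} {svc.host:<20} {svc.state}/{svc.status}{flag}")

# Neutron: agents expose an is_alive boolean.
print("== Network agents ==")
for agent in conn.network.agents():
    flag = "" if agent.is_alive else "  <-- investigate"
    print(f"{agent.binary:<22} {agent.host:<20} alive={agent.is_alive}{flag}")
```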

Posted 1 week ago

Apply

Exploring Provisioning Jobs in India

The provisioning job market in India is currently experiencing growth, with many companies seeking skilled professionals to handle provisioning tasks. Provisioning roles involve setting up, configuring, and managing resources such as servers, networks, and software applications to ensure they are available and functioning properly. If you are considering a career in provisioning, here is some important information to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT industry and have a high demand for provisioning professionals.

Average Salary Range

The average salary range for provisioning professionals in India varies by experience level. Entry-level professionals can expect to earn INR 3-5 lakhs per annum, while experienced professionals with five or more years of experience can earn INR 8-15 lakhs per annum.

Career Path

In the field of provisioning, a typical career path may include roles such as:

  • Provisioning Engineer
  • Senior Provisioning Engineer
  • Provisioning Manager
  • Director of Provisioning

As professionals gain experience and expertise, they may progress to higher-level roles with more responsibilities and leadership opportunities.

Related Skills

In addition to provisioning skills, professionals in this field may also be expected to have knowledge of:

  • Networking fundamentals
  • Scripting languages (e.g., Python, Bash)
  • Cloud computing platforms (e.g., AWS, Azure)
  • Configuration management tools (e.g., Ansible, Chef)

Having a strong foundation in these related skills can enhance a professional's ability to excel in provisioning roles.
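As a small illustration of how the scripting skills above are applied to provisioning work, here is a hedged Python sketch that shells out to the openstack CLI (from python-openstackclient) and reports any server that is not in the ACTIVE state. It assumes the CLI is installed and credentials have already been sourced; it is an example pattern, not a production tool.

```python
# Sketch: a simple provisioning health report driven by the OpenStack CLI.
# Assumes python-openstackclient is installed and credentials are already
# exported (e.g. via an openrc file); written for illustration only.
import json
import subprocess


def list_servers() -> list[dict]:
    """Return servers as dicts using the CLI's JSON output format."""
    out = subprocess.run(
        ["openstack", "server", "list", "-f", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


def main() -> None:
    problems = [s for s in list_servers() if s.get("Status") != "ACTIVE"]
    if not problems:
        print("All servers ACTIVE.")
        return
    for s in problems:
        print(f"Needs attention: {s.get('Name')} -> {s.get('Status')}")


if __name__ == "__main__":
    main()
```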

Interview Questions

  • What is provisioning and why is it important? (basic)
  • Can you explain the difference between hard provisioning and soft provisioning? (medium)
  • How would you troubleshoot a provisioning issue with a server? (medium)
  • What experience do you have with configuration management tools? (basic)
  • Can you describe a challenging provisioning project you worked on and how you overcame obstacles? (advanced)
  • How do you stay updated on the latest trends and technologies in provisioning? (basic)
  • What steps would you take to automate a provisioning process? (medium)
  • Have you worked with any cloud-based provisioning tools? If so, which ones? (medium)
  • How do you ensure security in provisioning processes? (medium)
  • Can you walk us through your experience with network provisioning? (basic)
  • Explain the concept of infrastructure as code and its relevance to provisioning (see the sketch after this list). (advanced)
  • How do you prioritize multiple provisioning tasks with tight deadlines? (medium)
  • Have you ever had to rollback a provisioning change? If so, how did you handle it? (medium)
  • What monitoring tools do you use to track provisioning performance? (basic)
  • Describe a time when you had to collaborate with other teams for a provisioning project. (medium)
  • How do you handle scalability in provisioning projects? (medium)
  • Can you explain the concept of self-service provisioning? (medium)
  • What are the key metrics you track to measure the success of a provisioning process? (medium)
  • How do you ensure compliance with industry regulations during provisioning activities? (medium)
  • Have you worked on any disaster recovery provisioning plans? If so, describe your experience. (advanced)
  • What role does documentation play in provisioning tasks? (basic)
  • How do you handle conflicts or disagreements with team members during a provisioning project? (medium)
  • Can you provide an example of a successful automation project you worked on related to provisioning? (advanced)
  • How do you approach capacity planning in provisioning projects? (medium)
  • What are your thoughts on the future of provisioning technology and its impact on businesses? (advanced)
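
For the infrastructure-as-code question flagged above, a toy Python sketch can make the concept concrete: the desired servers are declared as data (which would normally live in version control), and a reconciliation step creates only what is missing, so re-running it is safe. The resource names and the use of the openstack CLI here are illustrative assumptions; real teams would typically reach for Heat, Terraform, or Ansible.

```python
# Toy infrastructure-as-code sketch: declare desired servers as data and
# reconcile idempotently. Names and the use of the OpenStack CLI are
# illustrative assumptions; real deployments would use Heat, Terraform, etc.
import json
import subprocess

# "Desired state" - in practice this would live in version control.
DESIRED_SERVERS = {
    "web-01": {"image": "ubuntu-22.04", "flavor": "m1.small", "network": "tenant-net"},
    "web-02": {"image": "ubuntu-22.04", "flavor": "m1.small", "network": "tenant-net"},
}


def existing_server_names() -> set[str]:
    """Names of servers that already exist, via the CLI's JSON output."""
    out = subprocess.run(
        ["openstack", "server", "list", "-f", "json"],
        check=True, capture_output=True, text=True,
    )
    return {s["Name"] for s in json.loads(out.stdout)}


def reconcile() -> None:
    present = existing_server_names()
    for name, spec in DESIRED_SERVERS.items():
        if name in present:
            print(f"{name}: already exists, nothing to do")
            continue
        print(f"{name}: creating")
        subprocess.run(
            ["openstack", "server", "create",
             "--image", spec["image"], "--flavor", spec["flavor"],
             "--network", spec["network"], "--wait", name],
            check=True,
        )


if __name__ == "__main__":
    reconcile()
```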

Closing Remark

As you prepare for interviews and job applications in the provisioning field, remember to showcase your technical skills, problem-solving abilities, and passion for staying updated on industry trends. With dedication and confidence, you can excel in provisioning roles and contribute to the growing IT industry in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies