
3291 IAM Jobs - Page 24

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 years

6 - 9 Lacs

Noida

On-site

Source: Glassdoor

NeoXam (NeoXam Company Profile) is a leading financial software company delivering cutting-edge solutions for data management, portfolio management, and regulatory compliance. With a strong global presence, NeoXam serves over 150 customers in 25 countries, processing more than €25 trillion worth of assets daily and supporting over 10,000 users. Committed to client success, NeoXam provides reliable and scalable solutions that help buy- and sell-side players navigate the evolving financial landscape. Backed by 800+ employees, NeoXam is headquartered in Paris with 20 offices worldwide.

About the Role: We are seeking a highly motivated DevOps Engineer to join our team and play a pivotal role in building and maintaining our cloud infrastructure. The ideal candidate will have a strong understanding of DevOps principles and practices, with a focus on AWS, Kubernetes, CI/CD pipelines, Docker, and Terraform.

Responsibilities:
DevOps & Java Backend Engineering: Hands-on experience developing backend APIs using Java (Spring Boot) and Django, building microservices, and deploying secure, production-grade systems.
Cloud Platforms: Design, build, and maintain our cloud infrastructure, primarily on AWS.
Infrastructure as Code (IaC): Develop and manage IaC solutions using tools like Terraform to provision and configure cloud resources on AWS.
Containerization: Implement and manage Docker containers and Kubernetes clusters for efficient application deployment and scaling.
CI/CD Pipelines: Develop and maintain automated CI/CD pipelines using tools like Jenkins, Bitbucket CI/CD, or ArgoCD to streamline software delivery.
Automation: Automate infrastructure provisioning, configuration management, and application deployment using tools like Terraform and Ansible.
Monitoring and Troubleshooting: Implement robust monitoring and alerting systems to proactively identify and resolve issues.
Collaboration: Work closely with development teams to understand their needs and provide solutions that align with business objectives.
Security: Ensure compliance with security best practices and implement measures to protect our infrastructure and applications.

Technical Skills:
Programming Languages: Java, Python, JavaScript, Shell Scripting
Frameworks & Tools: Django, Spring Boot, REST APIs, Git, Jenkins
Cloud & DevOps: Oracle Cloud Infrastructure (OCI), AWS, Terraform, Kubernetes, Docker, Ansible
Databases: MySQL, PostgreSQL, Oracle DB, NoSQL (MongoDB)
Monitoring & Logging: Grafana, Prometheus, EFK Stack
Version Control & CI/CD: Git, GitHub, GitLab, Bitbucket, Jenkins, CI/CD Pipelines

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
4+ years of experience in DevOps or a similar role.
Strong proficiency in AWS services (EC2, S3, VPC, IAM, etc.).
Experience with Kubernetes and container orchestration.
Expertise in CI/CD pipelines and tools (Jenkins, Bitbucket CI/CD, ArgoCD).
Familiarity with Docker and containerization concepts.
Experience with configuration management and IaC tools (Terraform, CloudFormation).
Scripting skills (Python, Bash).
Understanding of networking and security concepts.

Bonus Points:
Experience with serverless computing platforms (AWS Lambda, AWS Fargate).
Knowledge of infrastructure as code (IaC) principles.
Experience maintaining SaaS products.
Certifications in AWS, Kubernetes, or DevOps.

Why Join Us:
Opportunity to work on cutting-edge technologies and projects.
Collaborative and supportive team environment.
Competitive compensation and benefits package.
Opportunities for professional growth and development.

If you are a passionate DevOps engineer looking to make a significant impact, we encourage you to apply.
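A minimal illustrative sketch of the kind of security/automation task this posting describes: a boto3 script that flags S3 buckets without default encryption and IAM users without MFA. The check logic is an assumption for illustration, not part of the posting.

```python
"""Minimal sketch: flag S3 buckets without default encryption and IAM users without MFA."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
iam = boto3.client("iam")

# S3: buckets that have no default server-side encryption configured
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_encryption(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"UNENCRYPTED BUCKET: {bucket['Name']}")

# IAM: users with no MFA device attached
for user in iam.list_users()["Users"]:
    mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
    if not mfa:
        print(f"NO MFA: {user['UserName']}")
```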

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

Software Engineering Advisor - HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary: Data engineer on the Data Integration team.

Job Description & Responsibilities:
Work with business and technical leadership to understand requirements.
Design to the requirements and document the designs.
Write product-grade, performant code for data extraction, transformation, and loading using Spark and PySpark.
Do data modeling as needed for the requirements.
Write performant queries using Teradata SQL, Hive SQL, and Spark SQL against Teradata and Hive.
Implement DevOps pipelines to deploy code artifacts onto the designated platforms/servers (AWS / Azure / GCP).
Troubleshoot issues, provide effective solutions, and monitor jobs in the production environment.
Participate in sprint planning sessions, refinement/story-grooming sessions, daily scrums, demos, and retrospectives.

Experience Required: Overall 8-10 years of experience.

Experience Desired:
Strong development experience in Spark, PySpark, shell scripting, and Teradata.
Strong experience writing complex and effective SQL (Teradata SQL, Hive SQL, and Spark SQL) and stored procedures.
Health care domain knowledge is a plus.

Primary Skills:
Excellent work experience with Databricks for Data Lake implementations.
Experience in Agile and working knowledge of DevOps tools (Git, Jenkins, Artifactory).
Experience in AWS (S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch) / GCP / Azure.
Databricks (Delta Lake, Notebooks, Pipelines, cluster management, Azure / AWS integration).

Additional Skills:
Experience with Jira and Confluence.
Exercises considerable creativity, foresight, and judgment in conceiving, planning, and delivering initiatives.

Location & Hours of Work: Hybrid, Hyderabad (11:30 AM - 8:30 PM)

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
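A minimal illustrative PySpark sketch of the extract-transform-load work this role describes, reading and writing Hive tables with Spark SQL. Table and column names (claims_raw, member_id, paid_amount) are assumptions for illustration only.

```python
"""Minimal PySpark ETL sketch: extract, transform, and load with Spark SQL."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims_etl")
    .enableHiveSupport()   # lets Spark SQL read and write Hive tables
    .getOrCreate()
)

# Extract: read a raw Hive table
claims = spark.table("staging.claims_raw")

# Transform: basic cleansing and typing
claims = (
    claims
    .where(F.col("paid_amount").isNotNull())
    .withColumn("paid_amount", F.col("paid_amount").cast("double"))
)
claims.createOrReplaceTempView("claims_clean")

# Load: aggregate with Spark SQL and write back to Hive
summary = spark.sql("""
    SELECT member_id, SUM(paid_amount) AS total_paid, COUNT(*) AS claim_count
    FROM claims_clean
    GROUP BY member_id
""")
summary.write.mode("overwrite").saveAsTable("analytics.member_claims_summary")
```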

Posted 5 days ago

Apply

5.0 - 8.0 years

0 Lacs

India

On-site

Source: LinkedIn

Responsibilities:
Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments.
Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness.
Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows.
Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment.
Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance.
Implement and maintain robust configuration management practices and infrastructure-as-code principles.
Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency.
Perform ongoing maintenance and upgrades (production and non-production).

Qualifications:
Experience: 5-8 years in DevOps or a similar role.
Cloud Expertise: Proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar.
CI/CD Tools: Hands-on experience with Jenkins pipelines (declarative and scripted).
Scripting Skills: Proficiency in either shell scripting or PowerShell.
Programming Knowledge: Familiarity with at least one programming language (e.g., Python, Java, or Go). Important: scripting/programming is integral to this role and will be a key focus in the interview process.
Version Control: Experience with Git and Git-based workflows.
Monitoring Tools: Familiarity with tools like CloudWatch, Prometheus, or similar.
Problem-solving: Strong analytical and troubleshooting skills in a fast-paced environment.
CDK knowledge in AWS DevOps.
Tools: Experience with Terraform and Kubernetes.

Posted 5 days ago

Apply

7.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
7 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 PM
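A minimal illustrative sketch of the backup-automation responsibility mentioned above: taking on-demand RDS snapshots for every instance in an account with boto3. The snapshot naming scheme is an assumption for illustration.

```python
"""Minimal sketch: take an on-demand snapshot of every RDS instance (naming is illustrative)."""
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")
stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")

for db in rds.describe_db_instances()["DBInstances"]:
    instance_id = db["DBInstanceIdentifier"]
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBInstanceIdentifier=instance_id,
        DBSnapshotIdentifier=snapshot_id,
    )
    print(f"Snapshot started: {snapshot_id}")
```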

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Job Title: AI Security Engineer

Position Overview: As an AI Security Engineer, you will be responsible for integrating AI and machine learning into our security and compliance processes. You will work closely with the Security, Compliance, and Infrastructure teams to design, develop, and implement AI-driven automation to enhance our security posture. Your efforts will help us proactively identify vulnerabilities, automate compliance enforcement, and improve threat detection and response capabilities.

Responsibilities:
AI-Driven Security Automation: Develop and implement AI-powered automation tools to detect, analyze, and mitigate security threats in real time.
Compliance Monitoring & Enforcement: Utilize AI/ML to automate compliance monitoring, ensuring adherence to regulatory and industry standards such as SOC 2, ISO 27001, and GDPR.
Threat Intelligence & Anomaly Detection: Leverage AI to analyze network and system activity, identifying abnormal behavior patterns and potential security breaches before they escalate.
Continuous Risk Assessment: Develop machine learning models that continuously assess security risks across cloud and on-premise environments, providing real-time insights and recommendations.
Identity and Access Management (IAM): Implement AI-based analytics to detect anomalous access patterns and enforce dynamic access control policies.
End-to-End Security Integration: Collaborate with Security, DevOps, and Compliance teams to integrate AI solutions into security monitoring, log analysis, and vulnerability management tools.
Self-Healing Security Systems: Design and implement AI-driven remediation mechanisms that can automatically patch vulnerabilities and mitigate security risks.
Data Protection & Encryption: Apply AI techniques to enhance data protection strategies, detecting unauthorized access and preventing data exfiltration.
Security Posture Optimization: Continuously evaluate and refine AI-driven security models to adapt to emerging threats and evolving compliance requirements.

Requirements:
5+ years of experience in Security Engineering, Compliance, or DevSecOps roles with a focus on automation.
Strong understanding of cybersecurity principles, compliance frameworks, and risk management.
Hands-on experience applying AI/ML techniques to security and compliance challenges.
Proficiency in Python, with experience developing security automation scripts.
Familiarity with cloud security best practices across AWS, GCP, or Azure.
Knowledge of AI/ML frameworks like TensorFlow, PyTorch, or scikit-learn.
Experience with infrastructure automation tools (Terraform, Ansible) for security enforcement.
Understanding of identity and access management, zero-trust security models, and behavioral analytics.
Familiarity with CI/CD security integrations and DevSecOps methodologies.

Preferred Qualifications:
Certifications in Security (CISSP, CEH, OSCP) or Cloud Security (AWS Security Specialty, GCP Professional Security Engineer).
Experience with AI-driven security platforms such as Darktrace, Vectra AI, or Exabeam.
Knowledge of cryptography, secure coding practices, and application security.
Hands-on experience implementing AI-enhanced threat detection systems.
Experience in security governance and compliance automation.

About Us: SingleStore (www.singlestore.com) delivers the cloud-native database with the speed and scale to power the world's data-intensive applications. With a distributed SQL database that introduces simplicity to your data architecture by unifying transactions and analytics, SingleStore empowers digital leaders to deliver exceptional, real-time data experiences to their customers. SingleStore is venture-backed and headquartered in San Francisco with offices in Sunnyvale, Raleigh, Seattle, Boston, London, Lisbon, Bangalore, Dublin and Kyiv. Consistent with our commitment to diversity & inclusion, we value individuals with the ability to work on diverse teams and with a diverse range of people.

What We Offer:
Medical insurance with family members covered
Death and accidental insurance coverage
Remote opportunity
One long weekend every month
Phone, internet & wellness allowance
Opportunity to work in a global team
Flexible working hours
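A minimal illustrative sketch of the anomalous-access-detection idea in this posting, using scikit-learn's Isolation Forest. The feature names and the CSV path are assumptions for illustration, not part of the posting.

```python
"""Minimal sketch: flag anomalous access patterns with an Isolation Forest."""
import pandas as pd
from sklearn.ensemble import IsolationForest

# One row per user per day, built upstream from access logs (illustrative schema)
events = pd.read_csv("access_features.csv")
features = ["login_hour", "failed_attempts", "distinct_ips", "bytes_downloaded"]

model = IsolationForest(contamination=0.01, random_state=42)
events["anomaly"] = model.fit_predict(events[features])  # -1 = anomalous, 1 = normal

suspicious = events[events["anomaly"] == -1]
print(suspicious[["user_id", *features]])
```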

Posted 5 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: SRE Engineer with GCP Cloud (immediate joiners only, no fixed notice period)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience: 7+ years

Job Overview: Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of the systems. The role incorporates aspects of software engineering and operations, using DevOps skills to come up with efficient ways of managing and operating applications. The role will require a high level of responsibility and accountability to deliver technical solutions.

Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services.

Experience Required: 6-10 years of SRE or infrastructure engineering experience in cloud-native environments.

Mandatory:
• Cloud: GCP (GKE, Load Balancing, VPN, IAM)
• Observability: Prometheus, Grafana, ELK, Datadog
• Containers & Orchestration: Kubernetes, Docker
• Incident Management: On-call, RCA, SLIs/SLOs
• IaC: Terraform, Helm
• Incident Tools: PagerDuty, OpsGenie

Nice to Have:
• GCP Monitoring, SkyWalking
• Service Mesh, API Gateway
• GCP Spanner, MongoDB (basic)

Scope:
• Drive operational excellence and platform resilience
• Reduce MTTR, increase service availability
• Own incident and RCA processes

Roles and Responsibilities:
• Define and measure Service Level Indicators (SLIs), Service Level Objectives (SLOs), and manage error budgets across services.
• Lead incident management for critical production issues; drive root cause analysis (RCA) and postmortems.
• Create and maintain runbooks and standard operating procedures for high-availability services.
• Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption.
• Coordinate cross-functional war-room sessions during major incidents and maintain response logs.
• Develop and improve automated system recovery, alert suppression, and escalation logic.
• Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture.
• Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems.
• Analyze performance metrics and conduct regular reliability reviews with engineering leads.
• Participate in capacity planning, failover testing, and resilience architecture reviews.
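A minimal illustrative sketch of the SLI/error-budget work described above: querying the Prometheus HTTP API and comparing the result against an SLO. The Prometheus URL and the http_requests_total metric are assumptions for illustration.

```python
"""Minimal sketch: compute an availability SLI and remaining error budget from Prometheus."""
import requests

PROMETHEUS = "http://prometheus.monitoring.svc:9090"  # illustrative endpoint
SLO = 0.999  # 99.9% availability target

# Fraction of non-5xx requests over the last 30 days (metric name is illustrative)
query = (
    'sum(rate(http_requests_total{status!~"5.."}[30d])) '
    "/ sum(rate(http_requests_total[30d]))"
)
resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()
sli = float(resp.json()["data"]["result"][0]["value"][1])

budget_remaining = 1 - (1 - sli) / (1 - SLO)
print(f"SLI: {sli:.5f}  error budget remaining: {budget_remaining:.1%}")
```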

Posted 5 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position: DevOps Engineer
Experience: 8+ years
Location: Hyderabad/Ahmedabad

Job Overview: Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of the systems. The role incorporates aspects of software engineering and operations, using DevOps skills to come up with efficient ways of managing and operating applications. The role will require a high level of responsibility and accountability to deliver technical solutions.

Summary: As a DevOps Engineer, you will support infrastructure provisioning, automation, and continuous deployment pipelines to streamline and scale our development lifecycle. You'll work closely with engineering teams to maintain a stable, high-performance CI/CD ecosystem and cloud infrastructure on GCP.

Experience Required: 4-6 years of hands-on DevOps experience with cloud and containerized deployments.

Mandatory:
• OS: Linux
• Cloud: GCP (VPC, Compute Engine, GKE, GCS, IAM)
• CI/CD: Jenkins, GitHub Actions, Bitbucket Pipelines
• Containers: Docker, Kubernetes
• IaC: Terraform, Helm
• Monitoring: Prometheus, Grafana
• Version Control: Git
• Security: Trivy, Vault, OWASP

Nice to Have:
• ELK Stack, Trivy, JFrog, Vault
• Basic scripting in Python or Bash
• Jira, Confluence

Scope:
• Implement and support CI/CD pipelines
• Maintain development, staging, and production environments
• Optimize resource utilization and infrastructure costs

Roles and Responsibilities:
• Assist in developing and maintaining CI/CD pipelines across various environments (dev, staging, prod) using Jenkins, GitHub Actions, or Bitbucket Pipelines.
• Collaborate with software developers to ensure proper configuration of build jobs, automated testing, and deployment scripts.
• Write and maintain scripts for infrastructure provisioning and automation using Terraform and Helm.
• Manage and troubleshoot containerized applications using Docker and Kubernetes on GCP.
• Monitor system health and performance using Prometheus and Grafana; raise alerts and participate in issue triage.
• Maintain secrets and configurations using Vault and KMS solutions under supervision.
• Participate in post-deployment verifications and rollout validation.
• Document configuration changes, CI/CD processes, and environment details in Confluence.
• Maintain Jira tickets related to DevOps issues and track resolutions effectively.
• Provide support in incident handling under guidance from senior team members.
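A minimal illustrative sketch of the Kubernetes troubleshooting responsibility above, using the official Kubernetes Python client to list pods that are unhealthy or restarting. The restart threshold is an assumption for illustration.

```python
"""Minimal sketch: report unhealthy pods and restart counts via the Kubernetes Python client."""
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if pod.status.phase not in ("Running", "Succeeded") or restarts > 3:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
              f"phase={pod.status.phase} restarts={restarts}")
```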

Posted 5 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.

TCS Hiring for Okta
Role: Okta Engineer / Developer / Admin / Specialist
Required Technical Skill Set: Okta, SSO, MFA, SCIM, OpenID, OAuth, SAML
Desired Experience Range: 4+ years
Joining Location: PAN India

We are currently planning a Walk-In Interview on 21st June 2025 at TCS Chennai.
Drive Date: 21st June 2025 (Saturday)
Venue: TCS Siruseri, ATL Building - 1/G1, SIPCOT IT Park, Navalur, Siruseri, Tamil Nadu 603103

Job Description:
Assist with managing policies, standards, and IT controls for IAM related to SSO/MFA.
Work with large business programs to provide IAM/SSO/MFA guidance and requirements and help develop solution patterns.
Expertise with Okta APIs and integrating applications and third-party services with Okta.
Expertise with the various OAuth2 flows supported in Okta, Custom Authorization Servers, SAML 2.0 federations, OpenID Connect, SCIM integrations, and legacy integrations.
Design and implement API, external IDP, directory, and database integrations and synchronization.
Experience in user onboarding, application onboarding, and user lifecycle management.
Act independently to establish and implement security and define service integrations with all other IT services such as build teams, asset management, and service and incident management.
Should be skilled in coordinating and communicating with other teams to manage dependencies on other teams and stakeholders.
Ability to work under fixed timelines and meet deadlines.
Work closely with Okta architects, Okta developers, and the customer's business and application development teams to understand the current landscape and use cases, gather requirements, and plan Okta setup and integration activities.
Communicate to clients and partners aspects of both the product and the implementation at the technical and/or functional level appropriate for the situation.
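A minimal illustrative sketch of one of the OAuth2 flows mentioned above: the client-credentials flow against an Okta custom authorization server. The org URL, client credentials, scope, and downstream API URL are placeholders; the "default" authorization server path follows the common Okta convention.

```python
"""Minimal sketch: OAuth2 client-credentials flow against an Okta custom authorization server."""
import requests

OKTA_ORG = "https://dev-000000.okta.com"                       # placeholder org URL
TOKEN_URL = f"{OKTA_ORG}/oauth2/default/v1/token"

resp = requests.post(
    TOKEN_URL,
    auth=("CLIENT_ID", "CLIENT_SECRET"),                       # service app credentials
    data={"grant_type": "client_credentials", "scope": "api.read"},
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Call a downstream resource server with the bearer token (placeholder URL)
api = requests.get(
    "https://api.example.internal/v1/users",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api.status_code)
```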

Posted 5 days ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Role: Senior Backend Python Developer
Experience: 7+ years total (AWS at least 2+ years)
Location: Pan India
Mandatory skill: Python development with AWS

Role Description: The role is part of the Senior Cloud Platform Engineer function, responsible for designing and developing solutions necessary for cloud adoption and automation, e.g., building libraries, patterns, standards, and governance, and deploying everything via code (IaC). This role also requires strong hands-on experience with cloud components and services, a development mindset, and a strong understanding of coding, CI/CD, SDLC, and Agile concepts and best practices.

Skills Required:
Strong knowledge and hands-on experience, preferably with AWS components and services such as Lambda, SQS, SNS, Step Functions, DynamoDB, IAM, S3, and API Gateway.
Strong development experience, preferably in AWS CDK (good to know) / Serverless Framework and Python programming.
ECR & ECS (good to know).
Ability to write Python unit test cases (pytest).
Individual contributor, able to lead solutions and mentor the team.

Reporting relationship: This role reports to the Cloud Program Manager.

Expectations:
Strong cloud ecosystem understanding
Strong development mindset
Best-practices adopter
Quick learner and troubleshooter
Team player and collaborator
Focused and speedy delivery

Education:
Graduate - Bachelor's degree (Engineering preferred)
AWS Developer Associate certification (good to have)
AWS Certified Architect (good to have)

If you are interested, please share your resume directly with Shipra.Sharma@ltimindtree.com
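A minimal illustrative sketch of the Lambda-plus-pytest skill this posting asks for: a small handler for an SQS-style event and a unit test for it. The message shape (order_id) and module layout are assumptions for illustration.

```python
"""Minimal sketch: an AWS Lambda handler for an SQS-triggered event plus a pytest unit test."""
import json


def lambda_handler(event, context):
    """Process the first SQS record and acknowledge the order it carries."""
    record = event["Records"][0]
    body = json.loads(record["body"])
    return {"statusCode": 200, "orderId": body["order_id"]}


# test_handler.py
def test_lambda_handler_returns_order_id():
    event = {"Records": [{"body": json.dumps({"order_id": "42"})}]}
    result = lambda_handler(event, None)
    assert result == {"statusCode": 200, "orderId": "42"}
```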

Posted 5 days ago

Apply

3.0 - 6.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Source: LinkedIn

Key Responsibilities:
Overall 3-6 years of experience in network security, with at least 3 years managing PIM/PAM solutions.
Proficiency with PIM management.
Experience working in Windows, Linux, and Unix environments.
Hands-on experience commissioning and implementing PIM/PAM solutions and integrating them with various management and authentication/authorization tools (email, AD, IAM, SIEM).
Experience automating processes using scripting and configuration/orchestration (SOAR) tools.
Experience managing policies and exceptions.
Experience with packet capture, analysis, and troubleshooting tools.
Product knowledge of PIM/PAM solutions.
Incident, problem, service request, change, configuration, and capacity management of the PIM/PAM setup.
Proactively utilize network monitoring tools to isolate events before service degradation occurs.
Support incident monitoring and incident analysis/response initiatives.
Coordinate with users to ensure timely and satisfactory resolution of trouble tickets, troubleshooting layers 1, 2, and 3 of the OSI model.
Troubleshoot the network, transport, session, presentation, and application layers.
Conduct daily performance checks on devices, periodic audits, and compliance checks.
Perform immediate troubleshooting as the situation dictates for any network outages reported by users, sensors, and/or operational personnel.
Implement and maintain network security policy, standards, and procedures.
Deploy and maintain access and security policies for PIM/PAM solutions.
Maintain service levels as well as oversight of the day-to-day configuration, administration, and monitoring of the network security infrastructure in a 24/7 environment.

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Syensqo is all about chemistry. We're not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet's beauty for the generations to come.

Job Overview and Responsibilities: This position will be based in Pune, India. As the GCP/Azure Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and Azure platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and Azure services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements.
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, GCP/Azure CloudFormation, and GCP/Azure CDK, ensuring efficient, repeatable, and consistent infrastructure deployments.
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations.
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity.
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members.
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production.
Keep up to date with new GCP/Azure services, features, and best practices, providing recommendations for process and architecture improvements.

Education and Experience:
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role.
Proven experience with GCP/Azure services, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation.
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, GCP/Azure CloudFormation, or GCP/Azure CDK.
Strong scripting skills in Python, Bash, or PowerShell for automation tasks.
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure.
Knowledge of networking fundamentals and experience with GCP/Azure VPC, security groups, VPN, and routing.
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk.
Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments.
Preferably with certifications such as GCP/Azure Certified DevOps Engineer, GCP/Azure Certified SysOps Administrator, or GCP/Azure Certified Solutions Architect.

Skills and Behavioral Competencies:
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills: English mandatory

What's in it for the candidate:
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us: Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 5 days ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

📈 Experience: 9+ Years
📍 Location: Pune
📢 Immediate to 15-day joiners are highly encouraged to apply!
🔧 Primary Skills: Data Engineer, Lead, Architect, Python, SQL, Apache Airflow, Apache Spark, AWS (S3, Lambda, Glue)

Job Overview: We are seeking a highly skilled Data Architect / Data Engineering Lead with over 9 years of experience to drive the architecture and execution of large-scale, cloud-native data solutions. This role demands deep expertise in Python, SQL, Apache Spark, and Apache Airflow, and extensive hands-on experience with AWS services. You will lead a team of engineers, design robust data platforms, and ensure scalable, secure, and high-performance data pipelines in a cloud-first environment.

Key Responsibilities:

Data Architecture & Strategy:
Architect end-to-end data platforms on AWS using services such as S3, Redshift, Glue, EMR, Athena, Lambda, and Step Functions.
Design scalable, secure, and reliable data pipelines and storage solutions.
Establish data modeling standards, metadata practices, and data governance frameworks.

Leadership & Collaboration:
Lead, mentor, and grow a team of data engineers, ensuring delivery of high-quality, well-documented code.
Collaborate with stakeholders across engineering, analytics, and product to align data initiatives with business objectives.
Champion best practices in data engineering, including reusability, scalability, and observability.

Pipeline & Platform Development:
Develop and maintain scalable ETL/ELT pipelines using Apache Airflow, Apache Spark, and AWS Glue.
Write high-performance data processing code using Python and SQL.
Manage data workflows and orchestrate complex dependencies using Airflow and AWS Step Functions.

Monitoring, Security & Optimization:
Ensure data reliability, accuracy, and security across all platforms.
Implement monitoring, logging, and alerting for data pipelines using AWS-native and third-party tools.
Optimize cost, performance, and scalability of data solutions on AWS.

Required Qualifications:
9+ years of experience in data engineering or related fields, with at least 2 years in a lead or architect role.
Proven experience with Python and SQL for large-scale data processing, Apache Spark for batch and streaming data, and Apache Airflow for workflow orchestration.
AWS cloud services, including but not limited to S3, Redshift, EMR, Glue, Athena, Lambda, IAM, and CloudWatch.
Strong understanding of data modeling, distributed systems, and modern data architecture patterns.
Excellent leadership, communication, and stakeholder management skills.

Preferred Qualifications:
Experience implementing data platforms using AWS Lakehouse architecture.
Familiarity with Docker, Kubernetes, or similar container/orchestration systems.
Knowledge of CI/CD and DevOps practices for data engineering.
Understanding of data privacy and compliance standards (GDPR, HIPAA, etc.).
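A minimal illustrative Airflow sketch of the orchestration work this role describes: two Python tasks with a dependency. The DAG id, task names, and callable bodies are assumptions for illustration.

```python
"""Minimal sketch of an Airflow DAG: two Python tasks with a dependency."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from S3")


def transform():
    print("run Spark/Glue transformation")


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```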

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Syensqo is all about chemistry. We're not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet's beauty for the generations to come.

Job Overview and Responsibilities: This position will be based in Pune, India. As the AWS Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and AWS platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and AWS services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements.
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, GCP/Azure CloudFormation, and GCP/Azure CDK, ensuring efficient, repeatable, and consistent infrastructure deployments.
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations.
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity.
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members.
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production.
Keep up to date with new GCP/Azure services, features, and best practices, providing recommendations for process and architecture improvements.

Education and Experience:
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role.
Proven experience with GCP/Azure services, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation.
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, GCP/Azure CloudFormation, or GCP/Azure CDK.
Strong scripting skills in Python, Bash, or PowerShell for automation tasks.
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure.
Knowledge of networking fundamentals and experience with GCP/Azure VPC, security groups, VPN, and routing.
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk.
Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments.
Preferably with certifications such as GCP/Azure Certified DevOps Engineer, GCP/Azure Certified SysOps Administrator, or GCP/Azure Certified Solutions Architect.

Skills and Behavioral Competencies:
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills: English mandatory

What's in it for the candidate:
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us: Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 5 days ago

Apply

8.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Join our Team

About this opportunity: Ericsson is seeking an experienced IAM Engineer with a strong background in Identity Management (IDM) and Public Key Infrastructure (PKI) to join our team in Noida or Bangalore. The ideal candidate will bring 8 to 15 years of hands-on experience in designing, implementing, and managing enterprise IAM solutions, ensuring secure and seamless identity lifecycle management and robust cryptographic security.

Key Responsibilities:
Design, implement, and support enterprise Identity and Access Management (IAM) solutions, focusing on IDM and PKI components.
Manage identity lifecycle processes including provisioning, de-provisioning, authentication, authorization, and access governance.
Deploy and maintain PKI infrastructure, including certificate lifecycle management, CA operations, and secure key management.
Integrate IDM and PKI systems with various applications, cloud platforms, and network services.
Collaborate with security teams to enforce access controls, policies, and compliance requirements.
Troubleshoot and resolve IAM- and PKI-related incidents and performance issues.
Develop automation scripts and tools to optimize IAM and PKI processes.
Participate in security audits and assessments related to IAM and PKI.
Document architecture, configurations, and operational procedures.
Stay updated with emerging IAM and PKI technologies, trends, and best practices.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field.
8 to 15 years of experience in Identity and Access Management engineering roles.
Strong hands-on experience with IDM platforms such as SailPoint, Oracle Identity Manager, IBM Security Identity Manager, or similar.
Expertise in PKI technologies including CA management, certificate issuance, revocation, and integration with applications.
Experience with directory services (LDAP, Active Directory) and federation technologies (SAML, OAuth, OpenID Connect).
Proficiency in scripting languages (Python, Shell, PowerShell) for automation.
Knowledge of security standards and compliance frameworks (ISO 27001, NIST, GDPR).
Strong troubleshooting, problem-solving, and communication skills.
Ability to work collaboratively in cross-functional and global teams.

Preferred Qualifications:
Certifications such as CISSP, CISA, CISM, or relevant IAM/PKI certifications.
Experience in telecom or large-scale enterprise environments.
Familiarity with cloud IAM solutions (Azure AD, AWS IAM) and hybrid identity architectures.
Exposure to DevOps practices and CI/CD pipelines related to IAM deployments.
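A minimal illustrative sketch of the certificate-lifecycle automation this role describes: generating a private key and a certificate signing request (CSR) with the `cryptography` package. The common name and SAN are placeholders, not part of the posting.

```python
"""Minimal sketch: generate a private key and a CSR to submit to a CA."""
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "service.example.internal"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("service.example.internal")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# PEM output to submit to the CA
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```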

Posted 5 days ago

Apply

8.0 - 15.0 years

0 Lacs

Maharashtra, India

On-site

Source: LinkedIn

Location: Pan India
Experience: 8-15 years
Mandatory skills: Amazon Connect, AWS Lambda

Service Cloud Voice Architect with 8-15 years' experience.
Experience with Amazon Connect (contact flows, queues, routing profiles).
Integration experience with Salesforce (Service Cloud Voice).
Hands-on with AWS services: AWS Lambda (for backend logic), Amazon Lex (voice/chat bots), Amazon Kinesis (for call analytics and streaming), Amazon S3 (storage), Amazon DynamoDB, CloudWatch, IAM.
Programming language: Java or Python.
Lead the end-to-end implementation and configuration of Amazon Connect.
Proficiency in programming/scripting: JavaScript, Node.js.
Familiarity with CRM integrations, especially Salesforce Service Cloud Voice.
Experience designing, developing, and implementing scalable and reliable cloud-based contact center solutions using Amazon Connect and AWS ecosystem services.
Design and deployment of contact center setups using CI/CD pipelines or Terraform.
Design and deployment experience with REST APIs and integration frameworks.
Experience with telephony concepts: SIP, DID, ACD, IVR, CTI.
AWS Certified Solutions Architect or Amazon Connect certification preferred.

If you are interested, please share your resume directly with Shipra.Sharma@ltimindtree.com
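A minimal illustrative sketch of the Lambda backend logic this role mentions: a function invoked from an Amazon Connect contact flow that looks up the caller in a hypothetical DynamoDB table and returns flat key/value pairs for the flow to branch on. The table and attribute names are assumptions for illustration.

```python
"""Minimal sketch: a Lambda function invoked from an Amazon Connect contact flow."""
import boto3

table = boto3.resource("dynamodb").Table("customer-profiles")  # hypothetical table


def lambda_handler(event, context):
    # Amazon Connect passes the caller's number under Details.ContactData.CustomerEndpoint
    phone = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]

    item = table.get_item(Key={"phone_number": phone}).get("Item")
    if not item:
        return {"customerFound": "false"}

    # Return only flat string values so the contact flow can read them as attributes
    return {
        "customerFound": "true",
        "firstName": str(item.get("first_name", "")),
        "preferredQueue": str(item.get("preferred_queue", "default")),
    }
```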

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

OSP India, now part of one.O: OSP India - Hyderabad Private Limited takes a significant step forward in its evolution by becoming part of Otto Group one.O, the new central, high-performance partner for strategy consulting and technology for the Otto Group. This strengthens our mission to deliver innovative IT solutions for commerce and logistics, combining experience, technology, and a global vision to lead the digital future.

OSP India's name transition: OSP India will adopt the name Otto Group one.O in the future, following our headquarters' rebranding. We want to assure you that this brand name change will not affect your role, job security, or our company culture. This transition aligns us with our global teams in Germany, Spain, and Taiwan and enhances our collaboration moving forward.

Job Overview: We are seeking a skilled and experienced Senior Python Developer with strong expertise in Google Cloud Platform (GCP), data analytics, and infrastructure automation using Terraform. The candidate will play a key role in building scalable data solutions, implementing secure cloud services, and driving actionable insights through analytics.

Requirements:
Design, develop, and maintain scalable Python applications for data processing and analytics.
Utilize GCP services including IAM, BigQuery, Cloud Storage, Pub/Sub, and others.
Build and maintain robust data pipelines and ETL workflows.
Collaborate with analytics teams to transform data into valuable business insights.
Develop and manage infrastructure using Terraform (IaC).
Ensure data quality, integrity, and governance across all systems.
Participate in architectural decisions, code reviews, and agile ceremonies.

Required Skills and Experience:
5 to 10 years of hands-on experience with Python development.
Proficient in the Pandas library for data manipulation and analysis.
Strong working knowledge of GCP (especially IAM, BigQuery, Cloud Functions, Cloud Storage).
Experience in designing and deploying data analytics and ETL pipelines.
Proficiency with Terraform and Infrastructure as Code practices.
Strong analytical skills and ability to derive insights from large datasets.
Understanding of best practices around data quality, security, and compliance.

Preferred Qualifications:
Experience with MongoDB is an advantage.
GCP certification (e.g., Professional Data Engineer or Cloud Architect).
Familiarity with Docker and Kubernetes.
Exposure to CI/CD tools and DevOps practices.
Strong collaboration and communication skills.

Benefits:
Flexible working hours: support for work-life balance through adaptable scheduling.
Comprehensive medical insurance: coverage for employees and families, ensuring access to quality healthcare.
Hybrid work model: a blend of in-office collaboration and remote work opportunities, with four days a week in the office.
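A minimal illustrative sketch of the Python/GCP analytics work this role describes: querying BigQuery into a Pandas DataFrame and deriving a rolling metric. Project, dataset, and column names are assumptions for illustration.

```python
"""Minimal sketch: query BigQuery into a Pandas DataFrame and derive a rolling metric."""
from google.cloud import bigquery

client = bigquery.Client(project="analytics-sandbox")   # illustrative project id

sql = """
    SELECT order_date, SUM(order_value) AS revenue
    FROM `analytics-sandbox.sales.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

df = client.query(sql).to_dataframe()          # requires pandas and db-dtypes installed
df["revenue_7d_avg"] = df["revenue"].rolling(7).mean()
print(df.tail())
```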

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

OSP India, now part of one.O: OSP India - Hyderabad Private Limited takes a significant step forward in its evolution by becoming part of Otto Group one.O, the new central, high-performance partner for strategy consulting and technology for the Otto Group. This strengthens our mission to deliver innovative IT solutions for commerce and logistics, combining experience, technology, and a global vision to lead the digital future.

OSP India's name transition: OSP India will adopt the name Otto Group one.O in the future, following our headquarters' rebranding. We want to assure you that this brand name change will not affect your role, job security, or our company culture. This transition aligns us with our global teams in Germany, Spain, and Taiwan and enhances our collaboration moving forward.

Job Overview: We are seeking a skilled and experienced Senior Python Developer with strong expertise in Google Cloud Platform (GCP), data analytics, and infrastructure automation using Terraform. The candidate will play a key role in building scalable data solutions, implementing secure cloud services, and driving actionable insights through analytics.

Requirements:
Design, develop, and maintain scalable Python applications for data processing and analytics.
Utilize GCP services including IAM, BigQuery, Cloud Storage, Pub/Sub, and others.
Build and maintain robust data pipelines and ETL workflows.
Collaborate with analytics teams to transform data into valuable business insights.
Develop and manage infrastructure using Terraform (IaC).
Ensure data quality, integrity, and governance across all systems.
Participate in architectural decisions, code reviews, and agile ceremonies.

Required Skills and Experience:
5 to 10 years of hands-on experience with Python development.
Proficient in the Pandas library for data manipulation and analysis.
Strong working knowledge of GCP (especially IAM, BigQuery, Cloud Functions, Cloud Storage).
Experience in designing and deploying data analytics and ETL pipelines.
Proficiency with Terraform and Infrastructure as Code practices.
Strong analytical skills and ability to derive insights from large datasets.
Understanding of best practices around data quality, security, and compliance.

Preferred Qualifications:
Experience with MongoDB is an advantage.
GCP certification (e.g., Professional Data Engineer or Cloud Architect).
Familiarity with Docker and Kubernetes.
Exposure to CI/CD tools and DevOps practices.
Strong collaboration and communication skills.

Benefits:
Flexible working hours: support for work-life balance through adaptable scheduling.
Comprehensive medical insurance: coverage for employees and families, ensuring access to quality healthcare.
Hybrid work model: a blend of in-office collaboration and remote work opportunities, with four days a week in the office.

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Linux TSR (Technical Support Resident Engineer)
Required Technical Skill Set: L2 Linux, L1-L2 DevOps, CloudOps
Desired Experience Range: 3 to 5 years
Location of Requirement: Chennai, Ahmedabad (Gandhi Nagar)

Desired Competencies (Technical/Behavioral Competency)

Must-Have (ideally at least 3 years of hands-on experience with Linux):
· Compute: Demonstrates a deep understanding of compute concepts, including virtualization, operating systems, system administration, performance, networking, and troubleshooting.
· Web Technologies: Experience with web technologies and protocols (HTTP/HTTPS, REST APIs, SSL/TLS) and experience troubleshooting web application issues.
· Operating Systems: Strong proficiency in Linux (e.g., RHEL, CentOS, Ubuntu) system administration. Experience with Windows Server administration is a plus.
· GCP Proficiency: Experience with GCP core services, particularly Compute Engine, Networking (VPC, Subnets, Firewalls, Load Balancing), Storage (Cloud Storage, Persistent Disk), and related services within the Compute domain.
· Security: Solid understanding of security best practices for securing compute resources in cloud environments, including IAM implementation, access control, vulnerability management, and protection against unauthorized access and data exfiltration.
· Monitoring and Logging: Experience with monitoring tools for troubleshooting and performance analysis.
· Scripting and Automation: Proficiency in scripting (e.g., Bash, Python, PowerShell) for system administration, automation, and API interaction. Experience with automation tools (e.g., Terraform, Ansible, Jenkins, Cloud Build) is essential.
· Networking: Solid understanding of networking concepts and protocols (TCP/IP, DNS, BGP, routing, load balancing) and experience troubleshooting network issues.
· Problem-Solving Skills: Excellent analytical and problem-solving skills with the ability to identify and resolve complex technical issues.
· Communication Skills: Strong communication and collaboration skills with the ability to effectively communicate technical concepts to both technical and non-technical audiences.

Good-to-Have:
● Minimum of 3+ years' experience implementing both on-premises and cloud-based infrastructure.
● Certifications related to Linux/CloudOps, preferably Associate Cloud Engineer.

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

We're looking for a Lead AI Engineer who thrives in fast-paced, early-stage product environments. You'll play a critical role in designing, implementing, and evolving our AI-powered infrastructure and DevOps automation platform. This includes leveraging leading LLMs like Claude or GPT to build agents for Infrastructure-as-Code (IaC), CI/CD orchestration, and Ops workflows, while also enabling fine-tuned models with a focus on security, cost optimization, and reliability. This is a hands-on, cross-functional role where you will collaborate closely with backend engineers, DevOps experts, and product leadership.

Responsibilities:
Architect and implement intelligent infrastructure agents using Claude/GPT APIs, with support for prompting, function calling, and fine-tuning.
Design and develop AI-enabled workflows for provisioning (e.g., EKS, VPC, IAM), CI/CD pipelines, and operational playbooks.
Build connectors to integrate LLM outputs with APIs, CLI tools, Terraform/OpenTofu providers, and observability systems.
Work with vector stores (e.g., Milvus, Pinecone) and embedding pipelines to enable contextual memory and tool selection for agents.
Collaborate on custom model tuning with secure guardrails and agent sandboxing mechanisms.
Develop secure and modular APIs to allow frontend orchestration and runtime control of AI agents.
Contribute to code reviews, shared libraries, and internal tooling to accelerate AI-driven development.

Requirements:
5+ years of experience in backend/AI software development with strong fundamentals in microservices and APIs.
Deep experience using OpenAI/Anthropic APIs (e.g., GPT-4, Claude), prompt engineering, and advanced function calling mechanisms.
Solid experience with Golang, Python, or Node.js for agent workflows, API services, or tooling.
Understanding of LLM deployment strategies, the prompt lifecycle, and AI-agent chaining concepts.
Experience integrating with REST/gRPC APIs, message queues, and asynchronous job processing.
Familiarity with vector stores (e.g., Milvus, FAISS), embeddings, and search optimization.
Hands-on experience with secure AI workflows: managing API tokens, prompt injection defense, and logging.
Exposure to containerization (Docker), GitOps (e.g., GitHub Actions), and infrastructure systems.

Nice-to-Have:
Familiarity with OpenTofu/Terraform, Kubernetes, and IaC principles.
Experience fine-tuning models using open-source tools (e.g., LoRA, Hugging Face Transformers).
Prior work on DevOps tooling, observability pipelines (OpenTelemetry, Prometheus), or infra agents.
Interest in building low-code interfaces for orchestrating AI-driven infrastructure tasks.

Why Join Ops0: You'll be part of a ground-floor team building a next-gen DevOps automation platform powered by AI agents. If you enjoy blending AI with infrastructure, architecting intelligent workflows, and creating real impact with hands-on ownership, this role is for you.
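A minimal illustrative sketch of the function-calling pattern this role centers on: exposing a single infrastructure action to an LLM via the OpenAI Python SDK (>=1.0). The tool name, its parameters, and the dispatch step are hypothetical; this is a sketch of the technique, not the platform's actual agent design.

```python
"""Minimal sketch: expose an infrastructure action to an LLM via function calling."""
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "create_eks_cluster",    # hypothetical internal tool
        "description": "Provision an EKS cluster through the platform's IaC pipeline.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "region": {"type": "string"},
                "node_count": {"type": "integer"},
            },
            "required": ["name", "region"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Spin up a 3-node EKS cluster called demo in ap-south-1."}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
# A real agent would dispatch to Terraform/OpenTofu or a CI job here instead of printing.
print(call.function.name, args)
```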

Posted 5 days ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Principal Security Engineer
Full Time | Team: Technology | Location: Gurgaon | Experience: 15 years | Posted 2 days ago

REA India is a part of REA Group Ltd. of Australia (ASX: REA) ("REA Group"). It is the country's leading full-stack real estate technology platform that owns Housing.com and PropTiger.com. In December 2020, REA Group acquired a controlling stake in REA India. REA Group, headquartered in Melbourne, Australia, is a multinational digital advertising business specialising in property. It operates Australia's leading residential and commercial property websites, realestate.com.au and realcommercial.com.au, and owns leading portals in Hong Kong (squarefoot.com.hk) and China (myfun.com). REA Group also holds a significant minority shareholding in Move, Inc., operator of realtor.com in the US, and the PropertyGuru Group, operator of leading property sites in Malaysia, Singapore, Thailand, Vietnam and Indonesia.

REA India is the only player in India that offers a full range of services in the real estate space, assisting consumers through their entire home-seeking journey all the way from initial search and discovery to financing to the final step of transaction closure. It offers advertising and listings products to real estate developers, agents & homeowners, exclusive sales and marketing solutions to builders, data and content services, and personalized search, virtual viewing, site visits, negotiations, home loans and post-sales services to consumers for both buying and renting. With a 1600+ strong team, REA India has a national presence with 25+ offices across India with its corporate office located in Gurugram, Haryana.

Housing.com: Founded in 2012 and acquired by REA India in 2017, Housing.com is India's most innovative real estate advertising platform for homeowners, landlords, developers, and real estate brokers. The company offers listings for new homes, resale homes, rentals, plots and co-living spaces in India. Backed by strong research and analytics, the company's experts provide comprehensive real estate services that cover advertising and marketing, sales solutions for real estate developers, personalized search, virtual viewing, AR&VR content, home loans, end-to-end transaction services, and post-transaction services to consumers for both buying and renting.

PropTiger.com: PropTiger.com is among India's leading digital real estate advisory firms offering a one-stop platform for buying residential real estate. Founded in 2011 with the goal to help people buy their dream homes, PropTiger.com leverages the power of information and the organisation's deep-rooted understanding of the real estate sector to bring simplicity, transparency and trust in the home buying process. PropTiger.com helps home-buyers through the entire home-buying process through a mix of technology-enabled tools as well as on-ground support. The company offers researched information about various localities and properties and provides guidance on matters pertaining to legal paperwork and loan assistance to successfully fulfil a transaction.

Our Vision: Changing the way India experiences property.

Our Mission: To be the first choice of our consumers and partners in discovering, renting, buying, selling, financing a home, and digitally enabling them throughout their journey. We do that with data, design, technology, and above all, the passion of our people while delivering value to our shareholders.
Our Culture
Culture forms the core of our foundation and our effort towards creating an engaging workplace, which has resulted in REA India being ranked 5th among the coveted list of India’s Best 100 Companies to Work For in 2024 by the Great Place to Work Institute®. REA India was also ranked among the Top 5 workplaces in 2023, the Top 25 workplaces in 2022 and 2021, and the Top 50 workplaces in 2019. In addition, REA India was recognized as Best Workplace™ in Building a Culture of Innovation by All in 2024 & 2023 and as one of India’s Best Workplaces™ in Retail (e-commerce category) for the fourth time in 2024. REA India was ranked 4th among Best Workplaces in Asia in 2023, having been ranked 55th in 2022 and 48th in 2021, apart from being recognized among the Top 50 Best Workplaces™ for Women in India in 2023 and 2021. REA India is also recognized as one of India’s Top 50 Best Workplaces for Millennials in 2023 by Great Place to Work®.

At REA India, we believe in creating a home for our people, where they feel a sense of belonging and purpose. By fostering a culture of inclusion and continuous learning and growth, every team member has the opportunity to thrive and embrace the spirit of being part of a global family, while contributing to revolutionizing the way India experiences property. When you come to REA India, you truly COME HOME!

REA India (Housing.com, PropTiger.com) is an equal opportunity employer and welcomes all qualified individuals to apply for employment. We are committed to creating an environment that is free from discrimination, harassment, and any other form of unlawful behavior. We value diversity and inclusion and do not discriminate against our people or applicants for employment based on age, color, gender, marital status, caste, religion, race, ethnic group, nationality, religious or political conviction, sexual orientation, gender identity, pregnancy, family responsibility, or disability or any other legally protected status. We firmly strive to eliminate any barriers that may impede equal opportunities, while also recognizing that specific job roles may require appointees to possess the necessary qualifications, skills, and abilities to perform the essential functions of the position effectively.

What does this role hold for you?
We are looking for a strategic and experienced leader to head our Governance, Risk & Compliance (GRC) and Security Processes functions. The right candidate will bring deep knowledge of information security frameworks, regulatory compliance, and security operations, while driving risk-aware decision-making across the organization. You will ensure compliance with standards such as ISO 27001, SOC 2, PCI DSS, and the DPDP Act, while enhancing our security maturity and operational effectiveness.

Key Responsibilities:

Leadership & Strategy
– Lead the enterprise GRC & Security Processes roadmap across business units.
– Align security and risk programs with business objectives.
– Present risk posture and audit outcomes to CXOs and Board Committees.
– Own the Enterprise Risk Register and Compliance Dashboard.

Compliance & Risk Management
– Ensure compliance with ISO 27001, SOC 2, PCI DSS, the DPDP Act (India), and other privacy regulations.
– Conduct Privacy Impact Assessments and handle breach response.
– Implement automated audit/compliance tracking tools.

Information Security Governance
– Define and enforce enterprise security policies, controls, and standards.
– Lead ISMS implementation and continuous improvement initiatives.
– Oversee internal audits, external certifications, and risk assessments.

Security Processes
– Establish and mature security operations processes: Vulnerability Management, Patch Management, IAM / PAM, SIEM / SOC Operations, and Data Loss Prevention (DLP).
– Set and monitor security KPIs, SLAs, and process automation goals.
– Drive secure-by-design and DevSecOps practices in collaboration with IT and DevOps.

Regulatory Reporting
– Ensure timely reporting of incidents to CERT-In and relevant authorities.
– Maintain and test breach notification and regulatory disclosure protocols.

Training & Awareness
– Design and roll out security and compliance training programs.
– Collaborate with HR and leaders to tailor content across employee levels.

Vendor Risk & SLA Oversight
– Lead Third-Party Risk Management (TPRM) initiatives.
– Monitor vendor performance against security SLAs and compliance clauses.

Budget & Program Oversight
– Own GRC & Cybersecurity budgets.
– Identify and deploy tools to automate and scale compliance operations.

Apply if you have…
– A Bachelor’s degree in Engineering, Cybersecurity, IT, or a related field.
– 15+ years in GRC, InfoSec, or Risk leadership roles.
– Deep knowledge of ISO 27001, SOC 2, and PCI DSS; the DPDP Act and statutory audit requirements; and security governance and risk quantification.
– Strong communication and executive stakeholder management skills.

Preferred Certifications
– CISM, CIPM, or CRISC
– PMP or equivalent project/program management certification
– ITIL for service and process governance

Know more about us…
Visit our career websites at https://careers.housing.com/ & https://careers.proptiger.com/ and our LinkedIn page to know more about our company culture and gain insights into what makes us a Great Place To Work. Want to dive into what we do? Visit our main websites for an in-depth look at www.housing.com & www.proptiger.com.

Posted 5 days ago

Apply

10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Job Title - Identity & Access Management (IAM Architect)
Position type - Full Time
Work Location - Bangalore/Delhi NCR
Working style - Hybrid
Required years of experience - Minimum 10+ years of relevant experience

AON IS IN THE BUSINESS OF BETTER DECISIONS
At Aon, we shape decisions for the better to protect and enrich the lives of people around the world. As an organization, we are united through trust as one inclusive team, and we are passionate about helping our colleagues and clients succeed.

General Description Of Role
The Counter Threat Engineering Team under the Global Cybersecurity Services organization is seeking a strategic-minded Identity and Access Management (IAM) Architect with deep technical knowledge of IAM technologies and concepts. This role is critical in designing, implementing, and managing IAM solutions to ensure the security and integrity of Aon's systems and data.

Job Responsibilities & What The Day Will Look Like
– Work on engagements that span the entire lifecycle of a transaction, from technology and product due diligence, IT due diligence, and technology value creation to carve-outs and integrations.
– Lead engagements and client interactions to effectively uncover material transaction risks and improvement opportunities.
– Lead corporate carve-outs and integrations, including defining, managing and executing separation blueprints, integration roadmaps, day-1 readiness plans, cutover plans, and integration plans.
– Lead business development initiatives, including participating in proposal responses and pursuit meetings, and identifying opportunities to expand client relationships.
– Engage in practice development initiatives, working with the rest of the Digital M&A team to improve existing propositions, methodologies and processes.
– Lead project and pipeline management, including tracking of leads, opportunities, commercials, contracting, and invoicing.

How We Support Our Colleagues
In addition to our comprehensive benefits package, we encourage an inclusive workforce. Plus, our agile environment allows you to manage your wellbeing and work/life balance, ensuring you can be your best self at Aon. Furthermore, all colleagues enjoy two “Global Wellbeing Days” each year, encouraging you to take time to focus on yourself. We offer a variety of working style solutions for our colleagues as well. Our continuous learning culture inspires and equips you to learn, share and grow, helping you achieve your fullest potential. As a result, at Aon, you are more connected, more relevant, and more valued.

Aon values an innovative and inclusive workplace where all colleagues feel empowered to be their authentic selves. Aon is proud to be an equal opportunity workplace. Aon provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, creed, sex, sexual orientation, gender identity, national origin, age, disability, veteran, marital, or domestic partner status, or other legally protected status. We welcome applications from all and provide individuals with disabilities with reasonable adjustments to participate in the job application and interview process and to perform essential job functions once onboard. If you would like to learn more about the reasonable accommodations we provide, email ReasonableAccommodations@Aon.com

2561762

Posted 5 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description

Job summary:
We are seeking a highly experienced DevOps Engineer with a deep focus on AWS and Infrastructure as Code using Terraform. This role requires a self-motivated individual who thrives in a fast-paced, highly technical environment. The ideal candidate is someone who can design, implement, and manage scalable cloud infrastructure while also taking full ownership of projects from start to finish.

Key Responsibilities
– Design, implement, and manage infrastructure in AWS using Terraform
– Architect and maintain secure, scalable AWS environments including IAM, EC2, RDS, S3, EKS, and VPCs
– Manage Kubernetes clusters and containerized applications using Docker and EKS
– Support and maintain serverless applications using AWS Lambda and integrate with other AWS services like S3
– Implement CI/CD pipelines and ensure infrastructure reliability and observability
– Administer Linux systems and create automation scripts using shell scripting
– Develop and manage database infrastructure, especially PostgreSQL, including schema migrations
– Troubleshoot and resolve complex infrastructure and networking issues
– Collaborate with cross-functional teams to deliver secure and robust DevOps solutions

Requirements

Technical Skills
– 2 years of experience in DevOps engineering and AWS
– Expert-level proficiency in Terraform and infrastructure-as-code best practices
– Deep understanding of the AWS ecosystem: IAM roles and permissions; network design and security; EC2, RDS, S3, and EKS
– Strong hands-on experience with Docker and Kubernetes
– Experience building and managing serverless architectures (Lambda, API Gateway, S3)
– Proficiency with Linux, shell scripting, and common DevOps tools
– Familiarity with HTTP(S), DNS, web server configuration, and caching mechanisms
– Solid experience with PostgreSQL and data migration strategies

Soft Skills
– Strong verbal and written communication skills across technical and non-technical stakeholders
– Demonstrated ability to take ownership and drive initiatives independently
– Proactive, self-directed, and highly organized
– Effective problem-solving skills and a detail-oriented mindset

Preferred Qualifications
– Bachelor’s degree in Computer Science, Information Security, or a related field, or equivalent practical experience
– AWS certifications (e.g., AWS Certified DevOps Engineer, Solutions Architect)
– Experience with monitoring and logging tools (e.g., CloudWatch, Prometheus, Grafana)
– Familiarity with Agile/Scrum methodologies
– Certifications or experience with ITIL or ISO 20000 frameworks are advantageous
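
As a concrete flavour of the serverless responsibility mentioned above (Lambda integrating with S3), here is a minimal Python sketch of an S3-triggered Lambda handler. It is not part of the listing; the `handler` entry point and the logging behaviour are illustrative assumptions, and the function's IAM execution role is assumed to allow `s3:GetObject` on the source bucket.

```python
# Illustrative sketch of an S3-triggered AWS Lambda handler (boto3); not from the listing.
# Bucket and key names come from the S3 event payload; the function's IAM role is
# assumed to allow s3:GetObject on the source bucket.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    processed = 0
    # An S3 event notification carries one or more records, each naming a bucket and key.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the object and report its size; real code would parse or transform it.
        obj = s3.get_object(Bucket=bucket, Key=key)
        size = len(obj["Body"].read())
        print(f"Processed s3://{bucket}/{key} ({size} bytes)")
        processed += 1

    return {"statusCode": 200, "processed": processed}
```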

Posted 5 days ago

Apply

8.0 years

0 Lacs

Borivali, Maharashtra, India

On-site

Linkedin logo

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
– Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
– Providing technical guidance and troubleshooting support throughout project delivery
– Collaborating with stakeholders to gather requirements and propose effective migration strategies
– Acting as a trusted advisor to customers on industry trends and emerging technologies
– Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer.
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Basic Qualifications
– 8+ years’ experience in Java/J2EE and 2+ years on any Cloud Platform; Bachelor’s in IT, CS, Math, Physics, or related field.
– Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
– Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
– Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

Preferred Qualifications
– AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer).
– Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR).
– Strong communication skills, able to explain technical concepts to both technical and non-technical audiences.
– Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - AWS ProServe IN - Maharashtra
Job ID: A3008596

Posted 5 days ago

Apply

8.0 years

0 Lacs

Borivali, Maharashtra, India

On-site

Linkedin logo

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
– Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
– Providing technical guidance and troubleshooting support throughout project delivery
– Collaborating with stakeholders to gather requirements and propose effective migration strategies
– Acting as a trusted advisor to customers on industry trends and emerging technologies
– Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer.
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Basic Qualifications
– 8+ years’ experience in Java/J2EE and 2+ years on any Cloud Platform; Bachelor’s in IT, CS, Math, Physics, or related field.
– Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
– Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
– Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

Preferred Qualifications
– AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer).
– Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR).
– Strong communication skills, able to explain technical concepts to both technical and non-technical audiences.
– Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - AWS ProServe IN - Maharashtra
Job ID: A3008613

Posted 5 days ago

Apply

8.0 years

0 Lacs

Borivali, Maharashtra, India

On-site

Linkedin logo

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
– Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
– Providing technical guidance and troubleshooting support throughout project delivery
– Collaborating with stakeholders to gather requirements and propose effective migration strategies
– Acting as a trusted advisor to customers on industry trends and emerging technologies
– Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer.
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Basic Qualifications
– 8+ years’ experience in Java/J2EE and 2+ years on any Cloud Platform; Bachelor’s in IT, CS, Math, Physics, or related field.
– Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
– Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
– Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

Preferred Qualifications
– AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer).
– Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR).
– Strong communication skills, able to explain technical concepts to both technical and non-technical audiences.
– Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - AWS ProServe IN - Maharashtra
Job ID: A3008631

Posted 5 days ago

Apply

Exploring IAM Jobs in India

India has seen a significant rise in the demand for Identity and Access Management (IAM) professionals in recent years. Companies across various industries are actively hiring for roles related to IAM to ensure the security and efficiency of their systems. If you are a job seeker looking to explore opportunities in IAM in India, this guide will provide you with valuable insights into the job market, salary range, career path, related skills, and interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

These cities are known for their thriving tech industries and have a high demand for IAM professionals.

Average Salary Range

The average salary range for IAM professionals in India varies based on experience levels. Entry-level positions typically start at around INR 4-6 lakhs per year, while experienced professionals can earn upwards of INR 12-18 lakhs per year.

Career Path

In the field of IAM, a typical career path may include roles such as IAM Analyst, IAM Engineer, IAM Architect, and IAM Manager. As professionals gain experience and expertise, they may progress from a Junior IAM Analyst to a Senior IAM Engineer and eventually to a Tech Lead or IAM Architect.

Related Skills

Apart from IAM expertise, professionals in this field are often expected to have knowledge of cybersecurity, network security, and cloud security, as well as an understanding of compliance regulations such as GDPR and HIPAA.

Interview Questions

  • What is IAM and why is it important for organizations? (basic)
  • Explain the difference between authentication and authorization. (basic)
  • What are the different types of authentication factors? (medium)
  • How do you ensure secure IAM practices in a cloud environment? (medium)
  • What role does SAML play in IAM? (medium)
  • Explain the concept of least privilege access. (medium)
  • How do you handle IAM challenges in a multi-cloud environment? (advanced)
  • Describe the process of role-based access control (RBAC). (medium) — see the illustrative sketch after this list.
  • What is the purpose of a Privileged Access Management (PAM) solution? (medium)
  • How do you monitor and audit IAM activities in an organization? (medium)
  • Explain the difference between single sign-on (SSO) and multi-factor authentication (MFA). (basic)
  • How would you handle a security breach related to IAM in an organization? (advanced)
  • What are the benefits of implementing a Zero Trust security model for IAM? (advanced)
  • How do you stay updated with the latest trends and technologies in IAM? (basic)
  • Describe the process of user provisioning and deprovisioning in IAM. (medium)
  • How do you ensure compliance with regulatory requirements in IAM implementations? (medium)
  • What are the common challenges faced in implementing IAM solutions in organizations? (medium)
  • How do you prioritize IAM projects based on business needs and risks? (advanced)
  • Explain the concept of Just-In-Time (JIT) provisioning in IAM. (medium)
  • How do you handle identity federation in a hybrid IT environment? (advanced)
  • What are the key components of an IAM framework? (basic)
  • How would you design an IAM solution for a large enterprise with diverse user groups? (advanced)
  • Describe a scenario where you had to troubleshoot an IAM issue and how you resolved it. (advanced)
  • What are the best practices for securing IAM infrastructure from insider threats? (advanced)
  • How do you ensure scalability and flexibility in an IAM solution? (medium)
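
For the RBAC and least-privilege questions above, the following minimal Python sketch shows one common way candidates reason about role-to-permission mapping: a user is allowed an action only if one of their roles explicitly grants it. The role names, permission strings, and the `is_allowed` helper are purely hypothetical examples for illustration, not taken from any specific IAM product.

```python
# Illustrative RBAC / least-privilege sketch (hypothetical roles and permissions).
# Access is denied unless a role explicitly grants the requested action.

ROLE_PERMISSIONS = {
    "viewer": {"report:read"},
    "analyst": {"report:read", "report:export"},
    "admin": {"report:read", "report:export", "user:manage"},
}

def is_allowed(roles, action):
    """Return True only if at least one of the user's roles grants the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

# An analyst can export reports but cannot manage users — least privilege in action.
assert is_allowed(["analyst"], "report:export")
assert not is_allowed(["analyst"], "user:manage")
```

In a real deployment, role assignments would typically come from a directory or IAM platform, and every authorization decision would be logged to support the monitoring and audit questions listed above.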

Closing Remark

As you prepare for IAM job opportunities in India, remember to showcase your expertise, stay updated with industry trends, and practice answering common interview questions. With the right skills and preparation, you can confidently pursue a rewarding career in IAM in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies