
7294 IAM Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

29 Lacs

Hyderābād

On-site

Requirements:

1. Cloud (Mandatory): Proven technical experience with AWS or Azure, including scripting, migration, and automation. Hands-on knowledge of services and implementations such as Landing Zone, centralized networking (AWS Transit Gateway / Azure Virtual WAN), serverless (AWS Lambda / Azure Functions), EC2 / Virtual Machines, S3 / Blob Storage, VPC / Virtual Network, IAM, SCPs / Azure Policies, monitoring (CloudWatch / Azure Monitor), SecOps, and FinOps. Experience with migration strategies and tools such as AWS MGN, Database Migration Service, and Azure Migrate. Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, or JavaScript.
2. Automation (Mandatory): Hands-on experience with Infrastructure as Code (IaC) and configuration management tools such as Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet.
3. CI/CD (Mandatory): Hands-on experience setting up or developing CI/CD pipelines using tools such as (but not limited to) GitHub Actions, GitLab CI, Azure DevOps, Jenkins, or AWS CodePipeline.
4. Containers & Orchestration (Good to have): Hands-on experience provisioning and managing containers and orchestration solutions such as Docker and Docker Swarm, Kubernetes (private/public cloud platforms), OpenShift, and Helm charts.

Certification Expectations:

1. Cloud (Mandatory, any of): AWS Certified SysOps Administrator – Associate; AWS Certified Solutions Architect – Associate; AWS Certified Developer – Associate; any AWS Professional/Specialty certification(s).
2. Automation (Optional, any of): Red Hat Certified Specialist in Ansible Automation; HashiCorp Certified: Terraform Associate.
3. CI/CD (Optional): GitLab Certified CI/CD Associate; GitHub Actions certification.
4. Containers & Orchestration (Optional, any of): CKA (Certified Kubernetes Administrator); Red Hat Certified Specialist in OpenShift Administration.

Responsibilities:

- Lead architecture and design discussions with architects and clients.
- Apply technology best practices and AWS frameworks such as the Well-Architected Framework.
- Implement solutions with an emphasis on cloud security, cost optimization, and automation.
- Manage customer engagements and lead teams to deliver high-quality solutions on time.
- Identify work opportunities and collaborate with leadership to grow accounts.
- Own project delivery to ensure successful outcomes and positive customer experiences.
- Initiate proactive meetings with leads and extended teams to highlight gaps, delays, or other challenges.
- Act as a subject matter expert in technology; train and mentor the team in functional and technical skills, and provide sound guidance on team members' career progression.
- Support application teams: work with application development teams to design, implement, and where necessary automate infrastructure on cloud platforms.
- Continuous improvement: certain engagements will require you to support and maintain existing cloud environments, with an emphasis on continuously innovating through automation, enhancing stability and availability through monitoring, and improving the security posture.
- Drive internal practice development initiatives to promote growth and innovation within the team.
- Contribute to internal assets such as technical documentation, blogs, and reusable code components.

Job Types: Full-time, Permanent
Pay: Up to ₹2,900,000.00 per year
Experience: total: 6 years (Required)
Work Location: In person
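The Infrastructure as Code requirement above can be illustrated with a minimal Terraform sketch; the resource names and region are hypothetical, not tied to this employer's stack:

```hcl
# Minimal IaC sketch: an S3 bucket with versioning enabled,
# managed declaratively (illustrative names; preview with `terraform plan`).
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "ap-south-1" # hypothetical region
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-migration-artifacts" # hypothetical bucket name
}

resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

The same bucket could equally be expressed in CloudFormation or Bicep; the point of IaC is that the desired state lives in reviewable, versioned text rather than in console clicks.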

Posted 6 days ago

Apply

4.0 - 11.0 years

1 - 3 Lacs

Hyderābād

On-site

API Gateway experience (Broadcom). Working shift: 2 PM IST to 10:30 PM IST. Experience: 4 to 11 years.

Job Responsibilities:
- Develop services on Broadcom API Gateway.
- Test the services using Selenium and Postman; integrate the services with back-end infrastructure components.

Requirements:
- Hands-on experience with Broadcom API Gateway (Layer 7) implementations, or similar experience building API services using any tool.
- Java/J2EE experience, along with awareness of SiteMinder integrations with Broadcom API Gateway.
- Hands-on experience with the AWS platform and tools like Selenium and Postman.

Posted 6 days ago

Apply

0 years

5 - 8 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of IT Security Analyst.

In this role, you will:
- Develop an in-depth understanding of Application Management and its associated principles, policies, processes, and tools.
- Work with our business partners (IAM teams around the Group) to implement effective information technology processes that achieve the business partners’ objectives.
- Deliver IAM services in accordance with Service Level and Performance Level agreements.
- Support all sub-functions in IAM globally: change management, operations, access reviews, application access, and tooling & support.
- Follow detailed processes and procedures to identify and respond to threats and incidents, escalating to subject matter experts based on the severity and potential impact of the threat or incident.
- Perform and execute activities to ensure end-to-end assurance around security processes and controls.
- Manage stakeholders and solve problems.

Requirements

To be successful in this role, you should meet the following requirements:
- Experience and knowledge of processes that support delivery of Identity and Access Management.
- Proven ability to lead a team delivering a large number of varied initiatives while ensuring high-quality delivery.
- Proven experience overseeing operational approaches and tools and assessing their effectiveness.
- Proven experience setting organizational direction and communicating and implementing overall strategic goals.
- Highly self-motivated and proactive, with well-developed analytical reasoning and communication skills.
- Experience leading and motivating a team of individuals, both direct reports and stakeholders, through new challenges.
- Excellent presentation and conflict-resolution skills.
- Excellent communication, influencing, and interpersonal skills: leads by example, promotes two-way communication, and tailors their style and approach to meet the audience’s needs and win confidence and credibility.
- Ability to communicate information effectively at all levels and via a variety of channels.
- Confident leadership and an ability to inspire others: capable of leading and motivating a team of high-calibre individuals through new challenges.
- Acts as a point of reference and is able to respond to day-to-day direction and financial and operational queries.
- Able to assess the impact of decisions and propose reasoned recommendations.
- Strong understanding of risk and control principles and a proven record of effectively managing risks within the business.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and all opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
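The "access reviews" sub-function named above boils down to comparing what a user holds against what their role actually requires. A minimal sketch, assuming a toy data model (the role and entitlement names are hypothetical, not HSBC's tooling):

```python
# Minimal access-review sketch: flag entitlements a user holds
# beyond what their role requires (all names are illustrative).

ROLE_ENTITLEMENTS = {
    "analyst": {"read_reports", "run_queries"},
    "admin": {"read_reports", "run_queries", "manage_users"},
}

def review_access(user_role, granted):
    """Return entitlements to revoke: granted but not required by role."""
    required = ROLE_ENTITLEMENTS.get(user_role, set())
    return sorted(set(granted) - required)

if __name__ == "__main__":
    # An analyst who somehow acquired an admin-only entitlement:
    print(review_access("analyst", {"read_reports", "manage_users"}))
    # → ['manage_users']
```

A real campaign would pull grants from the IAM platform and route each flagged item to a certifier, but the core diff is this simple.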

Posted 6 days ago

Apply

3.0 years

0 Lacs

Tamil Nadu, India

On-site

About BNP Paribas India Solutions

Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, the European Union’s leading bank with an international reach. With delivery centres located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery centre. India Solutions services three business lines for BNP Paribas across the Group: Corporate and Institutional Banking, Investment Solutions and Retail Banking. Driving innovation and growth, we are harnessing the potential of over 10,000 employees to provide support and develop best-in-class solutions.

About BNP Paribas Group

BNP Paribas is the European Union’s leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group holds key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial and personal banking and several specialised businesses, including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strongly diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) realise their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas, and a solid, fast-growing business in Asia-Pacific.

BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future while ensuring the Group’s performance and stability.

Commitment to Diversity and Inclusion

At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued and respected and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind, and our policies promote equal employment opportunity for all employees and applicants, irrespective of (but not limited to) their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status, etc. As a global bank, we truly believe that the inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.

About the Business Line/Function

BNP Paribas IT teams provide infrastructure, development and production support services to all applications used worldwide by all business lines, across a great variety of technologies and infrastructures, from legacy systems to cutting-edge cloud technologies. Within BNP Paribas Group IT, the Production Security filière is in charge of responding operationally to the challenges of cybersecurity, with an end-to-end vision applied consistently across the Bank. Through its identity domain, it offers technical IAM services to the Group. The SEC18 team provides L2 support for technical IAM.
Job Title: Technical IAM Engineer (Support)
Date: 01/09/2025
Department: ITGP
Location: Chennai
Business Line / Function: Production Security
Reports To (Direct):
Reports To (Functional):
Grade (if applicable):
Number Of Direct Reports:
Directorship / Registration: NA

Position Purpose

Support for the technical IAM infrastructure (SailPoint): maintain and monitor the infrastructure, ensure that the applications are up and running, and manage incidents and requests.

Responsibilities

Direct Responsibilities:
- Infrastructure maintenance: ensure the IAM infrastructure (SailPoint / ETAC / LDAP IDM) is available and functional via a daily monitoring check.
- Check and remediate exploration errors, authorization issues, and server integrations.
- Perform provisioning checks and remediation on servers to meet KPI expectations.
- Ensure infrastructure/application availability and monitoring.
- Incident management: act as first user contact, supporting users in the resolution of incidents (ServiceNow/email); analyse and resolve incidents; serve as the entry point for expert escalation; contribute to continuous production optimisation (curative and preventive).
- Service request management: support end-user queries (ServiceNow/email).
- Apply strong hands-on experience with web and application servers.
- Provide technical leadership and propose improvements related to the support activity (job performance, service requests, production incidents).
- Apply good knowledge of incident/change/problem management processes (ServiceNow).
- Set up monitoring of servers through Dynatrace tools.

Contributing Responsibilities:
- Contribute to knowledge transfer with the Paris OPS teams.
- Contribute to the definition of procedures and processes necessary for the team.
- Help build team spirit and integrate into BNP Paribas culture.
- Contribute to regular activity reporting and KPI calculation.
- Contribute to continuous improvement actions.
- Contribute to the ISPL team’s acquisition of new skills and knowledge to expand its scope.

Technical & Behavioural Competencies:
- Knowledge of ITIL.
- General IT infrastructure knowledge; strong infrastructure skills across cloud and open systems (Linux RHEL, Windows Server, middleware, etc.).
- Particular knowledge of and experience with IAM tools: SailPoint, CyberArk Enterprise Password Vault.
- Good written and spoken English; French will be appreciated.
- Ability to measure and identify areas for improving quality and overall delivery.
- Efficient communicator and good team player.

Specific Qualifications (if required):
- Strong infrastructure skills across cloud and open systems (Linux RHEL, Windows Server, middleware, etc.).
- SailPoint expertise; LDAP IDM knowledge is nice to have.
- Strong interest in incident management, with analytical and investigative skills.

Skills Referential
Behavioural skills: Adaptability; Ability to collaborate / Teamwork; Client focused; Attention to detail / rigour.
Transversal skills: Analytical ability; Ability to understand, explain and support change; Ability to manage a project; Ability to develop and adapt a process; Ability to develop others and improve their skills.
Education Level: Master’s degree or equivalent.
Experience Level: At least 3 years.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

System Administrator Job Description

We aim to bring about a new paradigm in medical image diagnostics, providing intelligent, holistic, ethical, explainable and patient-centric care. We are looking for innovative problem solvers who can empathize with the consumer, understand business problems, and design and deliver intelligent products.

We are looking for a System Administrator to manage and optimize our on-premise and cloud infrastructure, ensuring reliability, security, and scalability for high-throughput AI workloads. As a System Administrator, you will be responsible for managing the servers, storage, network, and compute infrastructure powering our AI development and deployment pipelines. You will ensure seamless handling of large medical imaging datasets (DICOM/NIfTI) and maintain high availability for research and production systems.

Key Responsibilities

Infrastructure & Systems Management:
- Manage Linux-based servers, GPU clusters, and network storage for AI training and inference workloads.
- Configure and maintain message queue systems (RabbitMQ, ActiveMQ, Kafka) for large-scale, asynchronous AI pipeline execution.
- Set up and maintain service beacons and health checks to proactively monitor the state of critical services (XNAT pipelines, FastAPI endpoints, AI model inference servers).
- Maintain PACS integration, DICOM routing, and high-throughput data transfer for medical imaging workflows.
- Manage hybrid infrastructure (on-prem + cloud), including auto-scaling compute for large training tasks.

Service Monitoring & Reliability:
- Implement automated service checking for all production and development services using Prometheus, Grafana, or similar tools.
- Configure beacon agents to trigger alerts and self-healing scripts for service restarts when anomalies are detected.
- Set up log aggregation and anomaly detection to catch failures in AI processing pipelines early.
- Ensure 99.9% uptime for mission-critical systems and clinical services.

Security & Compliance:
- Enforce secure access control (IAM, VPN, RBAC, MFA) and maintain audit trails for all system activities.
- Ensure compliance with HIPAA, GDPR, and ISO 27001 for medical data storage and transfer.
- Encrypt medical imaging data (DICOM/NIfTI) at rest and in transit.

Automation & DevOps:
- Develop automation scripts for service restarts, scaling GPU resources, and pipeline deployments.
- Work with DevOps teams to integrate infrastructure monitoring with CI/CD pipelines.
- Optimize AI pipeline orchestration with MQ-based task handling for scalable performance.

Backup, Disaster Recovery & High Availability:
- Manage data backup policies for medical datasets, AI model artifacts, and PostgreSQL/MongoDB databases.
- Implement failover systems for MQ brokers and imaging data services to ensure uninterrupted AI processing.

Collaboration & Support:
- Work closely with AI engineers and data scientists to optimize compute resource utilization.
- Support teams in troubleshooting infrastructure and service issues.
- Maintain license servers and specialized imaging software environments.

Skills and Qualifications

Required:
- 3+ years of Linux systems administration experience, with a focus on service monitoring and high-availability environments.
- Experience with message queues (RabbitMQ, ActiveMQ, Kafka) for distributed AI workloads.
- Familiarity with beacons, service health monitoring, and self-healing automation.
- Hands-on experience managing GPU-enabled servers and clusters (NVIDIA CUDA, drivers, dockerized AI workflows).
- Hands-on experience with cloud platforms (AWS EC2, S3, EKS; GCP; Azure; or equivalents).
- Networking fundamentals (firewalls, VPNs, load balancers).
- Experience managing large datasets (100 GB–TB scale), preferably in healthcare or scientific research.
- Knowledge of cybersecurity best practices and compliance frameworks (HIPAA, ISO 27001).

Preferred:
- Experience with PACS, XNAT, or medical imaging servers.
- Familiarity with Prometheus, Grafana, the ELK stack, SaltStack beacons, or similar monitoring tools.
- Knowledge of Kubernetes or Docker Swarm for container orchestration.
- Scripting skills (Bash, Python, PowerShell) for automation and troubleshooting.
- Exposure to database administration (PostgreSQL, MongoDB) as used in AI pipelines.

Education: BE/B.Tech; MS/M.Tech will be a bonus.
Experience: 3-5 years.
Job Type: Full-time.
Work Location: In person.
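The health-check-with-self-healing pattern described above can be sketched in a few lines of Python. This is illustrative only: the service names are hypothetical, and `probe` and `restart` are injected callables standing in for a real HTTP health probe and a `systemctl restart` (or orchestrator API) call:

```python
# Minimal self-healing health-check loop (illustrative only).
# `probe` and `restart` are injected so the logic runs without
# real services behind it.

def heal_services(services, probe, restart):
    """Probe each service; restart the ones that fail.

    Returns the list of services that were restarted.
    """
    restarted = []
    for name in services:
        if not probe(name):   # e.g. an HTTP GET to /health in production
            restart(name)     # e.g. `systemctl restart <name>`
            restarted.append(name)
    return restarted

if __name__ == "__main__":
    status = {"xnat-pipeline": True, "inference-api": False}
    actions = []
    print(heal_services(status, probe=status.get, restart=actions.append))
    # → ['inference-api']
```

A production beacon would wrap this loop in a scheduler and rate-limit restarts so a flapping service escalates to an alert instead of restarting forever.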

Posted 6 days ago

Apply

6.0 years

4 - 6 Lacs

Gurgaon

On-site

About the Opportunity

Job Type: Permanent
Application Deadline: 18 August 2025

Job Description

Title: Platform Engineer
Department: Global Platform Solutions
Location: Gurgaon, India
Reports To: Associate Director Engineering
Level: 4

We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you’re part of something bigger.

About your team

The GPS Delivery - Record Keeping team consists of approximately 200 members responsible for developing and maintaining the systems of record used to manage the accounts and investments of our more than 1.5 million workplace and retail customers in the UK. In performing these duties, we play a critical role in delivering our core product and value proposition to these clients, both now and in the future.

About your role

As a Cloud Engineer at Fidelity International, you will work with senior business leaders, product owners, and technology teams to develop or enhance the record-keeping platform. Collaborating with the business and technology architects, you will leverage your cloud engineering experience for the design, definition, exploration, and delivery of solutions. Key qualifications include:
- Agile environment experience using tools like Jira and Confluence
- Knowledge of cloud architecture, networking, and DevOps toolchains
- Proficiency in Python and Unix scripting

You should be passionate about delivering high-quality, scalable solutions while focusing on customer needs and being open to challenges. You’ll influence stakeholders, support team formation, and deliver a greenfield solution, collaborating and sharing knowledge with the global team.

About you

This role requires a proactive engineer with a strong technical background and influence, who can work with development teams on technology architecture, cloud practices, troubleshooting, and implementation.

Responsibilities:
- Provide technical expertise in design and coding.
- Collaborate with product owners to identify improvements and customer requirements.
- Ensure timely, efficient, and cost-effective delivery.
- Manage stakeholders across Technology and Business teams.
- Ensure technical solutions meet functional and non-functional requirements and align with Global Technology Strategies.
- Serve as a trusted advisor to the business.
- Partner with Architecture, business, and central groups within a global team.

The ideal candidate will possess over six years of experience as a software engineer, with expertise in the following areas:
- Extensive experience with Kubernetes (K8s) for deploying, managing, and maintaining containerized applications.
- In-depth knowledge of AWS services, including EC2, VPC, IAM, serverless offerings, RDS, Route 53, and CloudFront.
- Proficiency in leveraging generative AI for daily tasks, utilizing agents and co-pilots to build, test, and manage applications and code.
- Comprehensive understanding of agentic, A2A, MCP, RAG, and related AI concepts.
- Experience with monitoring and logging tools for Kubernetes clusters.
- Strong working knowledge of containerization technologies such as Docker.
- Proven experience managing monolithic record-keeping platforms within DevOps and deployment pipelines.
- Understanding of UNIX system architecture.
- Solid grasp of core networking concepts.
- Advanced knowledge of serverless architecture and related AWS offerings.
- Proficiency in Terraform, including core concepts and hands-on implementation.
- Hands-on experience with Unix scripting and Python programming.
- Practical experience with CI/CD tools such as Jenkins and Ansible.
- Working knowledge of APIs, caching mechanisms, and messaging systems.
- Mastery of at least one programming language or framework, such as Java, NodeJS, or Python.
- Expertise in test-driven development (TDD) and pair-programming best practices alongside CI/CD pipelines.
- Excellent communication skills and a keen interest in a collaborative pair-programming environment.
- A strong passion for professional growth and for addressing challenging problems.

This position demands candidates who are committed to continuous learning and capable of tackling complex issues with innovative solutions.

Feel rewarded

For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work - finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.

For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
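The Kubernetes deployment work described above typically starts from a manifest. A minimal sketch, where the service name, image, and replica count are all hypothetical:

```yaml
# Minimal Deployment manifest (illustrative; applied with `kubectl apply -f`).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: record-keeping-api          # hypothetical service name
spec:
  replicas: 3                       # three pods for availability
  selector:
    matchLabels:
      app: record-keeping-api
  template:
    metadata:
      labels:
        app: record-keeping-api
    spec:
      containers:
        - name: api
          image: registry.example.com/record-keeping-api:1.0.0  # hypothetical
          ports:
            - containerPort: 8080
          readinessProbe:           # keep unready pods out of the Service
            httpGet:
              path: /health
              port: 8080
```

In practice this file would be templated (Helm) or generated (Terraform/CDK) rather than hand-edited per environment.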

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and hands-on AI Architect to lead the design and deployment of next-generation AI systems for our cutting-edge platform. You will be responsible for architecting scalable GenAI and machine learning solutions, establishing MLOps best practices, and ensuring robust security and cost-efficient operations across our AI-powered modules.

Primary Skills:
- System architecture for GenAI: design scalable pipelines using LLMs, RAG, and multi-agent orchestration (LangGraph, CrewAI, AutoGen).
- Machine-learning engineering: PyTorch or TensorFlow, Hugging Face Transformers.
- Retrieval & vector search: FAISS, Weaviate, Pinecone, pgvector; embedding selection and index tuning.
- Cloud infrastructure: AWS production experience (GPU instances, Bedrock / Vertex AI, EKS, IAM, KMS).
- MLOps & DevOps: MLflow / Kubeflow, Docker + Kubernetes, CI/CD, Terraform.
- Security & compliance: data encryption, RBAC, PII redaction in LLM prompts.
- Cost & performance optimisation: token-usage budgeting, caching, model routing.
- Stakeholder communication: ability to defend architectural decisions to the CTO, product, and investors.
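The retrieval step at the heart of the RAG pipelines mentioned above is a nearest-neighbour search over embeddings. A toy sketch using cosine similarity (illustrative only: real systems use a vector store such as FAISS or pgvector with learned embeddings, not hand-written 2-D vectors):

```python
# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, index, k=2):
    """index: {doc_id: embedding}. Return the k closest doc ids."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    index = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
    print(top_k([1.0, 0.1], index, k=2))
    # → ['doc_a', 'doc_c']
```

The retrieved doc ids are then used to fetch passages that get stitched into the LLM prompt; index tuning (the skill listed above) is about making this lookup fast and accurate at scale.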

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote

Role: NiFi Developer
Notice period: candidates serving notice or immediate joiners preferred
Client: Marriott
Payroll: Dminds
Work Mode: Remote
Interview Mode: Virtual

We’re looking for someone who has built, deployed, and maintained NiFi clusters.

Roles & Responsibilities:
- Implement solutions utilizing advanced AWS components (EMR, EC2, etc.) integrated with Big Data/Hadoop distribution frameworks (ZooKeeper, YARN, Spark, Scala, NiFi, etc.).
- Design and implement Spark jobs to be deployed and run on existing active clusters.
- Configure Postgres databases on EC2 instances, ensure the applications created are up and running, and troubleshoot issues to meet the desired application state.
- Create and configure secure VPCs, subnets, and security groups across private and public networks.
- Create alarms, alerts, and notifications for Spark jobs that send job status to email and Slack group messages and log to CloudWatch.
- Build NiFi data pipelines to process large data sets, and configure lookups for data validation and integrity.
- Generate large test data sets with data integrity, using Java, for use in the development and QA phases.
- Improve the performance and optimization of existing Spark Scala applications running on EMR clusters.
- Build Spark jobs to convert CSV data to custom HL7/FHIR objects using FHIR APIs.
- Deploy SNS, SQS, Lambda functions, IAM roles, custom policies, and EMR with Spark and Hadoop, including bootstrap scripts to set up the additional software needed, in QA and production environments using Terraform scripts.
- Build Spark jobs to perform Change Data Capture (CDC) on Postgres tables and update target tables using JDBC properties.
- Integrate a Kafka publisher into Spark jobs to capture errors from the Spark application and push them into a Postgres table.
- Work extensively on building NiFi data pipelines in a Docker container environment during the development phase.
- Work with the DevOps team to clusterize the NiFi pipeline on EC2 nodes, integrated with Spark, Kafka, and Postgres running on other instances, using SSL handshakes in QA and production environments.
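The Change Data Capture task above compares the current source rows against the last-known snapshot and emits only the differences. A minimal pure-Python sketch of that diff (hypothetical row shape; the real job would run in Spark against Postgres via JDBC):

```python
# Toy CDC: diff two snapshots of a table keyed by primary key,
# producing the inserts, updates, and deletes to apply downstream.

def capture_changes(previous, current):
    """previous/current: {pk: row_dict}. Illustrative only."""
    inserts = {k: current[k] for k in current.keys() - previous.keys()}
    deletes = sorted(previous.keys() - current.keys())
    updates = {k: current[k]
               for k in current.keys() & previous.keys()
               if current[k] != previous[k]}
    return {"insert": inserts, "update": updates, "delete": deletes}

if __name__ == "__main__":
    prev = {1: {"name": "a"}, 2: {"name": "b"}}
    curr = {2: {"name": "b2"}, 3: {"name": "c"}}
    print(capture_changes(prev, curr))
```

Snapshot-diff CDC like this is the simplest variant; log-based CDC (reading the Postgres WAL) avoids holding two full snapshots but is operationally heavier.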

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

India

On-site

Responsibilities:
- Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments.
- Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness.
- Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows.
- Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment.
- Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance.
- Implement and maintain robust configuration management practices and infrastructure-as-code principles.
- Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency.
- Perform ongoing maintenance and upgrades (production and non-production).

Qualifications:
- Experience: 5-8 years in DevOps or a similar role.
- Cloud expertise: proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar; CDK knowledge in AWS DevOps.
- CI/CD tools: hands-on experience with Jenkins pipelines (declarative and scripted).
- Scripting skills: proficiency in either shell scripting or PowerShell.
- Programming knowledge: familiarity with at least one programming language (e.g., Python, Java, or Go). Important: scripting/programming is integral to this role and will be a key focus in the interview process.
- Version control: experience with Git and Git-based workflows.
- Monitoring tools: familiarity with tools like CloudWatch, Prometheus, or similar.
- Problem-solving: strong analytical and troubleshooting skills in a fast-paced environment.
- Other tools: experience with Terraform and Kubernetes.
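The Jenkins build-test-deploy flow described above is usually expressed as a declarative Jenkinsfile. A minimal sketch; the stage commands and deploy script are hypothetical placeholders:

```groovy
// Illustrative declarative Jenkinsfile (commands are hypothetical).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }        // compile / package
        }
        stage('Test') {
            steps { sh 'make test' }         // fail fast on broken tests
        }
        stage('Deploy') {
            when { branch 'main' }           // deploy only from main
            steps { sh './deploy.sh production' }
        }
    }
    post {
        failure { echo 'Pipeline failed - notify the team' }
    }
}
```

Scripted pipelines (Groovy code in a `node { ... }` block) cover the same ground with more flexibility; the declarative form shown here is easier to lint and review.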

Posted 6 days ago

Apply


5.0 years

4 - 8 Lacs

Noida

On-site

Job Description: Identity Management Architect / Lead Engineer
Location: [Noida, India / Hybrid]

Job Summary:
We are seeking an experienced Identity Management Architect to establish and lead the foundational Identity and Access Management (IAM) framework in our organization. As the first dedicated IAM professional, you will play a critical role in defining and implementing identity governance, authentication, authorization, and privileged access management solutions to ensure security, compliance, and efficiency in managing identities across our IT landscape. This role requires a deep understanding of IAM technologies, best practices, and enterprise security frameworks, along with the ability to work cross-functionally to integrate IAM into existing business processes.

Key Responsibilities:

Strategy & Architecture:
- Design and implement a scalable Identity & Access Management (IAM) architecture aligned with business and security objectives.
- Define the identity governance framework, including policies, processes, and a technology roadmap for the IDM domain.
- Develop an IAM maturity model and drive the organization's transition towards a unified, secure, and automated identity framework.
- Identify gaps in the current IAM environment and recommend best practices for identity lifecycle management, authentication, and access control.
- Collaborate with security, IT, and business teams to ensure IAM aligns with enterprise security policies, compliance requirements, and industry standards (e.g., NIST, ISO 27001, CIS).

Implementation & Integration:
- Deploy and manage IAM solutions such as Active Directory (AD), Azure AD, Okta, Ping Identity, ForgeRock, SailPoint, CyberArk, or similar platforms.
- Establish Single Sign-On (SSO), Multi-Factor Authentication (MFA), and Zero Trust Architecture (ZTA) strategies across applications and services.
- Define and automate identity lifecycle management (provisioning, deprovisioning, access reviews) using Identity Governance and Administration (IGA) tools.
- Implement Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Least Privilege Access policies.
- Work with application owners to integrate IAM with SaaS, on-premise, and cloud environments (AWS, Azure, Google Cloud).

Governance & Security:
- Establish and enforce identity governance policies, including privileged access management (PAM) and identity auditing.
- Implement Identity Threat Detection & Response (ITDR) to mitigate identity-related risks.
- Define IAM metrics and KPIs to measure adoption, effectiveness, and security posture.
- Ensure compliance with regulatory requirements and industry standards such as NIST, ISO, GDPR, and DORA.

Collaboration & Leadership:
- Serve as the subject matter expert (SME) for IAM across IT, security, and business teams.
- Develop and deliver training programs on IAM best practices for internal stakeholders.
- Act as the primary liaison for IAM initiatives, working closely with the Director of Information Security, IT leadership, and security operations teams.
- Mentor junior IT and security team members on IAM principles and technologies.

Required Qualifications & Skills:

Technical Skills & Experience:
- 5+ years of experience in Identity & Access Management (IAM), security architecture, or related fields.
- Hands-on experience with IAM platforms such as Entra ID, Okta, Ping Identity, ForgeRock, SailPoint, CyberArk, or equivalent.
- Expertise in Active Directory (AD) and Entra ID, including federation and authentication protocols (SAML, OAuth, OIDC, Kerberos, LDAP).
- Experience with cloud identity management and integrating IAM with Azure and AWS.
- Knowledge of Zero Trust, Privileged Access Management (PAM), and Identity Governance and Administration (IGA).
- Strong scripting and automation skills in PowerShell and Python for IAM automation.
- Experience with IAM analytics, identity threat detection, and risk-based authentication.
- Familiarity with IAM integration with ITSM tools like JIRA.

Soft Skills:
- Strong analytical and problem-solving abilities with a strategic mindset.
- Ability to communicate complex IAM concepts to both technical and non-technical audiences.
- Experience leading IAM projects in enterprise environments with a mix of cloud and on-prem systems.
- Ability to drive IAM adoption and governance without a dedicated IAM team.
- Strong stakeholder management and leadership skills.

Preferred Certifications:
- CISSP (Certified Information Systems Security Professional)
- Certified Identity and Access Manager (CIAM)
- Microsoft Certified: Identity and Access Administrator Associate
- Azure Security certifications

Why Join Us?
- Opportunity to build IAM from the ground up in an evolving IT environment.
- Work on cutting-edge cloud security and identity management projects.
- Collaborate with a dynamic team that values innovation and security best practices.
- Competitive salary, benefits, and career growth opportunities.

AML RightSource is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. We provide equal employment opportunities to all qualified applicants without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
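The RBAC, ABAC, and least-privilege policies this role implements can be pictured as a tiny policy-evaluation function: access is granted only when both the role check and the attribute check pass. All role names and attributes below are hypothetical, chosen only to illustrate the idea:

```python
# RBAC: role -> set of permitted actions (hypothetical roles/actions)
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "admin": {"report:read", "report:write", "user:manage"},
}

def is_allowed(user, action, resource_attrs):
    """Least-privilege check: the user's role must permit the action (RBAC)
    AND the attribute condition must hold (ABAC)."""
    if action not in ROLE_PERMISSIONS.get(user["role"], set()):
        return False
    # ABAC condition: users may only touch resources in their own department
    return user["department"] == resource_attrs["department"]
```

Real IGA platforms evaluate far richer policy languages, but the two-layer structure (coarse role grant, fine attribute condition) is the same.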

Posted 6 days ago

Apply

3.0 years

0 Lacs

Noida

On-site

NVIDIA is looking for a passionate member to join our DGX Cloud Engineering Team as a Cloud Software Engineer. In this role, you will play a significant part in helping to craft and guide the future of AI & GPUs in the cloud. NVIDIA DGX Cloud is a cloud platform tailored for AI tasks, enabling organizations to transition AI projects from development to deployment in the age of intelligent AI. Are you passionate about cloud software development and do you strive for quality? Do you pride yourself on building cloud-scale software systems? If so, join our team at NVIDIA, where we are dedicated to delivering GPU-powered services around the world!

What you'll be doing:
- Play a crucial role in ensuring the success of the DGX Cloud platform by helping to build our development and release processes, creating world-class performance and quality measurement and regression management tools, and maintaining a high standard of excellence in our CI/CD and release engineering tools and processes.
- Design, build, and implement scalable cloud-based systems for PaaS/IaaS.
- Work closely with other teams on new products or features/improvements of existing products.
- Develop, maintain, and improve CI/CD tools for on-premises and cloud deployment of our software.
- Collaborate with developer, QA, and product teams to establish, refine, and streamline our software release process.
- Support, maintain, and document software functionality.

What we need to see:
- Demonstrated understanding of cloud design in the areas of virtualization, global infrastructure, distributed systems, and security.
- Expertise in Kubernetes (K8s) and KubeVirt.
- Background in building RESTful web services.
- Experience with Docker and containers.
- Experience with Infrastructure as Code.
- Background with CSPs, for example AWS (Fargate, EC2, IAM, ECR, EKS, Route53, etc.).
- Experience with Continuous Integration and Continuous Delivery.
- Excellent interpersonal and written communication skills.
- BS or MS in Computer Science or an equivalent program from an accredited university/college.
- 3+ years of hands-on software engineering or equivalent experience.

Ways to stand out from the crowd:
- Expertise in virtualization technologies such as Firecracker, KVM, OpenStack, Nutanix AHV, and Red Hat OpenShift.
- A track record of solving complex problems with elegant solutions.
- Go and Python; load-testing frameworks; secrets management.
- Demonstrated delivery of complex projects in previous roles.

Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
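Deploying GPU workloads on Kubernetes, as this role involves, usually starts from a Deployment manifest that requests GPU resources. A minimal sketch of such a manifest rendered as a Python dict (the `nvidia.com/gpu` resource name is the standard device-plugin key; everything else here, names and image included, is hypothetical):

```python
def gpu_deployment_manifest(name, image, replicas=2, gpus=1):
    """Render a minimal Kubernetes Deployment spec (as a dict) that
    requests GPU resources -- illustrative only, not NVIDIA tooling."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    # GPU limits are how pods request devices from the
                    # NVIDIA device plugin
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }]},
            },
        },
    }
```

Serialized to YAML, this is what `kubectl apply` or a Helm template would produce.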

Posted 6 days ago

Apply

15.0 years

0 Lacs

Calcutta

On-site

Project Role: Security Architect
Project Role Description: Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations.
Must-have skills: Identity Access Management (IAM)
Good-to-have skills: Microsoft Active Directory
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Security Architect, you will define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Your typical day will involve collaborating with various teams to assess security needs, designing security solutions, and documenting the implementation of cloud security controls. You will also engage in discussions to refine security strategies and ensure a smooth transition to cloud security-managed operations, all while staying updated on the latest security trends and technologies.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the development and documentation of security policies and procedures.
- Evaluate and recommend security technologies and tools to enhance the security posture.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Identity Access Management (IAM).
- Good-to-Have Skills: Experience with Microsoft Active Directory.
- Strong understanding of cloud security principles and best practices.
- Experience in risk assessment and vulnerability management.
- Familiarity with compliance frameworks such as ISO 27001, NIST, or GDPR.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Identity Access Management (IAM).
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 6 days ago

Apply

5.0 years

0 Lacs

West Bengal

On-site

Job Information
Date Opened: 30/07/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5+ Years
City: Kolkata
Province: West Bengal
Country: India
Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud, and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations.
  - Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure as code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly Bonus
- Gratuity
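The Glacier archival automation mentioned in this posting is typically expressed as an S3 lifecycle configuration. A minimal sketch of the dict form that boto3's `put_bucket_lifecycle_configuration` accepts (the prefix and day counts are hypothetical values, not from the posting):

```python
def glacier_lifecycle(prefix, days_to_glacier=30, days_to_expire=365):
    """Build an S3 lifecycle configuration that transitions objects under
    `prefix` to Glacier after N days and expires them later."""
    return {
        "Rules": [{
            "ID": f"archive-{prefix}",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            # Transition to the GLACIER storage class for cheap archival
            "Transitions": [{"Days": days_to_glacier,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": days_to_expire},
        }]
    }
```

The returned dict would be passed as the `LifecycleConfiguration` argument when applying the rule to a bucket.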

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position: Cloud Engineer
Experience: 3-5 years
Location: Noida
Work Mode: WFO

The ideal candidate must be self-motivated, with a proven track record as a Cloud Engineer (AWS), who can help in the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed among multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, provide routine maintenance, and enhance system health monitoring on the cloud stack. Must have excellent written and verbal communication skills.

Technical Skills
- Strong experience with AWS IaaS architectures.
- Hands-on experience in deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience in building and supporting AWS WorkSpaces.
- Experience in deploying and troubleshooting either Windows or Linux operating systems.
- Experience with AWS SSO and RBAC.
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
- Experience working with ITSM processes and tools such as Remedy and ServiceNow.
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations.

Responsibilities
- Plan, automate, implement, and maintain the AWS platform and its associated services.
- Provide SME / L2-and-above level technical support.
- Carry out deployment and migration activities.
- Mentor and provide technical guidance to L1 engineers.
- Monitor AWS infrastructure and perform routine maintenance and operational tasks.
- Work on ITSM tickets and ensure adherence to support SLAs.
- Work on change management processes.
- Excellent analytical and problem-solving skills; exhibits excellent service to others.

Qualifications
- At least 2 to 3 years of relevant experience with AWS.
- Overall, 3-5 years of IT experience working for a global organization.
- Bachelor's degree or higher in Information Systems, Computer Science, or equivalent experience.
- AWS Certified Cloud Practitioner certification is preferred.

Location: Noida - UI, Noida, Uttar Pradesh, India
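AMI and snapshot housekeeping of the kind this role covers often reduces to a retention policy: keep the newest N snapshots, prune the rest. A small illustrative sketch (the retention count and snapshot ids are hypothetical):

```python
from datetime import datetime

def snapshots_to_delete(snapshots, keep=3):
    """Given a list of (snapshot_id, created_at) tuples, keep the newest
    `keep` snapshots and return the ids to prune, oldest first."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    # Everything past the first `keep` entries is prunable
    return [sid for sid, _ in ordered[keep:]][::-1]
```

In practice the tuple list would come from a describe-snapshots API call, and each returned id would be fed to a delete call under a change-management process.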

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overall Objectives of Job
- Administration of the One Identity tool and management of integrated identities and services.
- DevOps Engineer with expertise in Kubernetes, Docker, Azure, AWS, and VMware deployment.
- Management of cloud and on-prem infrastructures hosting IAM.
- Working knowledge of One Identity tools: 1IM Manager / Object Browser / Job Queue / Synchronization Editor.
- Understanding of the whole IAM environment, Active Directory multi-forest environments at an enterprise level, Windows OS, IIS, and MS SQL Server.
- Monitor, report on, and analyze bugs during and after IAM release versions.
- Performance management of IAM tools, databases, and infrastructure.
- Administration of identities and services integrated with the One IDM tool; support for organization integration with the IAM infrastructure.
- Collaborate and work with the onshore development and project team to provide solutions and assist during project releases, testing, and operational support.
- Responsible for management of incidents, problems, and changes within the IAM infrastructure.
- Responsible for documentation and updates of IAM processes and operating procedures.
- Work with software development tools (e.g., JIRA) and handle various IAM-related tasks.

Experience:
- 3 or more years in enterprise IT with a core focus on IAM technologies like One Identity or similar IAM tools, along with a DevOps working model.

Technical:
- Experience in One Identity tool (preferred) operations or similar IAM tools.
- DevOps expertise in Kubernetes, Docker, Azure, AWS, and VMware deployment.
- Knowledge of DevOps tools: GitHub, Azure Kubernetes, pipeline deployment.
- Knowledge of the Jenkins automation tool, IaaS, and an infrastructure background.
- Knowledge of MS SQL.
- Knowledge of DNS, TCP/IP, and network technologies.
- Knowledge of incident, problem, and change process handling.

Functional / Domain:
- Experience in IAM solutions with strong knowledge of IAM concepts and an understanding of security, risks, and governance.
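The identity administration described above (keeping integrated services in sync with an authoritative identity source) can be pictured as a reconciliation step: compare the HR source of truth against existing accounts and derive joiner/leaver actions. A toy sketch with hypothetical usernames:

```python
def reconcile_accounts(hr_active, existing_accounts):
    """Return (to_provision, to_deprovision) so the account store matches
    the authoritative HR source: joiners get accounts, leavers lose them."""
    to_provision = sorted(hr_active - existing_accounts)
    to_deprovision = sorted(existing_accounts - hr_active)
    return to_provision, to_deprovision
```

IGA tools like One Identity run far richer versions of this loop (with approval workflows and attribute mapping), but the set-difference core is the same.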

Posted 6 days ago

Apply

4.0 - 12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Title: Google Cloud DevOps Engineer
Location: PAN India

The Opportunity:
Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications. Contribute ideas for improvements in Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact / Responsibilities:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
- Lead and support the implementation of the engineering side of digital business transformations, with cloud, multi-cloud, security, observability, and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost, and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.

Your Skills & Experience:
- 4 to 12 years of experience in Cloud & DevOps with a full-time Bachelor's/Master's degree (Science or Engineering preferred).
- Expertise in the following DevOps and cloud tools: GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite).
- Configuration and monitoring of DNS, app servers, load balancers, and firewalls for high-volume traffic.
- Extensive experience in designing, implementing, and maintaining infrastructure as code, preferably using Terraform (or CloudFormation / ARM Templates / Deployment Manager / Pulumi).

Container infrastructure:
- Experience managing container infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE).
- Design, implement, and upgrade container infrastructure, e.g., K8s clusters and node pools.
- Create and maintain deployment manifest files for microservices using Helm.
- Utilize the Istio service mesh to create gateways, virtual services, traffic routing, and fault injection.
- Troubleshoot and resolve container infrastructure and deployment issues.

Continuous Integration & Continuous Deployment:
- Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo, and Travis CI.
- Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans.
- Automate build and deployment processes using Groovy, Go, Python, Shell, and PowerShell.
- Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle.
- Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management.

Observability:
- Design, implement, and maintain observability, monitoring, logging, and alerting using the following tools:
- Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace.
- Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk.
- Monitoring: Prometheus, Grafana, Datadog, New Relic.

Good to Have:
- Associate-level public cloud certifications.
- Terraform Associate-level certification.

Benefits of Working Here:
- Gender-neutral policy.
- 18 paid holidays throughout the year for NCR/BLR (22 for Mumbai).
- Generous parental leave and a new-parent transition program.
- Flexible work arrangements.
- Employee Assistance Programs to help you with wellness and well-being.

Learn more about us at www.publicissapient.com or explore other career opportunities here.
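Istio traffic routing of the kind listed above is usually declared in a VirtualService that splits traffic between subsets by weight (e.g., for a canary release). A minimal sketch rendered as a Python dict; the host and subset names are hypothetical, while the `networking.istio.io/v1beta1` schema fields follow the Istio API:

```python
def canary_virtual_service(host, stable_weight=90, canary_weight=10):
    """Render an Istio VirtualService (dict form) that splits traffic
    between 'stable' and 'canary' subsets of a service."""
    assert stable_weight + canary_weight == 100, "weights must sum to 100"
    return {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": f"{host}-canary"},
        "spec": {
            "hosts": [host],
            "http": [{"route": [
                {"destination": {"host": host, "subset": "stable"},
                 "weight": stable_weight},
                {"destination": {"host": host, "subset": "canary"},
                 "weight": canary_weight},
            ]}],
        },
    }
```

The subsets themselves would be defined in a companion DestinationRule; shifting the weights over successive applies is the canary rollout.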

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

On-site

We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:

Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning.
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup.
- Implement state management and version control for IaC.

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows.
- Create automation scripts (Bash/Python) for deployment orchestration.
- Deploy containerized Node.js applications to GKE using Helm charts.
- Configure automated SSL certificate provisioning via Certificate Manager.

Security & Access Control
- Implement IAM policies and RBAC for customer isolation.
- Configure secure service accounts with minimal required permissions.
- Set up audit logging and monitoring for all provisioned resources.

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend.
- Implement provisioning status tracking and error handling.
- Document deployment procedures and troubleshooting guides.
- Ensure a 5-10 minute provisioning-time SLA.

Required Skills & Certifications:

MANDATORY Certification (must have one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical Skills (must have):
- 3+ years of hands-on experience with Google Cloud Platform.
- Strong Terraform expertise with a proven track record.
- GKE/Kubernetes deployment and management experience.
- Proficiency in Bash and Python scripting.
- Experience with CI/CD pipelines (Cloud Build preferred).
- Knowledge of GCP IAM and security best practices.
- Ability to work independently with minimal supervision.

Nice to Have:
- Experience developing RESTful APIs for service integration.
- Experience with multi-tenant architectures.
- Node.js/Docker containerization experience.
- Helm chart creation and management.

Deliverables (2-Month Timeline)

Month 1:
- Complete Terraform modules for all GCP resources.
- Working prototype of the automated provisioning flow.
- Basic IAM and security implementation.
- Integration with webhook triggers.

Month 2:
- Production-ready deployment with error handling.
- Performance optimization (achieve <10 min provisioning).
- Complete documentation and runbooks.
- Handover and knowledge transfer.

Technical Environment
- Primary tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
- Languages: Bash, Python (automation scripts)
- Orchestration: Cloud Build, Cloud Functions
- Containerization: Docker, Kubernetes, Helm

Ideal Candidate
- Self-starter who can own the entire DevOps scope independently.
- Strong problem-solver comfortable with ambiguity.
- Excellent time-management skills to meet tight deadlines.
- Clear communicator who documents their work thoroughly.

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of a valid Google Cloud certification.
- Examples of similar GCP automation projects.
- GitHub/GitLab links to relevant Terraform modules (if available).
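The "provisioning status tracking and error handling" responsibility above can be sketched as a small guarded state machine that a webhook handler consults, rejecting out-of-order updates and allowing retries after failure. The states and customer ids below are hypothetical, not from the posting:

```python
# Hypothetical provisioning states and their legal transitions
VALID_TRANSITIONS = {
    "requested": {"provisioning"},
    "provisioning": {"ready", "failed"},
    "failed": {"provisioning"},  # allow retry after an error
    "ready": set(),              # terminal state
}

class ProvisioningTracker:
    """Track per-customer provisioning status with guarded transitions,
    so webhook handlers can reject out-of-order status updates."""

    def __init__(self):
        self.status = {}

    def update(self, customer_id, new_state):
        current = self.status.get(customer_id, "requested")
        if new_state not in VALID_TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {new_state}")
        self.status[customer_id] = new_state
        return new_state
```

In the real service, each transition would also be persisted and timestamped so the <10-minute SLA can be measured per environment.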

Posted 6 days ago

Apply

6.0 years

18 - 30 Lacs

India

On-site

Role: Senior Database Administrator (DevOps)
Experience: 7+ years
Type: Contract

Job Summary
We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well-versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments.

Key Responsibilities
- Design, deploy, monitor, and manage databases across production and staging environments.
- Ensure high availability, performance, and data integrity for mission-critical systems.
- Automate database provisioning, configuration, and maintenance using Terraform and Ansible.
- Administer Linux-based systems for database operations with an emphasis on system reliability and uptime.
- Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents.
- Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines.
- Implement and enforce database security best practices, including data encryption, user access control, and auditing.
- Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime.

Required Technical Skills

Database Expertise:
- PostgreSQL: advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture.
- MariaDB/MySQL: proven experience in high-availability configurations, schema optimization, and performance tuning.
- MongoDB: strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding.
- MS SQL Server: capable of managing and maintaining enterprise-grade MS SQL Server environments.
- AWS RDS & Aurora: deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling.

Infrastructure & DevOps:
- 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments.
- Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices.
- Knowledge of networking principles, firewalls, VPCs, and security hardening.
- Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting.
- Strong working experience with AWS cloud services (EC2, VPC, IAM, CloudWatch, S3, etc.).
- Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus.
- Familiarity with Docker, container orchestration, and integrating databases into containerized environments.

Preferred Qualifications
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to collaborate in cross-functional teams and drive initiatives independently.
- A passion for automation, observability, and scalability in production-grade environments.

Must Have: AWS, Ansible, DevOps, Terraform

Skills: postgresql, mariadb, datadog, containerization, networking, linux, mongodb, devops, terraform, aws aurora, cloud services, amazon web services (aws), ms sql server, ansible, aws, mysql, aws rds, docker, infrastructure, database
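Proactive alerting on replication, as called for above, often boils down to classifying replicas by lag against thresholds before paging anyone. A small illustrative sketch; the thresholds and replica names are hypothetical, and in practice the lag figures would come from the database (e.g., PostgreSQL's replication statistics) or a monitoring agent:

```python
def replication_alerts(lag_seconds_by_replica, warn=30, critical=120):
    """Classify replicas by replication lag (in seconds) and return the
    alerts to raise, keyed by replica name."""
    alerts = {}
    for replica, lag in lag_seconds_by_replica.items():
        if lag >= critical:
            alerts[replica] = "critical"
        elif lag >= warn:
            alerts[replica] = "warning"
    return alerts
```

The returned mapping is what an integration with PagerDuty or Datadog would turn into pages or events.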

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
Job Description & Summary: A career within PWC Responsibilities Job Title: Cloud Engineer (Java 17+, Spring Boot, Microservices, AWS) Job Type: Full-Time Job Overview: As a Cloud Engineer, you will be responsible for developing, deploying, and managing cloud-based applications and services on AWS. You will use your expertise in Java 17+, Spring Boot, and Microservices to build robust and scalable cloud solutions. This role will involve working closely with development teams to ensure seamless cloud integration, optimizing cloud resources, and leveraging AWS tools to ensure high availability, security, and performance. Key Responsibilities: Cloud Infrastructure: Design, build, and deploy cloud-native applications on AWS, utilizing services such as EC2, S3, Lambda, RDS, EKS, API Gateway, and CloudFormation. Backend Development: Develop and maintain backend services and microservices using Java 17+ and Spring Boot, ensuring they are optimized for the cloud environment. Microservices Architecture: Architect and implement microservices-based solutions that are scalable, secure, and resilient, ensuring they align with AWS best practices. CI/CD Pipelines: Set up and manage automated CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline for continuous integration and deployment. AWS Services Integration: Integrate AWS services such as DynamoDB, SQS, SNS, CloudWatch, and Elastic Load Balancing into microservices to improve performance and scalability. Performance Optimization: Monitor and optimize the performance of cloud infrastructure and services, ensuring efficient resource utilization and cost management in AWS. Security: Implement security best practices in cloud applications and services, including IAM roles, VPC configuration, encryption, and authentication mechanisms. Troubleshooting & Support: Provide ongoing support and troubleshooting for cloud-based applications, ensuring uptime, availability, and optimal performance. 
Collaboration: Work closely with cross-functional teams, including frontend developers, system administrators, and DevOps engineers, to ensure end-to-end solution delivery.
Documentation: Document the architecture, implementation, and operations of cloud infrastructure and applications to ensure knowledge sharing and compliance.

Required Skills & Qualifications:
Strong experience with Java 17+ and Spring Boot for backend development.
Hands-on experience with AWS Cloud services such as EC2, S3, Lambda, RDS, EKS, API Gateway, DynamoDB, SQS, SNS, and CloudWatch.
Proven experience in designing and implementing microservices architectures.
Solid understanding of cloud security practices, including IAM, VPC, encryption, and secure cloud-native application development.
Experience with CI/CD tools and practices (e.g., Jenkins, GitLab CI, AWS CodePipeline).
Familiarity with containerization technologies like Docker, and orchestration tools like Kubernetes.
Ability to optimize cloud applications for performance, scalability, and cost-efficiency.
Experience with monitoring and logging tools like CloudWatch, ELK Stack, or other AWS-native tools.
Knowledge of RESTful APIs and API Gateway for exposing microservices.
Solid understanding of version control systems like Git and familiarity with Agile methodologies.
Strong problem-solving and troubleshooting skills, with the ability to work in a fast-paced environment.

Preferred Skills:
AWS certifications, such as AWS Certified Solutions Architect or AWS Certified Developer.
Experience with Terraform or AWS CloudFormation for infrastructure as code.
Familiarity with Kubernetes and EKS for container orchestration in the cloud.
Experience with serverless architectures using AWS Lambda.
Knowledge of message queues (e.g., SQS, Kafka) and event-driven architectures.

Education & Experience:
Bachelor's degree in Computer Science, Engineering, or related field, or equivalent practical experience.
7-11 years of experience in software development with a focus on AWS cloud and microservices.

Mandatory Skill Sets: Cloud Engineer (Java + Spring Boot + AWS)
Preferred Skill Sets: Cloud Engineer (Java + Spring Boot + AWS)
Years of Experience Required: 7-11 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Cloud Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
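The resilient-microservices responsibilities above (integrating SQS, SNS, and downstream AWS services) typically rely on retrying transient failures with capped exponential backoff and jitter. A minimal sketch of the delay schedule, in Python for brevity (the role itself is Java-centric, and all names here are illustrative, not from the posting):

```python
import random

def backoff_delays(max_retries=5, base=0.2, cap=30.0, seed=None):
    """Capped exponential backoff with full jitter, a common pattern
    for retrying transient AWS API failures.

    Returns one sleep duration (in seconds) per retry attempt."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        # Ceiling doubles each attempt but never exceeds the cap;
        # full jitter draws uniformly below the ceiling.
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))
    return delays

if __name__ == "__main__":
    for i, d in enumerate(backoff_delays(seed=42), start=1):
        print(f"retry {i}: sleep {d:.3f}s")
```

In a real Java/Spring Boot service the same schedule would usually come from a library (e.g. the AWS SDK's built-in retry policy or Resilience4j) rather than hand-rolled code.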

Posted 6 days ago

Apply

8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary:
We are seeking a highly experienced Senior Project Manager to lead and deliver critical initiatives focused on Google Cloud Platform (GCP) implementation and migration. The ideal candidate will have a solid background in managing complex IT and cloud infrastructure projects, with hands-on experience overseeing end-to-end GCP deployment. GCP certification is good to have.

Key Responsibilities:
Lead full lifecycle project management for GCP implementation, including planning, execution, monitoring, and closure.
Collaborate with cross-functional teams (engineering, infrastructure, security, DevOps, etc.) to ensure successful cloud migration and adoption.
Manage project scope, schedule, cost, quality, resources, and communication across all project phases.
Identify and manage project risks, dependencies, and mitigation strategies proactively.
Develop and maintain detailed project plans, dashboards, and status reports for stakeholders and leadership.
Drive alignment with business and IT leadership to ensure strategic project outcomes.
Work closely with GCP architects and engineers to ensure platform configurations and deployments meet business requirements.
Ensure adherence to project governance frameworks and compliance requirements.
Facilitate change management and communication activities with impacted teams.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or related field.
- 8-10 years of experience in project management with a focus on IT and cloud infrastructure projects.
- Proven experience managing GCP implementation or migration projects end-to-end.
- Strong understanding of cloud architecture and GCP services (Compute Engine, BigQuery, Cloud Storage, IAM, VPC, etc.).
- Familiarity with Agile/Scrum, DevOps practices, and CI/CD pipelines.
- Proficiency with project management tools like JIRA, MS Project, Smartsheet, Confluence, or similar.
- PMP, PMI-ACP, CSM, or equivalent project management certification (preferred)
- Google Cloud Digital Leader (or above) certification (preferred)

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

🔧 Key Responsibilities
Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
Containerize applications and optimize Docker images for performance and security.
Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response.
Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
Automate routine tasks with scripting languages (Python, Bash, etc.).
Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
Collaborate closely with development teams to enable DevSecOps best practices.
Participate in on-call rotations, handle outages calmly, and conduct postmortems.
🧰 Must-Have Technical Skills
Kubernetes (EKS, Helm, Operators)
Docker & Docker Compose
Terraform (modular, state management, remote backends)
AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
Linux system administration
CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
Site Reliability Engineering practices
Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
GCP or Azure exposure
Service Mesh (Istio, Linkerd)
Secrets management (Vault, AWS Secrets Manager)
Security hardening of containers and infrastructure
Chaos engineering exposure
Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
Strong problem-solving attitude; calm under pressure
Good documentation and communication skills
Ownership mindset with a drive to automate everything
Collaborative and proactive with cross-functional teams
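The SRE responsibilities above mention SLIs/SLOs and alerting; the arithmetic behind them is usually an error budget: an SLO of 99.9% availability over a window of 1,000,000 requests allows 1,000 failures, and burn-rate alerts fire as that budget is consumed. A small sketch of that calculation (function name and interface are illustrative, not from the posting):

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Compute the error budget for a request-based availability SLO.

    slo_target: e.g. 0.999 for "three nines" availability.
    Returns (budget_total, consumed_fraction):
      budget_total      - failures the SLO tolerates in this window
      consumed_fraction - share of that budget already spent
    """
    if not 0 < slo_target < 1:
        raise ValueError("slo_target must be strictly between 0 and 1")
    budget_total = (1 - slo_target) * total_requests
    consumed = failed_requests / budget_total if budget_total else float("inf")
    return budget_total, consumed

if __name__ == "__main__":
    budget, used = error_budget(0.999, 1_000_000, 250)
    print(f"budget: {budget:.0f} failures, consumed: {used:.0%}")
```

A consumed fraction approaching 1.0 well before the window ends is the classic trigger for a fast-burn alert.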

Posted 6 days ago

Apply

0.0 - 3.0 years

12 - 20 Lacs

Mumbai, Maharashtra

On-site

We are looking for a highly skilled AWS DevOps Engineer to design, implement, and manage cloud infrastructure solutions for AI products. The ideal candidate will have hands-on experience in deploying scalable, secure, and high-performing cloud environments, ensuring alignment with business objectives.

Key Responsibilities:
Design, implement, and manage AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, Route 53, EKS, VPC, and CloudFormation.
Automation & CI/CD: Develop Infrastructure as Code (IaC) with Terraform/CloudFormation and automate deployments using CI/CD tools like Jenkins, CodePipeline, or GitHub Actions.
Implement best practices for cloud security, compliance (e.g., RBI, SEBI regulations), and data protection (IAM, KMS, GuardDuty).
Set up monitoring (CloudWatch, CloudTrail) and optimize performance, cost, and resource utilization.
Configure and manage networks, VPCs, VPNs, subnets, and route tables to ensure secure and efficient network operations.
Work closely with security and development teams to support product development and deployments.
Maintain clear and comprehensive documentation for infrastructure, configurations, and processes.

Key Skills:
4+ years of hands-on experience with AWS services and solutions.
Hands-on experience designing, configuring, implementing, and setting up environments with these technologies.
Expertise in Infrastructure as Code (IaC) tools: Terraform, CloudFormation.
Prior experience building cloud infrastructure for AI products.
Strong scripting skills: Python, Bash, or Shell.
Experience with containerization and orchestration: Docker, Kubernetes (EKS).
Proficiency in CI/CD tools: Jenkins, AWS CodePipeline, GitHub Actions or Bitbucket Pipelines.
Solid understanding of security and compliance in cloud environments.
AWS certifications (preferred)

Location: Mumbai (work from office only)
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,000,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Mumbai, Maharashtra: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Post selection, can you join immediately or within 30 days?
Experience: AWS DevOps: 3 years (Required)
Work Location: In person
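The security responsibilities in this posting (IAM, GuardDuty, network configuration) are often backed by small automated audits. One routine check is flagging security-group ingress rules open to the world on sensitive ports. A sketch operating on data shaped like boto3's `describe_security_groups()` response (the data here is hand-built for illustration; a real script would fetch it from the EC2 API):

```python
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def open_to_world(security_groups, sensitive_ports=SENSITIVE_PORTS):
    """Return sorted (group_id, port) pairs where an ingress rule
    allows 0.0.0.0/0 on a sensitive port. `security_groups` mimics
    the shape of describe_security_groups()['SecurityGroups']."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            lo, hi = perm.get("FromPort"), perm.get("ToPort")
            if lo is None or hi is None:  # skip all-traffic rules here
                continue
            world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            if not world_open:
                continue
            for port in sensitive_ports:
                if lo <= port <= hi:
                    findings.append((sg["GroupId"], port))
    return sorted(findings)

if __name__ == "__main__":
    sample = [{"GroupId": "sg-demo", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]}]
    print(open_to_world(sample))  # → [('sg-demo', 22)]
```

Hooked up to a scheduled Lambda with an SNS notification, the same logic becomes the kind of continuous compliance check the posting describes.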

Posted 6 days ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Junior Java Developer
Experience: 1 year
Location: Chennai

About CloudNow Technologies
At CloudNow Technologies, we specialize in delivering advanced IT services and solutions by leveraging cutting-edge technologies across DevOps, Agile, Data Analytics, and Cybersecurity. Our agile work culture emphasizes continuous learning and professional growth. We believe in promoting from within and actively focus on upskilling our talent to take on future leadership roles.

We are the creators of Akku – our flagship Identity and Access Management (IAM) platform – a powerful, enterprise-grade solution designed to meet modern security and compliance needs. Akku enables organizations to implement zero-trust architecture, enhance cloud security, and manage access controls effectively across users, devices, and applications. Our strong focus on cybersecurity helps enterprises safeguard their digital environments through advanced access management, multi-factor authentication (MFA), device and IP restrictions, password policy enforcement, and user lifecycle automation.

To know more about us, visit: www.cloudnowtech.com
Explore the Akku IAM platform: www.akku.work

Job Description:
The position is for a Junior Java Developer. The role involves development using essential Java skills: OOPS, Collections, multi-threading, SQL, and Spring Core. Knowledge of working in an Agile team with DevOps principles would be an additional advantage. The role also involves intensive interaction with the business and other technology groups; hence, strong communication skills and the ability to work under tight deadlines are necessary. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.
Responsibilities:
Developing and managing custom integration solutions
Working with Agile methodology and environment
Creating UML class diagrams
Performing source code management and versioning
Bringing together existing systems, with a focus on the integration of applications

Technology Stack:
Primary Skills: Core Java 1.8 and above; OOPS and multithreading; Spring Boot; Service-Oriented Architecture / Web Services (REST API); Hibernate and JPA; MySQL
Secondary Skills: Spring Framework, SQL, Agile development approach; markup languages like XML and JSON; JUnit, SMTP; Eclipse / IntelliJ; GitLab / version control tooling

Posted 6 days ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
Set up and administer SFTP servers on cloud-based VMs using chroot configurations and automate file transfers to S3-backed Glacier.
Manage SNS for alerting and notification integration.
Ensure cost optimization of AWS services through billing reviews and usage audits.
Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
Compute Services: Provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
Storage & Content Delivery: Manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
Networking & Connectivity: Design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover. Ensure proper functioning of network services such as TCP/IP and reverse proxies (e.g., NGINX).
Monitoring & Observability: Implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
Database Services: Deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, plus healthcare-specific FHIR database configurations.
Security & Compliance: Enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
GitOps: Apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
Perform rollback and hotfix procedures with minimal downtime.
Collaborate with developers to define release and deployment processes.
Manage and standardize build environments and release/deployment processes across dev, staging, and production.
Work cross-functionally with development and QA teams.
Lead incident postmortems and drive continuous improvement.
Perform root cause analysis and implement corrective/preventive actions for system incidents.
Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
Ensure on-time patching.
Mentor junior DevOps engineers.
Requirements

Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
5+ years of proven DevOps engineering experience in cloud-based environments.
Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
Strong scripting and automation mindset.
Solid experience with Linux system administration and networking.
Excellent communication and documentation skills.
Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
Experience with infrastructure as code tools such as Terraform or CloudFormation.
Experience with GitHub Actions is a plus.
Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
Experience working in regulated environments (e.g., healthcare or fintech).
Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits:
Health insurance
Hybrid working mode
Provident Fund
Parental leave
Yearly bonus
Gratuity
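The "automate file transfers to S3-backed Glacier" responsibility above is commonly implemented not with scripts that copy objects, but with an S3 lifecycle rule. A sketch that builds the configuration dict in the shape boto3's `put_bucket_lifecycle_configuration` expects as its `LifecycleConfiguration` argument (the API call itself is omitted, and the prefix and rule naming are illustrative assumptions):

```python
def glacier_lifecycle(prefix, transition_days=30, expire_days=365):
    """Build an S3 lifecycle configuration that transitions objects
    under `prefix` to the GLACIER storage class after
    `transition_days` and expires them after `expire_days`."""
    if expire_days <= transition_days:
        raise ValueError("objects must expire after they transition")
    return {
        "Rules": [{
            "ID": f"glacier-{prefix.strip('/') or 'all'}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            # Move to cold storage first, then delete.
            "Transitions": [{"Days": transition_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_days},
        }]
    }

if __name__ == "__main__":
    import json
    # Hypothetical prefix matching the SFTP-upload use case in the posting.
    print(json.dumps(glacier_lifecycle("sftp-archive/"), indent=2))
```

Applying it would be a single call such as `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=cfg)`; S3 then handles the transitions server-side, with no recurring transfer job to operate.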

Posted 6 days ago

Apply