
7730 Terraform Jobs - Page 20

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

12 - 17 Lacs

Gurugram

Work from Office


Date: 9 Jun 2025 | Location: Gurgaon, HR, IN | Company: Alstom | Req ID: 487548

RESPONSIBILITIES
- Design, develop, and maintain cloud infrastructure using Azure and MS Fabric: architect and implement cloud solutions leveraging Microsoft Azure services and MS Fabric; ensure the infrastructure supports scalability, reliability, performance, and cost-efficiency.
- Integrate containerization and orchestration technologies: utilize Kubernetes and Docker for containerization and orchestration; manage and optimize Azure Kubernetes Service (AKS) deployments.
- Implement DevOps practices and automation: develop CI/CD pipelines to automate code deployment and infrastructure provisioning; use automation tools and Terraform to streamline operations and reduce manual intervention.
- Collaborate with development teams to build and deploy cloud-native applications: provide guidance and support for designing and implementing cloud-native applications; ensure applications are optimized for cloud environments.
- Monitor, troubleshoot, and optimize cloud infrastructure: implement monitoring and alerting systems to ensure infrastructure health; optimize resource usage and performance to reduce costs and improve efficiency; develop cost optimization strategies for efficient use of Azure resources; troubleshoot and resolve issues quickly to minimize impact on users; ensure high availability and uptime of applications.
- Enhance system security and compliance: implement security best practices and ensure compliance with industry standards; perform regular security assessments and audits.

Qualifications & Skills:

EDUCATION
University background: Bachelor's/Master's degree in computer science, information systems, or a related engineering discipline.

BEHAVIORAL COMPETENCIES
- Outstanding technical leader with proven hands-on experience in the configuration and deployment of DevOps toward successful delivery.
- Innovative and aligned with new product development technologies and methods.
- Excellent communication skills; able to guide, influence, and convince others in a matrix organization.
- Demonstrated teamwork and collaboration in a professional setting; proven capabilities with worldwide teams.
- Team player; prior experience working with European customers is preferable but not mandatory.
- Ref: Leadership Dimensions

TECHNICAL COMPETENCIES & EXPERIENCE
- 5 to 10 years in IT and/or digital companies or startups
- Knowledge of Ansible
- Extensive knowledge of cloud technologies, particularly Microsoft Azure and MS Fabric
- Proven experience with containerization and orchestration tools such as Kubernetes and Docker
- Experience with Azure Kubernetes Service (AKS), Terraform, and DevOps practices
- Strong automation skills, including scripting and the use of automation tools
- Proven track record in designing and implementing cloud infrastructure
- Experience in optimizing cloud resource usage and performance
- Proven experience with Azure cost optimization strategies
- Proven experience ensuring application uptime and rapid troubleshooting in case of failures
- Strong understanding of security best practices and compliance standards
- Proven experience providing technical guidance to teams and managing customer expectations
- Proven track record of driving decisions collaboratively, resolving conflicts, and ensuring follow-through
- Extensive knowledge of software development and system operations
- Proven experience in designing stable solutions, testing, and debugging
- Demonstrated technical guidance with worldwide teams
- Proficient in English; proficiency in French is a plus

Performance Measurements
- On-Time Delivery (OTD)
- Infrastructure Reliability and Availability
- Cost Optimization and Efficiency
- Application Uptime and Failure Resolution

You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!

Important to note: as a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.

Job Type: Experienced
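The cost-optimization duty in this role (finding underused Azure resources) often starts with a simple rightsizing pass. The sketch below is purely illustrative, not Alstom's process: the data shape, thresholds, and sample counts are assumptions.

```python
# Hypothetical rightsizing check: flag VMs whose average CPU utilization
# stays below a threshold, given enough metric samples to judge.
# Data source, threshold, and minimum sample count are illustrative.

def rightsizing_candidates(vms, cpu_threshold=20.0, min_samples=24):
    """Return names of VMs whose average CPU % is under the threshold.

    vms: list of dicts like {"name": str, "cpu_samples": [float, ...]}
    """
    flagged = []
    for vm in vms:
        samples = vm.get("cpu_samples", [])
        if len(samples) < min_samples:
            continue  # too little data to draw a conclusion
        if sum(samples) / len(samples) < cpu_threshold:
            flagged.append(vm["name"])
    return flagged

fleet = [
    {"name": "web-01", "cpu_samples": [5.0] * 48},   # mostly idle
    {"name": "db-01", "cpu_samples": [65.0] * 48},   # busy
    {"name": "new-01", "cpu_samples": [1.0] * 3},    # too new to judge
]
print(rightsizing_candidates(fleet))  # -> ['web-01']
```

In practice the samples would come from a metrics backend rather than a literal list, and any downsize would still be reviewed by a human before applying.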

Posted 2 days ago

Apply

5.0 - 10.0 years

19 - 22 Lacs

Bengaluru

Work from Office


Date: 24 May 2025 | Location: Bangalore, KA, IN | Company: Alstom | Req ID: 484209

We create smart innovations to meet the mobility challenges of today and tomorrow. We design and manufacture a complete range of transportation systems, from high-speed trains to electric buses and driverless trains, as well as infrastructure, signalling, and digital mobility solutions. Joining us means joining a truly global community of more than 75,000 people dedicated to solving real-world mobility challenges and achieving international projects with sustainable local impact.

JOB PURPOSE: Reporting to the Compute & Cloud Design Manager (Design Authority), you will help build the Alstom Transport strategy, design compute & cloud services (especially public cloud services), integrate business solutions into the Alstom infrastructure landscape, and participate in any merger/acquisition activities the organization undertakes.

YOUR JOB: Key Responsibilities & Accountabilities

Strategy:
- Collect business demands from IS&T Business Partners.
- Evaluate new technology solutions for integration into the roadmap, in collaboration with Business Partners, Projects, Operations, and other Design Authority members. The scope includes all hosting services (housing, server, storage, backup, systems, middleware, tooling), especially public cloud services.
- Initiate projects as identified in strategic roadmaps.
- Define the service catalog.
- Define and follow the technological standards roadmap.
- Participate actively in IS&T innovation initiatives.
- Contribute to and validate project scoping, general design, and the business case; ensure business-case follow-up (value realization).
- Ensure handover to project teams; be involved in gate validations.
- Help the service architects manage changes, evaluate their impacts, and treat them.

MAIN CHALLENGES OF THE ROLE:
- Design, promote, and help adopt public cloud services (IaaS, PaaS, etc.).
- Provide technical assistance on business solutions within the Alstom IT environment.
- Contribute to the consolidation of the infrastructure cloud architect role and gain credibility with the businesses and within the IS&T organization.
- Be a key actor and collaborate with the whole compute & cloud team located worldwide.

PROFILE: To be considered for this role, candidates need to demonstrate the following skills, experience, and attributes:
- Graduated with an engineering degree, preferably in information technology, with more than 5 years of experience in IS&T architecture, especially in hosting and cloud services.
- Very good technical skills across different topics, together with curiosity; keen knowledge of the top public cloud providers is required.
- Deep knowledge of SAP infrastructure design.
- Certification as an Azure solution architect (AZ-305) is mandatory.
- Scripting: PowerShell, Shell, Bash, Ansible, Terraform
- OS: Windows Desktop, RHEL, Ubuntu
- DevOps tools: GitLab CI, Azure DevOps Pipelines, Terraform, Packer, Ansible, Git, REST API
- Good communication, efficiency, and ability to execute; result-oriented, with intellectual curiosity and creativity.
- Fluent in English; eager to work with autonomy in an international environment and to face the challenges of the new Alstom.

An agile, inclusive, and responsible culture is the foundation of our company, where diverse people are offered excellent opportunities to grow, learn, and advance in their careers. We are committed to encouraging our employees to reach their full potential, while valuing and respecting them as individuals.

Job Type: Experienced

Posted 2 days ago

Apply

3.0 - 7.0 years

20 - 30 Lacs

Hyderabad

Hybrid


Role Summary
We're seeking an experienced Cloud Security Engineer with strong expertise in Azure and GCP platforms. In this role, you'll work at the intersection of cybersecurity and cloud engineering, focusing on implementing security recommendations from Cloud Security Posture Management (CSPM) and Cloud-Native Application Protection Platform (CNAPP) solutions.

Key Responsibilities
- Analyze and prioritize security findings from CSPM and CNAPP tools across Azure, GCP, and AWS environments
- Coordinate and execute remediation activities with cloud engineering teams to address identified vulnerabilities
- Lead the deployment, configuration, and maintenance of CSPM and CNAPP solutions
- Develop and implement security automation to streamline remediation processes
- Create and maintain cloud security documentation, including policies, procedures, and architectural diagrams
- Participate in security incident response for cloud-related events
- Provide cloud security expertise during new service deployments and architecture reviews
- Stay current with evolving cloud security best practices, threats, and compliance requirements

Qualifications
- 4+ years of experience in cloud security across Azure and GCP platforms
- Demonstrable experience with CSPM and CNAPP tools and methodologies
- Strong understanding of cloud-native security controls, including IAM, encryption, network security, and logging/monitoring
- Proficiency in security automation using tools like Terraform, Azure ARM templates, or GCP Deployment Manager
- Experience with Kubernetes and other containerisation technologies
- Experience with cloud security frameworks and compliance standards (e.g., CIS, NIST, SOC2)
- Strong knowledge of DevSecOps principles and practices
- Excellent communication skills to effectively coordinate with various technical teams

Preferred Skills
- Relevant security certifications (e.g., CCSP, Azure Security Engineer, GCP Professional Cloud Security Engineer)
- Experience with container security and Kubernetes environments
- Familiarity with cloud security APIs and CLI tools
- Background in security architecture or engineering
- Experience with cloud infrastructure as code (IaC) security scanning
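The "analyze and prioritize security findings" responsibility usually means ranking CSPM output before handing it to remediation teams. Here is a minimal, hedged sketch: the severity scale and exposure multiplier are assumptions for the example; real CSPM/CNAPP tools ship their own risk scoring.

```python
# Toy prioritization of CSPM findings: internet-exposed, higher-severity
# issues float to the top of the remediation queue. Scoring weights are
# illustrative assumptions, not a product's actual algorithm.

SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def prioritize(findings):
    """Sort findings so the riskiest (severity x exposure) come first."""
    def score(f):
        base = SEVERITY.get(f["severity"], 0)
        return base * (2 if f.get("internet_exposed") else 1)
    return sorted(findings, key=score, reverse=True)

findings = [
    {"id": "F1", "severity": "high", "internet_exposed": False},
    {"id": "F2", "severity": "medium", "internet_exposed": True},
    {"id": "F3", "severity": "critical", "internet_exposed": True},
]
print([f["id"] for f in prioritize(findings)])  # -> ['F3', 'F2', 'F1']
```

A real pipeline would also weigh asset criticality, compliance mappings (CIS/NIST), and exploitability data, but the triage shape is the same.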

Posted 2 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join our digital revolution in NatWest Digital X
In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description
Join us as a Software Engineer
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It's a chance to hone your existing technical skills and advance your career
- We're offering this role at associate level

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure, and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS
- Develop and maintain Helm charts for deploying containerized applications
- Build and manage Docker images and registries for microservices
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation)
- Monitor and troubleshoot Kubernetes workloads and cluster health
- Support CI/CD pipelines for containerized applications
- Collaborate with development and DevOps teams to ensure seamless application delivery
- Ensure security best practices are followed in container orchestration and cloud environments
- Optimize performance and cost of cloud infrastructure

The skills you'll need
You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch
- Strong expertise in Kubernetes architecture, networking, and resource management
- Proficiency in Docker and container lifecycle management
- Experience in writing and maintaining Helm charts for complex applications
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions
- Solid understanding of Linux systems, shell scripting, and networking concepts
- Experience with monitoring tools like Prometheus, Grafana, or Datadog
- Knowledge of security practices in cloud and container environments

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer
- Experience with service mesh technologies (e.g., Istio, Linkerd)
- Familiarity with GitOps practices and tools like ArgoCD or Flux
- Experience with logging and observability tools (e.g., ELK stack, Fluentd)
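One small habit behind the Helm/Docker duties above is linting deployment values for unpinned image tags before release. The sketch below is a generic illustration; the `images` key and chart layout are invented for the example, not a real chart's schema.

```python
# Illustrative pre-deploy check: reject container image references that
# omit a tag or use ':latest', so rollbacks stay reproducible.
# The values-dict shape here is an assumption for demonstration.

def unpinned_images(values):
    """Return image references that are missing a tag or use ':latest'."""
    bad = []
    for ref in values.get("images", []):
        _name, _sep, tag = ref.partition(":")
        if not tag or tag == "latest":
            bad.append(ref)
    return bad

chart_values = {"images": ["nginx:1.27.0", "redis", "app:latest"]}
print(unpinned_images(chart_values))  # -> ['redis', 'app:latest']
```

In a CI pipeline this kind of check would run against the rendered chart (e.g. the output of `helm template`) rather than a hand-built dict.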

Posted 2 days ago

Apply

7.0 - 12.0 years

20 - 25 Lacs

Bengaluru

Work from Office


Your Career
As a Professional Services Engineer, you'll get into the details of our platform. You will push the buttons, flip the levers, turn the knobs. You do not shy away from difficult challenges as they relate to cybersecurity, implementations, and integrations. You are the technical authority and will interact directly with our customers to help them secure their digital environments. The customers are counting on you to perform this work and train their staff. And while experience on the company's platform is desired, even more important is having a solid foundation in networking and security technologies.

Qualifications:
- Proof of 7+ years of professional, hands-on operational experience in the field of Cloud, DevOps, or SysOps
- Experience with at least two of the following clouds: AWS, Azure, GCP, OCI, including advanced cloud networking
- Experience in one or more cloud security areas such as Shift Left, CSPM, CWP, KSPM, CIEM, DSPM, and AI-SPM
- Understanding of container and container orchestration technologies such as Docker, Kubernetes, and OpenShift
- Experience with at least one of the IaC automation tools (e.g. Terraform, CloudFormation, ARM templates)
- Knowledge of CI/CD tools (IDEs, GitLab, GitHub, Jenkins, CircleCI, etc.)
- Knowledge of a scripting language (Python, Bash)
- Background in the security domain and cloud security is highly preferred
- Experience working with customers as a consultant or team lead on CSPM/CWP/CNAPP products

Your Impact
- Design and integrate/deploy Palo Alto Networks Cloud Security (Prisma) solutions into the customer's environment, in either public cloud (AWS/Azure/GCP) or private cloud (Kubernetes, OpenStack) environments
- Be at the forefront of all Palo Alto Networks cloud security technologies
- Build custom security policies and application signatures configured for our clients' needs
- Progress and uphold expertise in deploying advanced Palo Alto Networks features and functionality
- Analyze logs and events from the solution to perform initial troubleshooting and issue identification
- Work with our Technical Assistance Center to troubleshoot and diagnose support cases
- Ensure client needs are met and deliverables are produced on time, according to the specified project deliverables and scope

If found suitable, please share your updated resume at Dinak.raj@infinitylabs.in and we will get in touch with you if your expertise meets our expectations.

Kind Regards,
Dinaa

Posted 2 days ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid


Key Responsibilities:
1. Collaborate with development teams to design, develop, and maintain infrastructure for our highly available and scalable applications.
2. Automate processes using Python scripting to streamline the deployment and monitoring of our applications.
3. Monitor and manage cloud infrastructure on AWS, including EC2, S3, RDS, and Lambda.
4. Implement and manage CI/CD pipelines for automated testing and deployment of applications.
5. Troubleshoot and resolve production issues, ensuring high availability and performance of our systems.
6. Collaborate with cross-functional teams to ensure security, scalability, and reliability of our infrastructure.
7. Develop and maintain documentation for system configurations, processes, and procedures.

Key Requirements:
1. Bachelor's degree in Computer Science, Engineering, or a related field.
2. 5+ years of experience in a DevOps/SRE role, with a strong focus on automation and infrastructure as code.
3. Proficiency in Python scripting for automation and infrastructure management.
4. Hands-on experience with containerization technologies such as Docker and Kubernetes.
5. Strong knowledge of cloud platforms such as AWS, including infrastructure provisioning and management.
6. Experience with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack.
7. Knowledge of CI/CD tools like Jenkins or GitHub Actions.
8. Familiarity with configuration management tools such as Ansible, Puppet, or Chef.
9. Strong problem-solving and troubleshooting skills, with an ability to work in a fast-paced and dynamic environment.
10. Excellent communication and collaboration skills to work effectively with cross-functional teams.
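"Automate processes using Python scripting" in an SRE context frequently comes down to making flaky operational calls (health checks, deploy hooks) resilient. A minimal retry-with-exponential-backoff helper, with illustrative parameter defaults rather than any team's standard:

```python
# Hedged sketch of a retry helper for transient operational failures.
# Attempts and base delay are illustrative defaults.
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on exceptions."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** i))

# Simulated flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # -> ok
```

Production versions usually add jitter, retry only on specific exception types, and emit metrics on each attempt.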

Posted 2 days ago

Apply

5.0 - 10.0 years

20 - 22 Lacs

Navi Mumbai

Work from Office


We are looking for an immediate joiner for the below position.

GCP Cloud Engineer
Exp: 5 to 10 years
Location: Airoli (Navi Mumbai)

Job responsibilities
- Review the infra-architecture of applications against the well-architected framework.
- Design and execute the overall strategic roadmap for the cloud architecture.
- Deploy infrastructure for the cloud, and guide other teams in deploying infra and applications.
- Guide and support application teams through the cloud journey from day 0 to go-live.
- Collaborate with engineering and development teams to evaluate and identify optimal cloud solutions.
- Define standards (and/or select cloud vendor products) for the overall architecture in coordination with the solution architects and engineering leads.
- Continue improving cloud product reliability, availability, maintainability, and cost/benefit, incl. developing fault-tolerant tools to ensure general robustness of the cloud infra.
- Manage capacity across public and private cloud resource pools, incl. automating scale-down/up of environments.
- Support developers in optimizing and automating cloud engineering activities, e.g. real-time migration, provisioning and deployment, etc.
- Provide inputs to IT financial management for cloud: costs associated with capacity build-out, forecasts for cloud IT investments, etc.
- Develop and maintain cloud solutions in accordance with best practices.
- Ensure efficient functioning of cloud resources/functions in accordance with company security policies and best practices in cloud security.
- Educate teams on the implementation of new cloud-based initiatives.
- Employ exceptional problem-solving skills, with the ability to see and solve issues before they affect business productivity.
- Orchestrate and automate cloud-based platforms throughout the company.

Requirements
- 10+ years of experience in engineering infrastructure design, with 4+ years in cloud engineering roles.
- Expertise in developing Terraform code.
- Expertise in deploying GCP infrastructure and GCP services.
- Expertise in troubleshooting deployment/network-related issues for cloud infrastructure.
- Experience in deploying using Terraform Enterprise and GitHub.
- Basic familiarity with network and security features, e.g. cloud network topology, BGP, routing, TCP/IP, DNS, SMTP, HTTPS, security, guardrails, etc.
- High-availability engineering experience (regions, availability zones, data replication, clustering).
- Awareness of open-source tools and scripting languages (PowerShell, Shell).
- Deep understanding of software development lifecycles and cloud economics, incl. knowledge of consumption-driven TCO.
- Good knowledge of the security implications of public and private cloud infra design.
- Azure certifications preferred.
- Basic database experience, including knowledge of SQL and NoSQL, and related data stores such as Postgres.
- Good communication and collaboration skills. Client management skills.
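A common guardrail in the Terraform-heavy workflow this role describes is scanning a machine-readable plan (`terraform show -json tfplan`) for destructive actions before applying. This sketch parses the plan's `resource_changes` array; the sample plan dict below is a hand-made stand-in, not real `terraform` output.

```python
# Illustrative pre-apply review: list resources a Terraform plan would
# delete (including delete+create replacements), using the JSON plan's
# resource_changes / change.actions structure.
import json

def destructive_changes(plan):
    """Return addresses of resources the plan would delete or replace."""
    hits = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if "delete" in actions:
            hits.append(rc["address"])
    return hits

# Hand-made stand-in for `terraform show -json tfplan` output.
plan_json = json.dumps({
    "resource_changes": [
        {"address": "google_compute_instance.web",
         "change": {"actions": ["delete", "create"]}},
        {"address": "google_storage_bucket.logs",
         "change": {"actions": ["update"]}},
    ]
})
print(destructive_changes(json.loads(plan_json)))
# -> ['google_compute_instance.web']
```

Wired into CI, a non-empty result can fail the pipeline or require a manual approval before `terraform apply` runs.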

Posted 2 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the role
Network Systems Engineers are responsible for supporting quality solutions for our internal consumers. They provide BAU support toward the overall Tesco network objectives, improving the efficiency of services deployed in the infra. Systems Engineers are expected to have knowledge of the latest technology trends, plus expertise and practical knowledge of the types of systems and infrastructure used within Tesco.

You will be responsible for
- Collaborating with teams across Tesco Technology and vendors for service and project delivery.
- Owning and managing incidents to resolution.
- Identifying and driving service improvements with operations engineering.
- Owning all technical aspects of network infrastructure and leading the automation drive.
- Delivering programmes with business and/or technical risk assessments.
- Possessing knowledge of infrastructure engineering and best practices.
- Leading and representing network operations for transformational projects.
- Driving operational efficiency with process simplification and automation.
- Leading major incidents to resolution, representing network operations to suppliers and internal stakeholders, and following up with Root Cause Analysis and Post Incident Reviews.
- Identifying service improvement opportunities and developing business cases.
- Initiating and leading Proofs of Concept for new technologies and platforms.
- Understanding budgeting and procurement processes.
- Defining standards and procedures.

You will need
Must-have technical skills for this job role:
- Configuring and troubleshooting routing and switching infrastructure
- Configuring and troubleshooting security infrastructure: firewall, proxy, IDS/IPS, DDoS, WAF
- Configuring and troubleshooting load balancer infrastructure
- Thorough understanding of DNS, DHCP, and IPAM; configuration and troubleshooting of DDI infrastructure
- Working knowledge of external DNS providers
- Detailed understanding of the http/https protocols
- Working knowledge of Splunk: queries, reports, integration
- Scripting and automation experience

Good-to-have skills for this job role:
- Configuring and troubleshooting Akamai CDN properties
- Configuring and troubleshooting Akamai Web Application Firewall
- Automation using Terraform
- Working experience with Zscaler

Product knowledge relevant for the role: Palo Alto, Cisco ASA, Arista R&S, Zscaler, F5

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of three pillars: Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles: simple, fair, competitive, and sustainable.

- Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
- Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
- Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
- Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents, including parents or in-laws.
- Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
- Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
- Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
- Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 330,000 colleagues.

Tesco Technology
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles.

At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations, from identifying and authenticating customers, managing products, pricing, and promotions, to enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive retail platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Senior Data Engineer with a collaborative, "can-do" attitude who is committed and strives with determination and motivation to make their team successful; a Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot, and support ETL pipelines and the cloud infrastructure involved in the process, and will be able to support the visualizations team.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
- Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.
- Determine the solutions best suited to develop a pipeline for a particular data source.
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
- Be efficient in ETL/ELT development using Azure cloud services and Snowflake, testing, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions.
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets.
- Be proactive in stakeholder communication; mentor/guide junior resources through regular KT/reverse KT, help them identify production bugs/issues if needed, and provide resolution recommendations.

Job Requirements
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred.
- 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
- 5+ years of experience setting up and operating data pipelines using Python or SQL.
- 5+ years of advanced SQL programming: PL/SQL, T-SQL.
- 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
- 5+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data.
- 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions.
- 5+ years of experience defining and enabling data quality standards for auditing and monitoring.
- Strong analytical abilities and strong intellectual curiosity.
- In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts.
- Understanding of REST and good API design.
- Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks.
- Strong collaboration and teamwork skills; excellent written and verbal communication skills.
- Self-starter, motivated, and able to work in a fast-paced development environment. Agile experience highly desirable.
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Knowledge
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks.
- Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools.
- Strong experience in ETL/ELT development, QA, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
- ADF, Databricks, and Azure certification is a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake

Posted 2 days ago


4.0 - 6.0 years

9 - 13 Lacs

Pune

Work from Office


Role Description
Deutsche Bank's Cryptography Engineering and Solution department is part of the Chief Security Office (CSO), which determines the cryptography strategy for the bank and supports business partners in all questions around cryptography, including audit and regulatory support. We are currently looking for an IT Security Developer who will work on various strategic programs within Cryptography Engineering and Solution.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy.

Your key responsibilities
- Design, implement, and maintain CI/CD pipelines on GCP
- Continuously analyze and improve running services
- Work closely with the operations team to identify potential areas for further automation
- Support the operations team in incident management

Your skills and experience
- 4+ years of verifiable experience in a similar role or as an engineer, preferably with an IT security background
- Proven experience as a DevOps engineer, preferably with GCP
- Strong knowledge of cloud services, Kubernetes, Docker, and microservices architecture
- Proficiency in scripting languages such as Python, Bash, or similar
- Experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation
- An open mind willing to learn new concepts related to IT security/cryptography
- Experience working with Agile development methodologies
- Good interpersonal and communication skills with proficiency in English

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
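As an illustration of the stop-on-first-failure behaviour that CI/CD pipelines like those described here enforce, a rough Python sketch; the stage names and pass/fail results are invented for the example, not any real pipeline's API:

```python
# Hedged sketch of CI/CD stage semantics: run stages in order and abort on
# the first failure, so later stages (e.g. deploy) never run after a red test.

def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failing step."""
    results = {}
    for name, step in stages:
        ok = step()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break                      # later stages never run
    return results

stages = [
    ("lint",   lambda: True),
    ("test",   lambda: False),         # simulated failing unit tests
    ("deploy", lambda: True),          # skipped because "test" failed
]
print(run_pipeline(stages))
```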

Posted 2 days ago


15.0 years

0 Lacs

India

Remote


Designation: DevOps Architect
Experience: 15+ years
Location: Remote (India)

Role
Develop, execute, maintain, and improve procedures, automation scripts, and infrastructure implementations to support Sycamore SaaS Operations.

Education
Bachelor's degree, preferably in Computer Science, Electrical Engineering, Physics, Math, or another related discipline.

Essential skills
- Very strong Linux knowledge and troubleshooting skills
- Scripting using Bash, Python, PowerShell, etc.
- Kubernetes, Helm charts
- Terraform, Ansible
- Windows Terminal Services, AD, LDAP
- Change, problem, and incident management
- Implementation awareness of vulnerability/penetration testing and security
- Tools and frameworks used for monitoring, performance management, and logging
- CI/CD pipelines
- SRE, including Datadog

Desired skills
- Hands-on experience in cloud technology (AWS, Azure; AWS preferred)
- Strong networking skills

Certifications, if any: RHEL, Kubernetes, AWS

Summary
Work with talented DevOps and cloud operations engineers and architects to deliver Sycamore SaaS product offerings to our bio-pharma customers using exciting, cutting-edge technologies. Develop, execute, maintain, and improve procedures, automation scripts, and infrastructure implementations to support Sycamore SaaS Operations.

Roles & Responsibilities
- Provide technical expertise and leadership to SaaS Operations and Production Operations teams as needed
- Help implement the Cloud Operations team's goals and deliverables as determined by Sycamore leadership
- Ensure smooth operation of Sycamore SaaS products; take complete ownership of customer implementations, including SLAs and SLOs
- Automate, enhance, and maintain critical processes in Cloud Operations, such as change control, monitoring, and alerting
- Drive critical SaaS Operations processes such as change control, problem and incident management, and reporting, as well as key tools for monitoring and alerting
- Drive disaster recovery and failover procedures, training, testing, and team readiness
- Coordinate focus groups across all teams on process and technical improvements that lead to better stability and reliability
- Support continuous improvement in SaaS Operations by developing platform services and tooling for modern cloud operations (metrics monitoring, CI/CD pipelines, etc.) and improving automation of provisioning, deployment, monitoring, alerting, and escalation
- Support secure operations by implementing best-in-class recommendations; represent Cloud Operations in InfoSec meetings and develop and drive secure procedures
- Carry out ongoing production operations activities with precision and quality
- Define, build, and deliver a high-quality SaaS platform: work with third-party vendors and partners to help develop a complete solution set on the SaaS platform
- Help obtain and maintain various certifications
- Be a good team player, and a leader when needed, for a high-performance cloud/SaaS delivery team: review personal/team performance and quality, manage operations and operational issues, establish a culture of high performance, ownership, delivery focus, and continuous improvement, and implement procedures and policies that ensure high-quality SaaS operations with appropriate management controls
- Act as an internal contact for platform services issues for a customer
- Work with cross-functional departments: Sales, Professional Services, Customer Support, Engineering, and QA

Essential Experience
- Experience implementing, managing, maintaining, and decommissioning complex cloud-based information system components in a secure and controlled manner
- Experience coordinating cross-functional teams, such as support, escalation, and engineering/software teams, to successfully address product issues
- Strong understanding of how to build, scale, and manage complex multi-product/service environments, building lean, automated, scalable support structures rather than labor-intensive environments
- Strong innovation mindset, analytical skills, excellent oral and written communication skills, and experience effectively communicating project/program mission and objectives
- A practical customer-service attitude and the ability to lead a team in resolving difficult customer situations

Desired Experience
- Experience working with cross-functional teams in a customer-service environment
- Mentoring and training team members
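The SLA/SLO ownership described in this role rests on simple error-budget arithmetic. A hedged back-of-envelope sketch, assuming a 30-day window; the target figures are illustrative:

```python
# Error-budget math behind an availability SLO: a 99.9% monthly target
# leaves roughly 43.2 minutes of allowed downtime in a 30-day window.

def error_budget_minutes(slo_target, window_days=30):
    """Minutes of allowed downtime in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

budget = error_budget_minutes(0.999)
print(round(budget, 1))
```

Teams then track consumed downtime against this budget to decide when to slow feature work in favour of reliability.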

Posted 2 days ago


0 years

0 Lacs

India

Remote


Site Reliability Engineer / Observability Engineer
Public Cloud - Offerings and Delivery – Workforce Mgmt & Delivery Ops / Full-Time / Remote

Rackspace is building up its Professional Services Center of Excellence on application performance monitoring suites. If you enjoy solving complex business problems and want to help build the next generation of modern applications for our customers, helping them understand the connections between application performance, user experience, and business outcomes to create amazing customer experiences, with modern interpretations of SRE and Observability using Datadog, New Relic, AppDynamics, or Dynatrace and their suites of products and integrations, then join us! Rackspace enables businesses to accelerate digital transformation through innovative data and integration solutions that help you fix problems quickly, maintain complex systems, and improve code. We believe Datadog, AppDynamics, and New Relic will be large contributors to what we do, and we want talented, creative, and thoughtful individuals to join our team to shape Observability Engineering for our customers.

You will
- Work with customers to implement Observability solutions
- Build and maintain scalable systems and robust automation that support engineering goals
- Develop and maintain monitoring tools, alerts, and dashboards to provide visibility into system health and performance
- Proactively gather and analyze both metric and log data from systems and applications to perform anomaly detection, performance tuning, capacity planning, and fault isolation
- Collaborate with development teams to implement and deploy new features and enhancements, ensuring they meet reliability, security, and performance standards
- Collaborate with team members to document and share solutions
- Maintain a deep understanding of the customer's business as well as their technical environment
- Identify performance bottlenecks and anomalous system behavior, and resolve the root cause of service issues

You have
- Bachelor's degree in engineering/computer science or equivalent
- Senior-level experience with site reliability engineering, DevOps, code-level application support and troubleshooting, AWS infrastructure design, implementation, and optimization, and automation for deployment, scaling, and reliability
- Experience with observability tools like Splunk, Datadog, SignalFx, etc.
- Experience deploying, maintaining, and supporting software applications/services in the AWS ecosystem
- A proactive approach to identifying problems and solutions
- Experience writing code in one or more interpreted languages such as Python, PHP, Perl, or Ruby, plus Linux shell
- Experience with Terraform or CloudFormation scripting
- Experience with configuration management tools like Ansible, Chef, or Puppet
- Experience with standard software development best practices and tools, such as code repositories (Git preferred)
- Experience executing in an agile software development environment
- Good understanding of pricing/cost models across AWS services, especially compute, storage, and database offerings
- A clear understanding of network and system management solutions
- Excellent organizational and project management skills
- Excellent communication, critical thinking, and analytical skills

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers, and deliver the future.

More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe.

We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
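The metric anomaly detection this role describes often starts with something as simple as a z-score test against a baseline window. A rough sketch; the latency series and the 3-sigma threshold are assumptions for illustration:

```python
# Flag metric points more than z_threshold standard deviations from the
# mean of a baseline window. A deliberately minimal stand-in for what
# Datadog/New Relic anomaly monitors do with far more sophistication.
from statistics import mean, stdev

def anomalies(series, baseline_len=10, z_threshold=3.0):
    """Return indices of points deviating sharply from the baseline window."""
    base = series[:baseline_len]
    mu, sigma = mean(base), stdev(base)
    flagged = []
    for i, value in enumerate(series[baseline_len:], start=baseline_len):
        if sigma and abs(value - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

latencies = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 101, 450, 99]
print(anomalies(latencies))   # the 450 ms spike stands out
```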

Posted 2 days ago


7.0 - 10.0 years

9 - 19 Lacs

Ahmedabad

Work from Office


Job Description for Sr. DevOps Engineer:
We are seeking a highly skilled Senior DevSecOps/DevOps Engineer with extensive experience in cloud infrastructure, automation, and security best practices. The ideal candidate must have 7+ years of overall experience, with at least 3+ years of direct, hands-on Kubernetes management experience, and strong expertise in building, managing, and optimizing Jenkins pipelines for CI/CD workflows, with a focus on incorporating DevSecOps practices into the pipeline.

Key Responsibilities:
- Design, deploy, and maintain Kubernetes clusters in cloud and/or on-premises environments
- Build and maintain Jenkins pipelines for CI/CD, ensuring secure, automated, and efficient delivery processes
- Integrate security checks (static code analysis, image scanning, etc.) directly into Jenkins pipelines
- Manage Infrastructure as Code (IaC) using Terraform, Helm, and similar tools
- Develop, maintain, and secure containerized applications using Docker and Kubernetes best practices
- Implement monitoring, logging, and alerting using Prometheus, Grafana, and the ELK/EFK stack
- Implement Kubernetes security practices, including RBAC, network policies, and secrets management
- Lead incident response efforts, root cause analysis, and system hardening initiatives
- Collaborate with developers and security teams to embed security early in the development lifecycle (shift-left security)
- Research, recommend, and implement best practices for DevSecOps and Kubernetes operations

Required Skills and Qualifications:
- 7+ years of experience in DevOps, site reliability engineering, or platform engineering roles
- 3+ years of hands-on Kubernetes experience, including cluster provisioning, scaling, and troubleshooting
- Strong expertise in creating, optimizing, and managing Jenkins pipelines for end-to-end CI/CD
- Experience in containerization and orchestration: Docker and Kubernetes
- Solid experience with Terraform, Helm, and other IaC tools
- Experience securing Kubernetes clusters, containers, and cloud-native applications
- Scripting proficiency (Bash, Python, or Golang preferred)
- Knowledge of service meshes (Istio, Linkerd) and Kubernetes ingress management
- Hands-on experience with security scanning tools (e.g., Trivy, Anchore, Aqua, SonarQube) integrated into Jenkins
- Strong understanding of IAM, RBAC, and secret management systems like Vault or AWS Secrets Manager
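Integrating image-scan results into a Jenkins stage usually reduces to a severity gate like the one sketched below. The findings structure is a deliberate simplification for illustration, not any scanner's real report schema:

```python
# Shift-left gate of the kind wired in after an image scan: fail the build
# when findings at or above a severity threshold exist. The CVE ids and
# report shape are hypothetical.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate(findings, fail_at="HIGH"):
    """Return (passed, blocking) where blocking lists findings at/above fail_at."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

report = [
    {"id": "CVE-2024-0001", "severity": "MEDIUM"},
    {"id": "CVE-2024-0002", "severity": "CRITICAL"},
]
passed, blocking = gate(report)
print(passed, [f["id"] for f in blocking])
```

In a Jenkinsfile, a non-zero exit from a script like this is what actually fails the stage.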

Posted 2 days ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role summary:
Reporting to the Lead Service Reliability Engineer, the Service Reliability Engineer is part of an enablement team that provides expertise and support to specialist teams designing, developing, and running customer-facing products as well as internal systems. On a day-to-day basis, the Service Reliability Engineer will be responsible for the observability of Creditsafe's technology estate and will be involved in the monitoring and escalation of events. A large part of the role will involve improving the monitoring system and processes, including integrating AI capabilities to reduce noise and improve incident mean time to repair.

Role objectives:
- Ensure our products are ready for life in production
- Embed reliability, observability, and supportability as features across the lifecycle of solution development
- Help guide our engineering teams' transformation
- Raise the bar for engineering quality
- Deliver higher service availability
- Improve Creditsafe's monitoring capabilities using AI technologies

Personal qualities:
- Trustworthy and quick thinking
- Optimistic and resilient: breed positivity and don't give up on the "right thing"
- Leadership and negotiation: sell, not tell; build support and consensus
- Creativity and high standards: develop imaginative solutions without cutting corners
- Fully rounded: experience of development, support, security, operations, architecture, and sales

As a Service Reliability Engineer, you should have:
- A track record of troubleshooting and resolving issues in live production environments and implementing strategies to eliminate them
- Experience in a technical operations support role
- Demonstrable knowledge of AWS CloudWatch: creating dashboards, metrics, and log analytics
- Knowledge of one or more high-level programming languages such as Python, Node, or C#, and shell scripting experience
- Proactive monitoring and alert validation: monitor critical infrastructure and services; validate alerts by analyzing logs, performance metrics, and historical data to reduce false positives
- Incident response and troubleshooting: perform troubleshooting, escalate unresolved issues to the appropriate technical teams, and actively participate in incident management and communication
- Knowledge of AI/ML frameworks and tools for building operational intelligence solutions and automating repetitive SRE tasks
- Continuous improvement: improvement of monitoring solutions, reduction of alert noise, and implementation of AI technologies, including predictive analytics for system health, automated root cause analysis, intelligent alert correlation to reduce noise and false positives, and hands-on experience with AI-powered monitoring solutions for anomaly detection and automated incident response
- Strong ability and enthusiasm to learn new technologies quickly, particularly emerging AI/ML technologies in the DevOps, platform, and SRE space
- Proficiency in container-based environments, including Docker and Amazon ECS
- Experience automating infrastructure using "as code" tooling
- Strong OS skills (Windows and Linux)
- Understanding of relational and NoSQL databases
- Experience in a hybrid cloud-based infrastructure
- Understanding of infrastructure services, including DNS, DHCP, LDAP, virtualization, server monitoring, and cloud services (Azure and AWS)
- Knowledge of continuous integration and continuous delivery, testing methodologies, TDD, and agile development methodologies
- Experience using CI/CD technologies such as Terraform and Azure DevOps Pipelines
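The alert-noise reduction this role emphasizes typically begins with fingerprint-based correlation: alerts sharing a host/check fingerprint within a time window collapse into one incident. A hedged sketch; the field names and the 5-minute window are assumptions, not any product's data model:

```python
# Collapse repeated alerts for the same (host, check) fingerprint arriving
# within the window into a single incident, so responders see 1 page, not N.

def correlate(alerts, window_seconds=300):
    """Group alerts into incidents, merging repeats inside the window."""
    incidents = []
    open_by_key = {}                   # most recent open incident per fingerprint
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["host"], alert["check"])
        current = open_by_key.get(key)
        if current and alert["ts"] - current["last_ts"] <= window_seconds:
            current["count"] += 1      # duplicate within the window: merge
            current["last_ts"] = alert["ts"]
        else:                          # first alert, or window expired: new incident
            current = {"key": key, "last_ts": alert["ts"], "count": 1}
            open_by_key[key] = current
            incidents.append(current)
    return incidents

alerts = [
    {"ts": 0,   "host": "web-1", "check": "cpu"},
    {"ts": 60,  "host": "web-1", "check": "cpu"},   # merged into the first incident
    {"ts": 900, "host": "web-1", "check": "cpu"},   # outside the window: new incident
]
result = correlate(alerts)
print(len(result), [i["count"] for i in result])
```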

Posted 2 days ago


5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office


Job Summary
This position is responsible for the evaluation, design, operational maintenance, protection, and support of various automation initiatives (IaC) within the WAMS (Web Applications and Messaging Systems) department. It uses supplied specifications and standardized tools for work assignments of moderate risk, impact, complexity, and scope. An Infrastructure as Code (IaC) Engineer is responsible for automating the management and provisioning of computing infrastructure using code. This role is crucial for ensuring consistency, efficiency, and scalability in IT operations.

Responsibilities:
- Design and develop automation scripts: create scripts and templates to automate the creation, management, and updating of IT infrastructure
- Collaborate with teams: work closely with software development and IT operations teams to align infrastructure with business goals
- Implement CI/CD pipelines: configure and manage continuous integration and continuous deployment (CI/CD) pipelines to ensure reliable and quick code deployment
- Monitor and troubleshoot: continuously monitor system performance, identify issues, and implement improvements to enhance stability and security
- Version control: manage version control systems (e.g., Git) to ensure infrastructure configurations are reproducible and scalable
- Documentation and knowledge sharing: maintain proper documentation and share knowledge with team members to promote best practices
- Stay updated: keep up to date with the latest trends and technologies in the DevOps and IaC industry

Qualifications:
- 5 years of experience in software development, system administration, or other IT roles
- Bachelor's degree or international equivalent; a degree in Computer Science, Information Systems, Mathematics, Statistics, or a related field is preferred
- Technical skills: proficiency in IaC tools such as Terraform and Ansible
- Good working knowledge of Git repositories and ADO (Azure DevOps)
- Good working knowledge of digital certificates and their automation
- Knowledge of cloud services and automation tools
- Development tools/languages: familiarity with tools like Java and shell scripting
- Operating systems: Red Hat Linux

Soft skills:
- Communication: ability to articulate complex technical scenarios in a straightforward manner to stakeholders at various levels
- Critical thinking: evaluation of design decisions, trade-offs, and potential future challenges
- Attention to detail: especially crucial for analyzing system design documentation, error messages, and complex message flows
- Teamwork: ability to collaborate with architects, developers, system administrators, and other roles involved in system administration and implementation
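At the heart of IaC work is the diff between declared configuration and observed state. A toy sketch of that drift check follows; real tooling such as `terraform plan` does this against provider APIs, and the resource maps here are hypothetical:

```python
# Compare declared (in-code) resources against observed (live) state and
# report drift: resources missing from the cloud, resources not under
# management, and resources whose attributes have changed.

def detect_drift(declared, observed):
    """Return resources that are missing, unmanaged, or changed vs. the code."""
    missing   = sorted(set(declared) - set(observed))
    unmanaged = sorted(set(observed) - set(declared))
    changed = sorted(
        name for name in set(declared) & set(observed)
        if declared[name] != observed[name]
    )
    return {"missing": missing, "unmanaged": unmanaged, "changed": changed}

declared = {"vm-a": {"size": "B2s"}, "vm-b": {"size": "B2s"}}
observed = {"vm-a": {"size": "B4ms"}, "vm-c": {"size": "B1s"}}
print(detect_drift(declared, observed))
```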

Posted 2 days ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our technical development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and strives with determination and motivation to make their team successful, and who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. It will help drive Circle K's next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
- Determine the solutions best suited to develop a pipeline for a particular data source
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Efficient ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets
- Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues and provide resolution recommendations

Job Requirements
- Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
- 3+ years of experience setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and intellectual curiosity
- In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
- Strong collaboration and teamwork skills; excellent written and verbal communication skills
- Self-starter, motivated, and able to work in a fast-paced development environment; Agile experience highly desirable
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools

Preferred Skills
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
- Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks, and Azure certifications are a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
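The data reconciliation and audit work named in this listing often starts with per-table row-count comparison between source and target. A minimal sketch with invented tables and counts; in practice both sides would come from COUNT(*) queries against the source system and the warehouse:

```python
# Compare source vs. target row counts per table after a load and flag
# any mismatch beyond a tolerance, the first line of a reconciliation audit.

def reconcile(source_counts, target_counts, tolerance=0):
    """Return tables whose loaded row counts drift beyond the tolerance."""
    mismatches = {}
    for table, expected in source_counts.items():
        actual = target_counts.get(table, 0)
        if abs(expected - actual) > tolerance:
            mismatches[table] = {"source": expected, "target": actual}
    return mismatches

source = {"orders": 10_000, "customers": 2_500}
target = {"orders": 9_950, "customers": 2_500}
print(reconcile(source, target))
```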

Posted 2 days ago


6.0 - 11.0 years

25 - 40 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid


Job Title: Senior Backend Engineer, Core Java & Microservices (Multiple Positions)

Overview:
We are hiring for multiple backend engineering roles. Candidates must demonstrate strong capabilities in either Core Java backend engineering or microservices and cloud architecture, with working knowledge of the other; candidates with strengths in both areas will be considered for senior roles. You will be part of a high-performance engineering team solving complex business problems through robust, scalable, high-throughput systems.

Core Technical Requirements
Candidates must demonstrate strength in either of the following areas, with working knowledge of the other.

Java & Backend Engineering
- Java 8+ (streams, lambdas, functional interfaces, Optionals)
- Spring Core, Spring Boot, object-oriented principles, exception handling, immutability
- Multithreading (Executor framework, locks, concurrency utilities)
- Collections, data structures, algorithms, time/space complexity
- Kafka (producer/consumer, schema, error handling, observability)
- JPA, RDBMS/NoSQL, joins, indexing, data modeling, sharding, CDC
- JVM tuning, GC configuration, profiling, dump analysis
- Design patterns (GoF creational, structural, behavioral)

Microservices, Cloud & Distributed Systems
- REST APIs, OpenAPI/Swagger, request/response handling, API design best practices
- Spring Boot, Spring Cloud, Spring Reactive
- Kafka Streams, CQRS, materialized views, event-driven patterns
- GraphQL (Apollo/Spring Boot), schema federation, resolvers, caching
- Cloud-native apps on AWS (Lambda, IAM, S3, containers)
- API security (OAuth 2.0, JWT, Keycloak, API gateway configuration)
- CI/CD pipelines, Docker, Kubernetes, Terraform
- Observability with ELK, Prometheus, Grafana, Jaeger, Kiali

Additional Skills (Nice to Have)
- Node.js, React, Angular, Golang, Python, GenAI
- Web platforms: AEM, Sitecore
- Production support, rollbacks, canary deployments
- TDD, mocking, Postman, security/performance test automation
- Architecture artifacts: logical/sequence views, layering, solution detailing

Key Responsibilities
- Design and develop scalable backend systems using Java and Spring Boot
- Build event-driven microservices and cloud-native APIs
- Implement secure, observable, and high-performance solutions
- Collaborate with teams to define architecture, patterns, and standards
- Contribute to solution design, code reviews, and production readiness
- Troubleshoot, optimize, and monitor distributed systems in production
- Mentor junior engineers (for senior roles)
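The event-driven patterns in the microservices track (Kafka, CQRS, materialized views) hinge on idempotent consumption under at-least-once delivery. A language-agnostic sketch in Python, with an invented event shape and in-memory read model standing in for a Kafka topic and a projection:

```python
# Fold order events into a per-customer total (a toy materialized view),
# skipping duplicate event ids so redelivered events are applied only once.

def apply_events(events, view=None, seen=None):
    """Apply each event at most once; return the updated read model."""
    view = {} if view is None else view
    seen = set() if seen is None else seen
    for event in events:
        if event["id"] in seen:        # duplicate delivery: ignore (idempotency)
            continue
        seen.add(event["id"])
        view[event["customer"]] = view.get(event["customer"], 0) + event["amount"]
    return view

events = [
    {"id": "e1", "customer": "c1", "amount": 10},
    {"id": "e2", "customer": "c1", "amount": 5},
    {"id": "e1", "customer": "c1", "amount": 10},  # redelivered under at-least-once
]
print(apply_events(events))
```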

Posted 2 days ago


0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Years of experience: 10-15 yrs
Location: Noida

Join us as a Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as a Cloud Engineer you should have experience with:
- Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, and access management
- Deep expertise in one or more cloud platforms: AWS, Azure, GCP
- Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
- Proficiency in scripting languages (Python, Bash, PowerShell)
- Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML
- Strong understanding of DevOps principles applied to ML workflows

Key responsibilities may include:
- Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms
- Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation
- Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows
- Ensure version control, repeatability, and compliance across all infrastructure components
- Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor
- Optimize performance and resource utilization of AI workloads, including GPU-based training/inference
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing
- Understanding of model interpretability, responsible AI, and governance
- Contributions to open-source MLOps tools or communities
- Strong leadership, communication, and cross-functional collaboration skills
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines
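Model monitoring in an MLOps pipeline like the one described can be reduced, at its simplest, to a degradation gate against the training baseline. A hedged sketch; the metric, baseline, and tolerance are illustrative assumptions, and production systems would track this over windows of live traffic:

```python
# Flag a deployed model for retraining when its live accuracy drops more
# than a tolerance below the accuracy recorded at training time.

def needs_retraining(baseline_accuracy, live_accuracy, tolerance=0.05):
    """True when live accuracy has dropped more than `tolerance` below baseline."""
    return (baseline_accuracy - live_accuracy) > tolerance

print(needs_retraining(0.92, 0.90))   # small dip, within tolerance
print(needs_retraining(0.92, 0.84))   # degraded: trigger a retraining run
```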

Posted 2 days ago


8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: DevOps Lead Location: Noida (preferred), Mumbai Job Type: Full-Time Experience Level: 8+ years (with 2–3 years in a DevOps leadership role) Role Overview: We are looking for an experienced and proactive DevOps Lead to take ownership of our cloud-based infrastructure. This role is critical in shaping and maintaining scalable, secure, and high-performance environments for our applications. The ideal candidate will have deep technical expertise across cloud platforms, strong leadership skills, and a passion for automation, cost optimization, and DevOps best practices. You will work closely with development teams to define deployment architectures, implement robust CI/CD pipelines, and lead cloud transformation and migration initiatives. You will develop, and lead, a team of DevOps engineers and be a key contributor in client- facing technical pre-sales conversations. Key Responsibilities:  Own and manage all aspects of cloud infrastructure across multiple environments (development, staging, production).  Collaborate with development and architecture teams to define and implement deployment architectures.  Design and maintain CI/CD pipelines, infrastructure-as-code (IaC), and container orchestration platforms (e.g., Kubernetes).  Lead cloud transformation and migration projects, from planning and solutioning to execution and support.  Set up and enforce DevOps best practices, security standards, and operational procedures.  Monitor infrastructure health, application performance, and resource utilization, with a focus on automation and optimization.  Identify and implement cost-saving measures across cloud and infrastructure setups.  Manage and mentor a team of DevOps engineers, driving continuous improvement and technical growth.  Collaborate with business and technical stakeholders in pre-sales engagements, providing input on infrastructure design, scalability, and cost estimation. 
- Maintain up-to-date documentation and ensure knowledge sharing within the team.

Required Skills:
- 8+ years of experience in DevOps, cloud infrastructure, and system operations.
- 2+ years in a leadership or team management role within DevOps or SRE.
- Exposure to DevSecOps practices and tools.
- Experience with multi-cloud or hybrid cloud environments.
- Strong experience with cloud platforms (AWS, Azure, or GCP) and cloud-native architectures.
- Proven track record in cloud migration and cloud transformation projects.
- Proficiency in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Pulumi.
- Deep knowledge of CI/CD tools (e.g., GitLab CI, Jenkins, GitHub Actions) and configuration management tools (e.g., Ansible, Chef).
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes, ECS).
- Strong scripting and automation skills (e.g., Python, Bash, PowerShell).
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Deep understanding of security, compliance, and cost optimization in cloud environments.
- Excellent communication and documentation skills.
- Experience in supporting technical pre-sales and creating infrastructure proposals or estimations.
- Ability to work in client-facing roles, including technical discussions and solution presentations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Cloud certifications such as:
  - AWS Certified DevOps Engineer or Solutions Architect
  - Microsoft Certified: DevOps Engineer Expert
  - Google Cloud DevOps Engineer
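The cost-optimization responsibility described above often starts with rightsizing. A minimal sketch of that idea, with hypothetical instance data and an illustrative CPU threshold (not tied to any real cloud provider's API):

```python
# Minimal sketch: flag underutilized instances as rightsizing candidates.
# The fleet data, the 20% threshold, and the field names are illustrative
# assumptions, not real cloud-provider output.

def rightsizing_candidates(instances, cpu_threshold=20.0):
    """Return names of instances whose average CPU is below the threshold."""
    return [i["name"] for i in instances if i["avg_cpu_percent"] < cpu_threshold]

fleet = [
    {"name": "web-1", "avg_cpu_percent": 63.0},
    {"name": "web-2", "avg_cpu_percent": 8.5},
    {"name": "batch-1", "avg_cpu_percent": 12.0},
]

print(rightsizing_candidates(fleet))  # ['web-2', 'batch-1']
```

In practice the utilization figures would come from a monitoring backend, and the flagged instances would feed a review or an automated downsizing workflow.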

Posted 2 days ago


5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: DevOps Engineer
Experience: 5 years
Location: Pune
Lead time: Immediate to 15 days

Skills/Experience Required:
- Minimum of 5 years of hands-on experience in DevOps/build/deployment engineering and operations.
- Primary tech stack: GitLab Pipelines/GitHub Actions/Jenkins, GitLab/GitHub, Terraform, Helm, AWS, Oracle DB.
- Experience with Agile/Scrum, continuous integration, continuous delivery, and related tools.
- Hands-on experience in production environments, both deploying and troubleshooting applications in Linux environments.
- Strong experience automating with scripting languages such as Bash, Python, and Groovy, and with deployment scripting languages.
- Strong experience with CI/CD deployments supporting Java technologies (Jenkins, Nexus, Apache, JBoss, Tomcat).
- Highly proficient in configuration management (Ansible, Chef, or similar).
- Hands-on experience with containerization, Docker, and Kubernetes is required.
- Good understanding of microservices architecture, design patterns, and standard methodologies.
- Good understanding of networking, load balancing, caching, security, and config and certificate management.

Nice-to-Have Skills:
- Experience with key AWS services (IAM, VPC, Lambda, EKS, MSK, Keyspaces, CodePipeline).
- Experience with the Java ecosystem (Maven, Ant, Tomcat, JBoss).
- Experience with the Node ecosystem (JavaScript, Angular, npm, jQuery).
- Understanding of SOA and distributed computing.
- Experience with Test-Driven Development (TDD) practices and an automated testing framework.
- Experience with Istio.
- Experience with SQL Server.
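The production deployment and troubleshooting experience asked for above usually involves gating rollouts on health checks. A minimal, self-contained sketch of such a gate (the probe function is injected here so the example runs without a real service; in practice it would poll a /health endpoint):

```python
# Minimal sketch of a post-deployment health gate: poll a check until it
# succeeds or attempts run out. The simulated probe sequence below is an
# illustrative assumption, not output from a real deployment.

import time

def wait_for_healthy(check, attempts=5, delay=0.01):
    """Return True once check() succeeds, False if all attempts fail."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Simulated service that becomes healthy on the third probe.
probes = iter([False, False, True])
print(wait_for_healthy(lambda: next(probes)))  # True
```

A CI/CD pipeline stage would call something like this after `helm upgrade` or a Terraform apply and fail the stage (or trigger a rollback) when it returns False.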

Posted 2 days ago


4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Position Summary

Job title: Azure Cloud Security Engineer (Senior Consultant)

About
At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte's clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks, and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today's world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte's clients' most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions.
The Team: Cyber & Strategic Risk
We help organizations create a cyber-minded culture, reimagine risk to uncover strategic opportunities, and become faster, more innovative, and more resilient in the face of ever-changing threats. We provide intelligence and acuity that dynamically reframes risk, transcending a manual, reactive paradigm. The cyber risk services—Identity & Access Management (IAM) practice helps organizations in designing, developing, and implementing industry-leading IAM solutions to protect their information and confidential data, as well as help them build their businesses and supporting technologies to be more secure, vigilant, and resilient.

The IAM team delivers service to clients through the following key areas:
- User provisioning
- Access certification
- Access management and federation
- Entitlements management

Work you'll do
As a Cloud Security Engineer, you will be at the front lines with our clients, supporting them with their Cloud Cyber Risk needs:
- Executing on cloud security engagements across the lifecycle: assessment, strategy, design, implementation, and operations.
- Performing technical health checks for cloud platforms/environments prior to broader deployments.
- Assisting in the selection and tailoring of approaches, methods, and tools to support cloud adoption, including migration of existing workloads to a cloud vendor.
- Designing and developing cloud-specific security policies, standards, and procedures, e.g., user account management (SSO, SAML), password/key management, tenant management, firewall management, virtual network access controls, VPN/SSL/IPSec, security incident and event management (SIEM), and data protection (DLP, encryption).
- Documenting all technical issues, analysis, client communication, and resolution.
- Supporting proof-of-concept and production deployments of cloud technologies.
- Assisting clients with transitions to cloud via tenant setup, log processing setup, policy configuration, agent deployment, and reporting.
- Operating across both technical and management leadership capacities.
- Providing internal technical training to Advisory personnel as needed.
- Performing cloud orchestration and automation (continuous integration and continuous delivery, CI/CD) in single- and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt, etc.
- Working with multiple security technologies like CSPM, CWPP, WAF, CASB, IAM, SIEM, etc.

Required Skills
- 4+ years of information technology and/or information security operations experience.
- Ideally 2+ years of working with different cloud platforms (SaaS, PaaS, and IaaS) and environments (public, private, hybrid).
- Familiarity with the following will be considered a plus:
  - Solid understanding of enterprise-level directory and system configuration services (Active Directory, SCCM, LDAP, Exchange, SharePoint, M365) and how these integrate with cloud platforms
  - Solid understanding of cloud security industry standards such as Cloud Security Alliance (CSA), ISO/IEC 27017, and NIST CSF, and how they help in compliance for cloud providers and cloud customers
  - Hands-on technical experience implementing security solutions for Microsoft Azure
  - Knowledge of cloud orchestration and automation (CI/CD) in single- and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt, etc.
  - Knowledge of cloud access security broker (CASB) and cloud workload protection platform (CWPP) technologies
  - Solid understanding of the OSI model, the TCP/IP protocol suite, and network segmentation principles, and how these can be applied on cloud platforms

Preferred:
- Previous consulting or Big 4 experience.
- Hands-on experience with Azure, plus any CASB or CWPP product or service.
- Understanding of Infrastructure as Code, and ability to create scripts using Terraform, ARM, Ansible, etc.
- Knowledge of scripting languages (PowerShell, JSON, .NET, Python, JavaScript, etc.)
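The policy work in this role often boils down to comparing tenant configurations against a security baseline. A minimal sketch of that idea; the baseline fields, rule names, and finding strings are illustrative assumptions, not a real CSPM rule set:

```python
# Minimal sketch of a cloud-security baseline check. The policy fields and
# the example tenant configuration are illustrative assumptions.

BASELINE = {
    "min_password_length": 14,
    "mfa_required": True,
    "public_storage_allowed": False,
}

def policy_violations(tenant_config):
    """Compare a tenant configuration against the baseline; return findings."""
    findings = []
    if tenant_config.get("min_password_length", 0) < BASELINE["min_password_length"]:
        findings.append("password length below baseline")
    if not tenant_config.get("mfa_required", False):
        findings.append("MFA not enforced")
    if tenant_config.get("public_storage_allowed", True):
        findings.append("public storage enabled")
    return findings

print(policy_violations({
    "min_password_length": 8,
    "mfa_required": True,
    "public_storage_allowed": True,
}))
# ['password length below baseline', 'public storage enabled']
```

Real CSPM tooling evaluates hundreds of such rules against live cloud configuration, but the assess-against-baseline loop is the same shape.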
Qualification
Bachelor's degree required, ideally in Computer Science, Cyber Security, Information Security, Engineering, or Information Technology.

How You'll Grow
At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters.
This purpose defines who we are and extends to relationships with Deloitte's clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

Recruiting tips
Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success.

Benefits
We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family's well-being needs. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals' career journeys and be inspired by their stories.

Professional development
You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people.

© 2023. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities.
DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States, and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn.
We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 301427

Posted 2 days ago


2.0 years

0 Lacs

India

On-site


Job Description
Does an opportunity to build solutions at large scale suit you? Do hybrid local/cloud infrastructures interest you? Join our Engineering team.

This team is a part of the Cloud Security Intelligence group. Together, we own one of the largest Big Data environments in Israel. The team owns various Intelligence Security products that run as part of this environment. We are also responsible for innovatively developing and maintaining the platform itself.

Make a difference in your own way
You'll be working on innovating and developing a new and ground-breaking Big Data platform. It provides services for the rest of the Platform and Akamai engineering groups. We strive to accelerate development, reduce operational costs, and provide common secured services.

As a Software Engineer II - DevOps, you will be responsible for:
- Designing and implementing infrastructure solutions on top of Azure and Linode: Kubernetes, Kafka, Vault, storage, etc.
- Developing and provisioning infrastructure applications and monitoring tools, e.g., OpenSearch/ELK, OpenTelemetry, Prometheus, Grafana, Pushgateway, etc.
- Building and maintaining CI/CD pipelines using Jenkins, in addition to building GitOps solutions such as ArgoCD.
- Working in all stages of the software release process in all development and production environments.

Do What You Love
To be successful in this role you will:
- Have 2+ years' experience as a DevOps Engineer and a Bachelor's degree in Computer Science or its equivalent.
- Be proficient in working in Linux/Unix environments, and demonstrate solid experience in Python and shell scripting.
- Have experience in Infrastructure as Code (IaC) using Terraform, and in managing/deploying applications using Helm charts in Kubernetes environments.
- Have proven experience in designing and implementing solutions for Kubernetes.
- Have experience setting up large-scale container technology (Docker, Kubernetes, etc.) and migrating/creating systems on cloud environments (Azure/AWS/GCP).
- Be responsible, self-motivated, and able to work with little or no supervision, with attention to detail.

Work in a way that works for you
FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply.

Learn what makes Akamai a great place to work
Connect with us on social and see what life at Akamai is like! We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative, and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here.

Working for you
Benefits
At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future. We provide benefits surrounding all aspects of your life:
- Your health
- Your finances
- Your family
- Your time at work
- Your time pursuing other endeavors
Our benefit plan options are designed to meet your individual needs and budget, both today and in the future.

About Us
Akamai powers and protects life online.
Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications, while we keep experiences closer to users and threats farther away.

Join us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Akamai Technologies is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, gender identity, sexual orientation, race/ethnicity, protected veteran status, disability, or other protected group status.
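The Prometheus/Pushgateway work mentioned in this listing revolves around the Prometheus text exposition format. A minimal sketch of rendering a counter in that format; the metric name, labels, and values are illustrative:

```python
# Minimal sketch: render a counter metric in the Prometheus text exposition
# format (the format Prometheus scrapes and Pushgateway accepts). Metric
# names, labels, and values below are illustrative assumptions.

def render_metric(name, help_text, samples):
    """samples: list of (labels_dict, value) pairs."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

print(render_metric(
    "deploys_total", "Number of deployments.",
    [({"env": "staging"}, 12), ({"env": "prod"}, 7)],
))
```

In a real pipeline this payload would be pushed to a Pushgateway endpoint (or exposed on a /metrics route) rather than printed; client libraries handle escaping edge cases this sketch skips.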

Posted 2 days ago


7.0 years

0 Lacs

India

Remote


***Immediate requirement***

Job Title: AWS Architect
Salary Range: 20–25 LPA
Experience: 7+ years
Job Type: Contract
Contract Duration: 6–12 months (potential to extend or convert to permanent)
Location: Pan India
Work Type: Remote
Start Date: Immediate (notice period/joining within 1–2 weeks)

**Apply only if you can join within 1–2 weeks**

Requirements:
- Strong knowledge of AWS services such as EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in designing scalable and distributed systems.
- Hands-on experience with IaC tools (CloudFormation, Terraform).
- Knowledge of networking, security, and data management in the cloud.
- Familiarity with DevOps practices, Docker, Kubernetes, and CI/CD pipelines.
- AWS certifications such as AWS Certified Solutions Architect – Associate/Professional are highly preferred.

Posted 2 days ago


6.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: React + AWS DevOps
Location: Any PAN India location (hybrid working model)
Experience: 6+ years
Key Focus: React.js, TypeScript, AWS integration, and DevOps CI/CD

Job Summary:
We are seeking a skilled Senior Frontend Engineer with expertise in React.js, TypeScript, and AWS to build high-performance, scalable web applications. The ideal candidate will have strong experience in modern frontend development, CI/CD pipelines (Jenkins), and cloud integration (AWS IaC). You will work closely with cross-functional teams to deliver seamless, responsive, and secure user interfaces.

Key Responsibilities:
✅ Frontend Development:
- Develop and maintain high-performance web applications using React.js (functional components + hooks).
- Write clean, modular, and maintainable code in TypeScript.
- Implement state management (Redux, Context API) and optimize rendering performance.
- Ensure responsive design (CSS3, Flexbox/Grid) and cross-browser compatibility.
✅ DevOps & CI/CD:
- Set up and manage CI/CD pipelines using Jenkins.
- Automate deployments and testing in AWS environments.
- Work with Infrastructure as Code (IaC) for frontend deployment.
✅ Cloud & AWS Integration:
- Deploy and manage frontend apps on AWS (S3, CloudFront, Lambda@Edge).
- Integrate with backend APIs (REST/GraphQL) and serverless functions (AWS Lambda).
- Implement security best practices (JWT, OAuth, CSP).
✅ Testing & Quality:
- Write unit/integration tests (Jest, React Testing Library, Cypress).
- Ensure code quality through peer reviews, linting (ESLint), and static analysis.
✅ Collaboration & Agile:
- Work in Agile/Scrum with cross-functional teams (UX, Backend, DevOps).
- Participate in code reviews, sprint planning, and technical discussions.
Must-Have Qualifications:
- 6+ years of React.js development (v16+).
- Strong TypeScript proficiency (mandatory).
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, or similar).
- AWS cloud integration (S3, CloudFront, Lambda, IaC – Terraform/CDK).
- State management (Redux, Zustand, or Context API).
- Testing frameworks (Jest, React Testing Library, Cypress).
- Performance optimization (React.memo, lazy loading, code splitting).
- Fluent English and collaboration skills.

Posted 2 days ago



Posted 2 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure-as-code (IaC) tool developed by HashiCorp, is gaining popularity in the tech industry, especially in DevOps and cloud computing. In India, demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles in infrastructure automation and cloud management using the tool.
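To give a flavour of what working with Terraform looks like, here is a minimal configuration sketch. The provider, region, and bucket name below are illustrative assumptions, not part of any specific job requirement:

```hcl
# Minimal illustrative configuration; names and region are hypothetical.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # Mumbai region, chosen for illustration
}

# A single S3 bucket declared as code; `terraform apply` creates it,
# and subsequent applies reconcile any drift back to this definition.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345" # hypothetical bucket name
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` against a file like this is the core day-to-day workflow for most of the roles described on this page.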

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies by experience level. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression runs from Junior Developer to Senior Developer, Tech Lead, and eventually Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
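Several of the questions above (remote state storage, backends, sensitive data) come down to a few lines of configuration in practice. The following is a hedged sketch assuming an S3 backend; the bucket, key, and table names are hypothetical:

```hcl
# Remote state storage via an S3 backend (names below are hypothetical).
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # pre-existing bucket for state files
    key            = "prod/network.tfstate" # path of the state object
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"      # enables state locking for teams
  }
}

# Sensitive data: marking a variable as sensitive tells Terraform to
# redact its value in plan and apply output.
variable "db_password" {
  type      = string
  sensitive = true
}
```

Being able to explain each line of a snippet like this, and why remote state plus locking matters for team workflows, covers a good share of the medium-level questions listed above.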

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
