
17543 Terraform Jobs - Page 8

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

One of our prestigious clients, a top MNC with a global presence, is currently seeking a Lead Enterprise Architect to join their team in Pune, Mumbai, or Bangalore.

**Qualifications and Certifications:**

**Education:**
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.

**Experience:**
- A minimum of 10 years of experience in data engineering, with at least 4 years of hands-on experience with GCP.
- Proven track record in designing and implementing data workflows using GCP services such as BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer.

**Certifications:**
- Google Cloud Professional Data Engineer certification is preferred.

**Key Skills:**

**Mandatory Skills:**
- Advanced proficiency in Python for developing data pipelines and automation.
- Strong SQL skills for querying, transforming, and analyzing large datasets.
- Hands-on experience with GCP services including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine, and Google Kubernetes Engine (GKE).
- Familiarity with CI/CD tools such as Jenkins, GitHub, or Bitbucket.
- Proficiency in Docker, Kubernetes, Terraform, or Ansible for containerization, orchestration, and infrastructure as code (IaC).
- Knowledge of workflow orchestration tools such as Apache Airflow or Cloud Composer.
- Strong understanding of Agile/Scrum methodologies.

**Nice-to-Have Skills:**
- Experience with other cloud platforms such as AWS or Azure.
- Familiarity with data visualization tools such as Power BI, Looker, or Tableau.
- Understanding of machine learning workflows and their integration with data pipelines.

**Soft Skills:**
- Strong problem-solving and critical-thinking abilities.
- Excellent communication skills for collaborating with both technical and non-technical stakeholders.
- Proactive attitude toward innovation and continuous learning.
- Ability to work independently and as part of a collaborative team.

If you are interested in this opportunity, please reply with your updated CV and the following details:
- Total experience:
- Relevant experience in data engineering:
- Relevant experience with GCP:
- Relevant experience as an Enterprise Architect:
- Availability to join (ASAP):
- Preferred location (Pune / Mumbai / Bangalore):

We will contact you once we receive your CV along with the above details. Thank you, Kavita A.
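For flavor, here is a minimal sketch of the Python-plus-BigQuery pipeline work this posting describes. The project, dataset, and table names are hypothetical, and the google-cloud-bigquery client is assumed to be installed and authenticated.

```python
# Minimal sketch: aggregate raw events into a BigQuery reporting table.
# Assumes `pip install google-cloud-bigquery` and application-default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project ID

# Names below are illustrative, not a real customer schema.
sql = """
    SELECT event_date, COUNT(*) AS event_count
    FROM `example-project.raw.events`
    GROUP BY event_date
"""

job_config = bigquery.QueryJobConfig(
    destination="example-project.reporting.daily_event_counts",
    write_disposition="WRITE_TRUNCATE",  # replace the table on each run
)

query_job = client.query(sql, job_config=job_config)
query_job.result()  # block until the job finishes
print(f"Loaded {query_job.destination.table_id} successfully")
```

In a Cloud Composer setup, a step like this would typically run inside an Airflow task rather than as a standalone script.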

Posted 1 day ago


4.0 - 8.0 years

0 Lacs

Haryana

On-site

OakNorth is a profitable business that has supported the growth of thousands of businesses. We help entrepreneurs scale quickly, realise their ambitions, and make data-driven decisions. We're looking for engineers who are particularly passionate about data analytics and data engineering to join our team. You'd use both your generalist and specialist skills to better our products and our team, joining our data platform squad as an immediate contributor.

As an Analytics Engineer, you will work with our Finance teams to transform raw data into meaningful insights using tools like DBT, BigQuery, and Tableau. You should have 4-8 years of relevant hands-on experience and be proficient in developing and maintaining data models using DBT. Your responsibilities will include writing and optimizing SQL queries to transform raw data into structured formats, developing and maintaining interactive dashboards and reports in Tableau, collaborating with stakeholders to gather requirements and translate them into analytical solutions, and working well cross-functionally while earning trust from co-workers at all levels.

You should care deeply about mentorship and growing your colleagues, prefer simple solutions and designs over complex ones, enjoy working with a diverse group of people with different areas of expertise, challenge the existing approach when necessary, and stay organized amidst chaos. You should also be a broad thinker, able to see the potential impact of decisions across the wider business.

In our cross-functional, mission-driven, and autonomous squads, you will work on specific user and business problems. Initially, you will be upskilling within the Data Platform squad, which looks after all internal data products and the data warehouse, driving the bank's data strategy with various exciting greenfield projects. Our technology stack includes Python, DBT, Tableau, PostgreSQL, BigQuery, MySQL, pytest, AWS, GCP, Docker, Terraform, GitHub, and Git. We are pragmatic about our technology choices and focus on outcomes over outputs to solve user problems that translate to business results.

We expect you to collaborate effectively, focus on continuous improvement, seek to understand our users, embrace continuous deployment, test outside-in, practice a DevOps culture, and be cloud-native. Your behavior at work should reflect and actively sustain a healthy engineering environment: one where a wide range of voices are heard, teams are happy and engaged, people feel safe to have an opinion, and egos are left behind.

At OakNorth Bank, we empower entrepreneurs to realise their ambitions, understand their markets, and apply data intelligence to scale successfully at pace. We believe in barrier-free banking and strive to create an inclusive and diverse workplace where everyone can thrive. Join us in our mission to revolutionize the banking industry and empower businesses to thrive.
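As a rough illustration of the transform-and-test loop an Analytics Engineer here might follow, below is a small Python transform with a pytest check. The schema and business rule are invented for the example, not OakNorth's actual models.

```python
# Minimal sketch: a raw-to-structured transform with a pytest unit test.
# The ledger schema below is an illustrative assumption.

def to_monthly_balances(rows: list[dict]) -> dict[str, float]:
    """Aggregate raw ledger rows into month -> total balance."""
    totals: dict[str, float] = {}
    for row in rows:
        month = row["posted_at"][:7]  # "YYYY-MM" from an ISO date string
        totals[month] = totals.get(month, 0.0) + float(row["amount"])
    return totals


def test_to_monthly_balances():
    rows = [
        {"posted_at": "2024-01-15", "amount": "100.50"},
        {"posted_at": "2024-01-20", "amount": "-20.50"},
        {"posted_at": "2024-02-01", "amount": "10.00"},
    ]
    assert to_monthly_balances(rows) == {"2024-01": 80.0, "2024-02": 10.0}
```

In a DBT-centric stack the transform itself would usually live in SQL models; the pattern of pairing every transform with an automated test carries over directly.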

Posted 1 day ago


7.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

JOB DESCRIPTION

This position is accountable for the support of RTDS infrastructure and applications. The primary responsibilities include deploying and overseeing real-time software and infrastructure in the cloud and on-premises, maintaining systems to optimize performance and uptime for end users, and upholding a secure environment. The role also involves designing and implementing new technologies to meet client requirements. As an advanced technical position, it demands expert judgment and analysis in the design, development, and implementation of technical products and systems. The incumbent will resolve highly complex technical issues, conduct advanced research, and recommend process improvements to enhance product quality. Familiarity with the field's concepts, practices, and procedures is essential. The successful candidate will leverage extensive experience and sound judgment to plan and achieve organizational goals.

RESPONSIBILITIES

SAFETY, SECURITY & COMPLIANCE
- Maintains the highest standards of corporate governance, ensuring that all activities are conducted ethically and in compliance with the Company's Security, Compliance & HSE policies, Management System, relevant laws, regulations, standards, and industry practices, and complies with the Company's Rules to Live By.
- Places Quality, Health & Safety, Security, and protection of the Environment as core values, never intentionally placing employees, processes, customers, or the communities in which we live and work at risk.
- Seeks continual improvement in Health, Safety, Security & protection of the Environment, taking into account responsible care, process vulnerabilities, public, customer, and employee inputs, knowledge, technology, and best business practices to exceed customer expectations.
- Supervisors and managers should demonstrate effective safety leadership for the health and safety arrangements of all subordinates and for any persons visiting them while on Company premises.

QUALITY
- Work to maintain ISO 27001:2022 certification: monthly data and reports for ISMS Committee Meetings, and audit support.
- Follows and enforces processes and procedures.
- Responsible for maintaining the infrastructure, including backups and disaster recovery.
- Responsible for being familiar with the Company's Quality policies and takes an active role in the compliance and improvement of Weatherford's Management System.
- Maintains service quality as an immediate priority when working across all areas of the business and continually seeks areas for improvement.

OPERATIONS
- Work on a support team day to day with a proactive, results-oriented approach, providing technical assistance.
- Assist deployment team members in designing the best possible real-time solution for clients on the rig, at the client site, in the data center, and in the cloud.
- Ensure that the architecture and infrastructure on which the application will be deployed are robust and stable.
- Design and architect production systems, re-engineering them when new technologies become available to increase performance and reliability.
- Follow deployment plans and schedules, ensuring alignment with project timelines and objectives.
- Ensure the product/application has been correctly and completely integrated across the program.
- Validate that the product has been correctly packaged before deployment and ensure that all release controls have been satisfied.
- Coordinate deployment activities with stakeholders, including operations teams, system administrators, and third-party vendors.
- Conduct pre-deployment testing to identify and address potential issues, ensuring smooth integration with existing systems.
- Troubleshoot deployment issues and implement corrective actions in a timely manner to minimize downtime.
- Provide technical guidance and support, ensuring adherence to best practices and standards.
- Document deployment processes, configurations, and procedures for future reference and knowledge sharing.
- Continuously evaluate and improve deployment workflows to optimize efficiency, scalability, and reliability.
- Stay updated on industry trends and emerging technologies related to real-time systems deployment.
- Identify and address all security concerns and incidents.
- Participate in training on IT and software components of real-time systems.
- Resolve all issues escalated by IT and Operations teams, and escalate further if needed.
- Provide detailed KPI reports as required for management.

COMMUNICATION
- Know and understand the Weatherford Quality Policy and comply with all requirements of the Quality Systems Manual, Operating and Technical Procedures, and Workplace Instructions.
- Maintains effective communications with all key stakeholders, both internal and, where appropriate, external.

FINANCIAL
- All employees are accountable to the organization to be financially responsible, whether they oversee a function budget or simply their own expenses. Costs incurred should be within the approved budget, processed within agreed time frames, and follow the relevant financial policy and procedure.

QUALIFICATIONS

Experience & Education - Required:
- Minimum 7-12+ years of related experience.
- Must have an engineering degree in Computer or Information Technology.
- Certifications in relevant technologies (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator).
- Experience with continuous integration/continuous deployment (CI/CD) pipelines and automated testing frameworks.
- Knowledge of cybersecurity principles and best practices for securing real-time systems.
- Familiarity with monitoring and logging tools for performance optimization and troubleshooting.

Preferred:
- Knowledge of ISO 27001:2022 requirements, including audit support.

Knowledge, Skills & Abilities - Required:
- Experience in IT infrastructure services, working with multiple technologies, supporting and implementing IT infrastructure projects, and providing remote support to clients.
- Identify and implement backup and disaster recovery solutions for mission-critical data and applications.
- Demonstrated high level of responsibility for researching, purchasing, and configuring IT equipment.
- Interface extensively with top-tier management, staff, peers, users, and other business partners.
- Strong problem-solving and analytical abilities and strong written/verbal communication skills.
- Cloud computing: proficiency in cloud platforms like AWS, Azure, or Google Cloud for deploying and managing real-time applications.
- DevOps tools: experience with tools like Docker, Kubernetes, Ansible, or Terraform for containerization, orchestration, and automation of deployment processes.
- CI/CD: knowledge of CI/CD pipelines to automate testing, building, and deploying code changes rapidly and reliably.
- Networking: understanding of networking concepts and protocols for configuring and optimizing real-time communication systems.
- Monitoring and logging: ability to set up tools like Prometheus, Grafana, or the ELK stack for tracking system performance and troubleshooting issues in real time.
- Security: knowledge of security best practices for real-time systems, including encryption, authentication, and access control mechanisms.
- Scripting and automation: proficiency in scripting languages like Python, Bash, or PowerShell for automating deployment tasks and managing infrastructure as code.
- Database management: understanding of databases such as Microsoft SQL Server, PostgreSQL, MongoDB, and Redis for storing and processing real-time data.
- Version control: experience with Git for managing code changes and collaborating effectively with team members.
- Problem-solving skills: ability to troubleshoot complex issues quickly and effectively in a real-time environment.
- Team collaboration: strong communication and collaboration skills to work effectively with cross-functional teams and stakeholders.
- Worked with ITSM/ITIL practices and processes (ITIL concepts understood; ITIL certification is not required).
- Familiarity with load-balancing concepts, server clustering, and hypervisor technology (Hyper-V, VMware).
- Good understanding of high-availability tooling (SQL FCI, clustering, mirroring, etc.).
- Strong delegation, time-management, and conflict-resolution skills, with proven experience leading a team of 6-10 people with diverse backgrounds.
- Windows technologies: Windows OS, Active Directory, MSSQL, IIS, clustering, load balancing, WSUS, MDT.
- Cloud infrastructure management.
- Monitoring/alerting technologies: VictorOps, WhatsUp Gold, Sensu, Grafana, StatusCake.
- Tools/apps such as Git, Bitbucket, Confluence, and Jira for source control, documentation, and project sharing.
- Networking technologies: DNS and DHCP servers, TCP/IP protocol suite.
- Virtualization technologies.
- Ticketing systems: Zendesk, Jira, DevOps.

Travel Requirement: This role may require domestic and potentially international travel of up to:
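By way of illustration of the monitoring side of this role, here is a minimal sketch that exposes a health metric for Prometheus to scrape. The metric name, port, and lag probe are assumptions, not Weatherford's actual instrumentation.

```python
# Minimal sketch: exposing a health metric from a real-time service so
# Prometheus can scrape it. Requires `pip install prometheus-client`.
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric: lag between rig data arriving and being ingested.
FEED_LAG = Gauge("rtds_feed_lag_seconds", "Lag between rig data and ingestion")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        FEED_LAG.set(random.uniform(0.0, 2.0))  # stand-in for a real lag probe
        time.sleep(5)
```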

Posted 1 day ago


5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Oracle Hyperion EPM Infrastructure Administrator responsible for managing the day-to-day operations, end-user support, and project initiatives across the organization's EPM platform. Your role involves collaborating with functional teams to address complex business issues and design scalable system solutions.

Your key responsibilities include analyzing and resolving functional and operational issues reported by customers within the Oracle EPM/Hyperion environment, managing operations and end-user support of the Hyperion EPM 11.x platform, troubleshooting integration and data issues, coordinating operational handover with global support teams, monitoring tasks and procedures, working with technical infrastructure teams, participating in system testing, data validations, and stress testing, and creating technical documentation.

Mandatory qualifications for this role include hands-on experience with the Oracle Hyperion suite of products, knowledge of Hyperion/EPM versions, experience with installation, upgrade, and migration of Hyperion applications, and a strong understanding of Windows Server, Active Directory, network technologies, Oracle Database, and WebLogic application server. Excellent communication skills and the ability to work effectively in a customer-oriented environment are essential.

Good-to-have qualifications include knowledge of ServiceNow, OAC-Essbase, Essbase 19c and 21c, DRM, and EPMA-to-DRM migration, experience with cloud EPM solutions, and knowledge of Ansible, Terraform, and the Linux platform.

Self-assessment questions to consider: your experience with the Oracle Hyperion suite of products, how you have applied this knowledge in previous roles, your experience migrating on-premise Hyperion applications to Oracle Cloud Infrastructure (OCI), and how you stay updated on new technologies and trends in Enterprise Performance Management. This position is categorized at Career Level IC4.

Oracle is a world leader in cloud solutions, committed to innovation and inclusivity. The organization offers global opportunities, competitive benefits, flexible medical, life insurance, and retirement options, and supports community involvement through volunteer programs. Accessibility assistance or accommodation for disabilities can be requested by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Posted 1 day ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a skilled Full Stack Developer with 4-6 years of hands-on experience, proficient in React.js for front-end development and Node.js for back-end development. Your strong backend experience includes RESTful API development and familiarity with AWS Lambda, API Gateway, DynamoDB, and S3, among other AWS services. You have prior experience integrating and automating workflows for SDLC tools such as JIRA, Jenkins, GitLab, Bitbucket, GitHub, and SonarQube.

You have a solid understanding of OAuth2, SSO, and API-key-based authentication, and you are familiar with CI/CD pipelines, microservices, and event-driven architectures. Your knowledge of Git and modern development practices is strong, and your problem-solving skills enable you to work independently. Experience with Infrastructure-as-Code tools like Terraform or CloudFormation is a plus.

It would be beneficial if you have experience with AWS EventBridge, Step Functions, or other serverless orchestration tools, as well as knowledge of enterprise-grade authentication methods such as LDAP, SAML, or Okta. Familiarity with monitoring/logging tools like CloudWatch, ELK, or Datadog will also be advantageous in this role.
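The role itself is Node.js-centric, but as a language-neutral sketch of the Lambda + API Gateway + DynamoDB pattern it describes, here is a minimal Python handler. The table name and event shape are hypothetical.

```python
# Minimal sketch of the Lambda + DynamoDB pattern described above, in Python
# for brevity (the role itself is Node.js-centric). Table and field names are
# illustrative assumptions.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sdlc-webhook-events")  # hypothetical table name


def handler(event, context):
    """Persist an incoming API Gateway webhook payload (e.g., from JIRA)."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),
        "source": body.get("source", "unknown"),
        "payload": json.dumps(body),
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["id"]})}
```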

Posted 1 day ago


3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly skilled MLOps Engineer to design, deploy, and manage machine learning pipelines in Google Cloud Platform (GCP). Your responsibilities will include automating ML workflows, optimizing model deployment, ensuring model reliability, and implementing CI/CD pipelines for ML systems. You will collaborate with technical teams to develop cutting-edge machine learning systems that drive business value.

In this role, you will manage the deployment and maintenance of machine learning models in production environments, ensuring seamless integration with existing systems. You will monitor model performance using metrics such as accuracy, precision, recall, and F1 score, addressing issues like performance degradation, drift, or bias. Troubleshooting problems, maintaining documentation, and managing model versions for audit and rollback will be part of your routine tasks. You will also analyze monitoring data proactively to identify potential issues, provide regular performance reports to stakeholders, optimize queries and pipelines, and modernize applications when necessary.

To qualify for this role, you should have expertise in programming languages like Python and SQL, along with a solid understanding of MLOps best practices for deploying enterprise-level ML systems. Familiarity with machine learning concepts, models, and algorithms, such as regression, clustering, and neural networks, including deep learning and transformers, is essential. Experience with GCP tools like BigQuery ML, Vertex AI Pipelines, model versioning and registry, Cloud Monitoring, and Kubernetes is preferred.

Strong communication skills, both written and oral, are crucial, as you will prepare detailed technical documentation for new and existing applications. You should demonstrate strong ownership and collaborative qualities in your domain, taking the initiative to identify and drive opportunities for improvement and process streamlining. A Bachelor's degree in a quantitative field or equivalent job experience is required for this position.

Experience with Azure MLOps, familiarity with cloud billing, setting up or supporting NLP, Gen AI, and LLM applications with MLOps features, and working in an Agile environment are considered bonus qualifications. If you are passionate about MLOps, have a knack for problem-solving, and enjoy working in a collaborative environment to deliver innovative machine learning solutions, we would like to hear from you.
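As a small, hedged example of the model-monitoring duty described above, here is a sketch that recomputes the cited metrics on a fresh sample and flags degradation. The F1 threshold and the sample data are illustrative assumptions.

```python
# Minimal sketch of a model-monitoring step: recompute accuracy, precision,
# recall, and F1 on fresh labeled data and flag degradation.
# Requires `pip install scikit-learn`.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score


def monitor(y_true, y_pred, f1_floor=0.80):
    report = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
    report["degraded"] = report["f1"] < f1_floor  # trigger an alert or retraining
    return report


print(monitor([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]))
```

In a Vertex AI setup, a check like this would typically run as a scheduled pipeline step, with the report pushed to Cloud Monitoring.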

Posted 1 day ago


5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be joining our team at Prismberry as a DevOps Engineer, where your confidence, curiosity, and straightforwardness will be valued in our empowering and driven culture. Your determination and clear communication skills will play a key role in making our cloud-based solutions sharp and easy to understand. We are looking for a candidate who can solve problems effortlessly and effectively in the cloud environment. If you are excited about contributing to a team that values clarity, confidence, and productivity through cloud technology, then this opportunity is for you.

As a DevOps Engineer at Prismberry, your primary responsibilities will include deploying, automating, maintaining, and managing an AWS production system. You will install, program, maintain, and oversee the AWS production environment to ensure scalability, security, and reliability, and use systematic troubleshooting and problem-solving methodologies to resolve issues across various application domains and platforms. Additionally, you will automate operational procedures through tool design, upkeep, and management (see the sketch after this listing). You will also lead the engineering and operational support for all Cloud and Enterprise installations, drive platform security initiatives in collaboration with the core engineering team, and provide CI/CD and IaC standards, norms, and guidelines for teams to follow.

This is a permanent position for an Automation Lead Architect with a standard nine-hour working day, located in Noida with a five-day working week. The ideal candidate should have a minimum of 5 years of experience and hold a Bachelor's degree in Computer Science, Engineering, or a related field.

Key skills and experience required for this role include:
- Strong experience as a DevOps Engineer with expertise in AWS services
- Solid experience in cloud-based operations at scale
- Strong experience with AWS products/services and Kubernetes
- Proficiency in Python development
- Experience developing Infrastructure as Code in Ansible, CloudFormation, Terraform, etc.
- Familiarity with CI/CD solutions like Argo CD, GitHub Actions, and Jenkins
- Experience with monitoring technologies such as CloudWatch, Datadog, and OpsGenie
- Knowledge of common relational databases like MySQL and PostgreSQL
- Proficiency in a modern language like Go, Rust, or Python
- Good understanding of event messaging, networking, and distributed systems
- Ability to work independently and thrive in a learning-intensive environment
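A minimal sketch of the kind of operational automation this posting mentions, assuming boto3 credentials are already configured; the region and the cost-hygiene use case are illustrative choices, not Prismberry's actual tooling.

```python
# Minimal sketch: report unattached EBS volumes as a cost/hygiene check.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3


def unattached_volumes(region="ap-south-1"):  # region is an illustrative choice
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    orphans = []
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        orphans.extend(v["VolumeId"] for v in page["Volumes"])
    return orphans


if __name__ == "__main__":
    print("Unattached volumes:", unattached_volumes())
```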

Posted 1 day ago


3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be joining the Tools team of the Network Services Department as a Golang Developer in Chennai, India. Your role will involve designing and developing innovative self-service and automation tools to enhance the productivity and quality of network infrastructure tasks. These custom-built software solutions empower network teams with automation capabilities, intuitive dashboards, and self-service features, ultimately driving efficiency and excellence in enterprise network infrastructure operations.

Your responsibilities will include designing, developing, and delivering software products to improve network infrastructure operations; full stack development; troubleshooting tools and optimizing their performance; developing network tools using Go and other relevant technologies in a Linux environment; managing the full lifecycle of network tools, including deployment and troubleshooting; designing and developing APIs; and leading the development of tools to integrate, manage, and maintain enterprise network infrastructure. Additionally, you will drive technical decisions, support service owners with documentation, collaborate with Network Service teams, and be on call for application support when needed.

The ideal candidate will have around 8 years of overall software development experience, with at least 2 years of expertise in Go and SQL. Extensive experience with JavaScript, including libraries like jQuery and Bootstrap, and proficiency in customizing CSS are required, as are experience developing and managing APIs, working in a Linux/Unix environment, and proficiency in Bash scripting and Linux commands. Experience with tools and pipelines like Rally, GitHub, Jenkins, and Jira is desirable. Strong troubleshooting, debugging, analytical, and problem-solving skills are crucial, along with good communication skills and the ability to quickly learn new technologies.

Preferred skills include infrastructure automation experience, networking knowledge, DevOps practices, GCP development experience, Docker containerization, and prior experience in network infrastructure or CCNA certification.

Posted 2 days ago


2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Junior DevOps Engineer based in Bangalore, you will design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI. Your role will involve managing cloud infrastructure, preferably on AWS, with a focus on scalability, reliability, and security. Additionally, you will deploy and oversee containerized applications using Docker and Kubernetes, while automating infrastructure provisioning with tools like Terraform or Ansible. Monitoring system performance and troubleshooting issues using tools such as Prometheus, Grafana, and the ELK Stack will be part of your responsibilities, and collaboration with development, QA, and operations teams will be essential to ensure seamless deployments.

Your technical skills should include expertise in CI/CD tools like Jenkins, Git, and GitHub/GitLab; cloud services such as AWS (EC2, S3, IAM, CloudWatch); container technologies like Docker and Kubernetes; Infrastructure as Code (IaC) tools like Terraform or Ansible; scripting languages like Bash or Python; and monitoring tools like Prometheus, Grafana, and the ELK Stack.

To be eligible for this role, you must have a minimum of 2 years of DevOps experience, strong troubleshooting capabilities, and effective communication skills. You should also be willing to work full-time from our Bangalore office. If you meet the above requirements and are ready to take on this challenging position, we look forward to reviewing your application.

Posted 2 days ago


3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an AWS Consultant specializing in Infrastructure, Data & AI, and Databricks, you will play a crucial role in designing, implementing, and optimizing AWS infrastructure solutions. Your expertise will be used to deliver secure and scalable data solutions across a range of AWS services and platforms, and your responsibilities will include architecting and implementing ETL/ELT pipelines, data lakes, and distributed compute frameworks. You will work on automation and infrastructure as code using tools like CloudFormation or Terraform, and manage deployments through AWS CodePipeline, GitHub Actions, or Jenkins.

Collaboration with internal teams and clients to gather requirements, assess current-state environments, and define cloud transformation strategies will be a key aspect of your role. You will support pre-sales and delivery cycles by contributing to RFPs, SOWs, LOEs, solution blueprints, and technical documentation, while ensuring best practices in cloud security, cost governance, and compliance.

The ideal candidate has 3 to 5 years of hands-on experience with AWS services, a Bachelor's degree or equivalent experience, and a strong understanding of cloud networking, IAM, security best practices, and hybrid connectivity. Proficiency in Databricks on AWS and experience with data modeling, ETL frameworks, and structured/unstructured data are required. Additionally, you should have working knowledge of DevOps tools and processes in the AWS ecosystem, strong documentation skills, and excellent communication abilities to translate business needs into technical solutions.

Preferred certifications for this role include AWS Certified Solutions Architect - Associate or Professional, AWS Certified Data Analytics - Specialty (preferred), and Databricks Certified Data Engineer Associate/Professional (a plus).
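To make the ETL/ELT responsibility concrete, here is a minimal PySpark sketch of a raw-to-curated step. The paths and schema are hypothetical, and on Databricks the `spark` session is provided by the runtime.

```python
# Minimal sketch of an ELT step on Databricks: read raw files, apply a
# transform, and write a partitioned table. Paths and columns are
# illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path

clean = (
    raw.where(F.col("order_id").isNotNull())          # drop malformed records
       .withColumn("order_date", F.to_date("ordered_at"))
       .withColumn("amount", F.col("amount").cast("double"))
)

clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```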

Posted 2 days ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Network Security Engineer - L3 at Futurism Tech, working on behalf of our client in Hinjewadi Phase 1, Pune, you will be responsible for designing and implementing enterprise-wide network solutions with a strong emphasis on network security. Your key responsibilities will include implementing and managing firewalls, crafting firewall policies, developing trust-zone models, and handling third-party connectivity requests. Additionally, you will conduct technical analysis and planning for the deployment of evolving technology solutions while leveraging architecture standards and design guides.

You should have coding or scripting skills in Python, Ansible, or Terraform, as well as experience with network automation processes and tools. A certification such as CCNP, CCIE, CCDP, a security certification, or a cloud certification would be beneficial, along with at least 5 years of experience in enterprise networking covering areas like BGP, OSPF, LAN/WAN, VPNs, and firewalls. A strong command of Cisco IOS, NX-OS, MPLS, and VXLAN is also required.

Your required skills include proficiency in layer 3 segmentation; advanced routing skills such as VRF, MPLS, OSPF, and BGP; and significant experience with Aruba wireless environments, Palo Alto firewalls, F5 load balancers, Cisco routing and switching, and TCP/IP. You should also have experience designing, building, and monitoring network environments such as LAN, MAN, WAN, SD-WAN, MPLS, Internet, VPN, WiFi, and data center networks. Additionally, you will provide L2/L3 support for Cisco switches, FortiGate firewalls, and Secure SD-WAN solutions.

A background as a Senior Network Engineer with CCNP/CCIE certification and practical experience in routing protocols such as BGP, IS-IS, and OSPF, Spanning Tree Protocol, VXLAN fabric networks, and MPLS label switching is essential. Knowledge of current-era data center network design, load balancers, remote-access VPNs, next-generation firewalls, compliance regulations such as PCI, HIPAA, and GDPR, and Network Access Control solutions will be advantageous.
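As a hedged sketch of the Python network automation this role calls for, here is a minimal Netmiko loop that pulls BGP summaries from Cisco IOS devices. The hosts and credentials are placeholders.

```python
# Minimal sketch: pull BGP summaries from a set of Cisco IOS devices.
# Requires `pip install netmiko`; hosts and credentials are placeholders.
from netmiko import ConnectHandler

DEVICES = ["192.0.2.11", "192.0.2.12"]  # documentation-range placeholder IPs

for host in DEVICES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="netops",       # placeholder credentials; use a vault in practice
        password="CHANGE_ME",
    )
    print(f"=== {host} ===")
    print(conn.send_command("show ip bgp summary"))
    conn.disconnect()
```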

Posted 2 days ago


5.0 - 10.0 years

0 Lacs

Haryana

On-site

The Tech Consultant - Data & Cloud role involves supporting a leading international client with expertise in data engineering, cloud platforms, and big data technologies. As a skilled professional, you will contribute to large-scale data initiatives, implement cloud-based solutions, and collaborate with stakeholders to drive data-driven innovation. You will design scalable data architectures, optimize ETL processes, and leverage cloud technologies to deliver impactful business solutions.

Key Responsibilities:
- Data Engineering & ETL: Develop and optimize data pipelines using Apache Spark, Airflow, Sqoop, and Databricks for seamless data transformation and integration (see the sketch after this listing).
- Cloud & Infrastructure Management: Design and implement cloud-native solutions using AWS, GCP, or Azure, ensuring scalability, security, and performance.
- Big Data & Analytics: Work with Hadoop, Snowflake, Data Lake, and Hive to enable advanced analytics and business intelligence capabilities.
- Technical Excellence: Use Python, SQL, and cloud data warehousing solutions to drive efficiency in data processing and analytics.
- Agile & DevOps Best Practices: Implement CI/CD pipelines, DevOps methodologies, and Agile workflows for seamless development and deployment.
- Stakeholder Collaboration: Work closely with business and technology teams to translate complex data challenges into business-driven solutions.

Required Qualifications & Skills:
- 5-10 years of experience in data engineering, analytics, and cloud-based solutions.
- Strong knowledge of big data technologies (Hadoop, Spark, Snowflake, Hive, Databricks, Airflow, AWS).
- Experience with ETL pipelines, data lakes, and large-scale data processing.
- Proficiency in Python, SQL, and cloud data warehousing solutions.
- Hands-on experience with cloud platforms (AWS, Azure, GCP) and infrastructure as code (Terraform, CloudFormation).
- Familiarity with containerization (Docker, Kubernetes) and BI tools (Tableau, Power BI).
- Understanding of Agile, Scrum, and DevOps best practices.
- Strong communication, problem-solving, and collaboration skills.

Why Join Us:
- Work on impactful global data projects for a leading international client.
- Lucrative retention bonus: up to 20% at the end of the first year, based on performance.
- Career growth and training: access to world-class learning in advanced cloud, AI, and analytics technologies.
- Collaborative, high-performance culture: a dynamic environment that fosters innovation, leadership, and technical excellence.

About Us: We are a trusted technology partner specializing in enterprise data solutions, cloud transformation, and analytics-driven decision-making. Our expertise in big data, AI, and cloud infrastructure enables us to deliver scalable, high-value solutions to global enterprises.
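The pipeline sketch referenced above: a minimal Airflow DAG with extract, transform, and load steps. The task bodies are stubs, the DAG id and schedule are assumptions, and the `schedule` argument is Airflow 2.4+ syntax (older versions use `schedule_interval`).

```python
# Minimal sketch of an Airflow-orchestrated ETL: three stub tasks chained
# into a daily DAG. All names are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source systems")


def transform():
    print("clean and join with Spark/Databricks")


def load():
    print("write to Snowflake / the data lake")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```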

Posted 2 days ago


9.0 - 13.0 years

0 Lacs

Karnataka

On-site

At Capgemini Engineering, as a DevOps Lead, you will be part of the world leader in engineering services, collaborating with a global team of engineers, scientists, and architects to support the most innovative companies in reaching their full potential. From cutting-edge technologies like autonomous cars to life-saving robots, our digital and software technology experts consistently push boundaries to deliver exceptional R&D and engineering services across diverse industries. Join us for a career filled with opportunities where you can truly make a difference and each day brings a new challenge.

In this role, you will develop and apply engineering practices and knowledge across technologies such as standards and protocols, application software, embedded software for wireless and satellite networks, fixed networks, enterprise networks, connected devices (IoT and device engineering), connected applications (5G edge, B2X apps), and Telco Cloud, automation, and edge compute platforms. Additionally, you will lead the integration of network systems and operations within these technologies.

As a DevOps Engineer at Capgemini, you will primarily work with Kubernetes, Terraform, AWS, and Python to streamline operations and optimize processes. Your role contributes to Capgemini's mission as a global business and technology transformation partner, helping organizations on their journey toward a digital and sustainable future while driving positive impact for enterprises and society. With a diverse team of 340,000 members across more than 50 countries and over 55 years of experience, Capgemini is trusted by clients to leverage technology effectively to meet their business requirements. Its end-to-end services and solutions span strategy, design, and engineering, enriched by leading capabilities in AI, generative AI, cloud, and data, and supported by deep industry knowledge and a robust partner ecosystem.
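As a small illustration of the Kubernetes-plus-Python combination named above, here is a sketch that sweeps all namespaces for unhealthy pods. It assumes the official `kubernetes` Python client and a reachable kubeconfig.

```python
# Minimal sketch: list pods that are not Running/Succeeded as a health sweep.
# Requires `pip install kubernetes` and a reachable kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```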

Posted 2 days ago


3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You should have strong experience with GitOps principles and tools such as Argo CD, along with proven experience implementing and managing CI/CD pipelines. Hands-on experience with Microsoft Azure cloud services is a must, as is expertise in Azure DevOps for pipeline and repository management. Proficiency in Pulumi or Terraform for Infrastructure as Code (IaC) is essential, together with a solid understanding of Docker, Kubernetes, and container orchestration. Experience with monitoring tools, particularly Datadog, is a plus. You should possess a deep understanding of cloud security best practices and secure deployment pipelines, a strong background in Linux systems administration, and good knowledge of networking concepts like DNS, TCP/IP, firewalls, and load balancing. Familiarity with scripting languages such as Bash and Python is beneficial.

Preferred qualifications include certifications in Azure, Kubernetes (CKA/CKAD), or DevOps tools; experience managing hybrid cloud environments; and exposure to Agile and DevSecOps practices. Strong communication skills are essential for effective collaboration with teams.

Your responsibilities will include designing, building, and maintaining CI/CD pipelines using Azure DevOps; implementing and managing GitOps workflows using Argo CD; managing and deploying infrastructure using Pulumi and Terraform; and building and managing containerized applications using Docker and Kubernetes. Monitoring applications and infrastructure using Datadog and driving cloud infrastructure automation and orchestration on Microsoft Azure are key tasks, as are ensuring security best practices across DevOps pipelines and cloud infrastructure, managing system configurations, troubleshooting Linux server issues, and optimizing networking for cloud-native environments. Collaborating with development, QA, and security teams to ensure smooth deployments and system stability is crucial, and you will continuously evaluate and adopt new tools and technologies to improve automation and operational efficiency.

Your deliverables will include CI/CD automation; scalable infrastructure delivered with Pulumi or Terraform; GitOps workflows set up and managed with Argo CD; resources deployed, managed, and optimized in Microsoft Azure; Docker containers built and deployments managed with Kubernetes (AKS preferred); dashboards, alerts, and logs set up in Datadog; cloud and pipeline security best practices integrated; Linux systems and cloud networking configurations managed; and clear technical documentation maintained in collaboration with cross-functional teams.

The expected effort for this role is 160 hours x 1 PMO x 24 months. The required experience is 3-6 years.
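A minimal Pulumi-in-Python sketch of the IaC work described above, assuming the `pulumi-azure-native` provider and a configured stack; the resource names and region are illustrative.

```python
# Minimal sketch: a resource group plus a storage account, managed by Pulumi.
# Requires the `pulumi` and `pulumi-azure-native` packages; run via `pulumi up`.
import pulumi
from pulumi_azure_native import resources, storage

rg = resources.ResourceGroup("platform-rg", location="CentralIndia")

account = storage.StorageAccount(
    "platformsa",                      # illustrative name; Pulumi adds a suffix
    resource_group_name=rg.name,
    location=rg.location,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

pulumi.export("storage_account_name", account.name)
```

In a GitOps flow, a stack like this would typically be previewed and applied from an Azure DevOps pipeline rather than a developer workstation.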

Posted 2 days ago


3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ideal candidate for this role, you should have hands-on experience with Ansible Tower. Your expertise should also include working with DevOps tools such as Kubernetes, Terraform, GitHub, Jenkins, Ansible Playbooks, and Ansible Tower. You should be well versed in Jenkins pipelines and possess a strong background in Linux administration within a complex, multi-tier enterprise infrastructure environment. Moreover, you must demonstrate in-depth knowledge of Ansible Tower and Ansible Engine. Your automation and scripting skills should be top-notch, including proficiency in scripting languages like Groovy. Your ability to use these tools effectively will be crucial in streamlining and optimizing our processes.
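As a hedged sketch of automating Ansible Tower from Python, here is a loop that launches a job template over the Tower/AWX REST API and polls its status. The URL, template id, and token are placeholders.

```python
# Minimal sketch: launch an Ansible Tower/AWX job template via its REST API
# and poll until it finishes. Requires `pip install requests`.
import time

import requests

TOWER = "https://tower.example.com"           # placeholder Tower/AWX URL
HEADERS = {"Authorization": "Bearer CHANGE_ME"}  # placeholder API token

launch = requests.post(
    f"{TOWER}/api/v2/job_templates/42/launch/", headers=HEADERS, timeout=30
)
launch.raise_for_status()
job_id = launch.json()["job"]

while True:
    job = requests.get(
        f"{TOWER}/api/v2/jobs/{job_id}/", headers=HEADERS, timeout=30
    ).json()
    if job["status"] in ("successful", "failed", "error", "canceled"):
        print("Job finished with status:", job["status"])
        break
    time.sleep(10)
```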

Posted 2 days ago


1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Iron Mountain, we believe that work, when done well, can have a positive impact on our customers, employees, and the planet. That's why we are looking for smart and committed individuals to join our team. Whether you are starting your career or seeking a change, we invite you to explore how you can enhance the impact of your work at Iron Mountain.

We offer expert and sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. Collaborating with over 225,000 customers worldwide, we aim to preserve valuable artifacts, optimize inventory, and safeguard data privacy through innovative and socially responsible practices. If you are interested in being part of our growth journey and expanding your skills in a culture that values diverse contributions, let's have a conversation.

As Iron Mountain progresses with its digital transformation, we are expanding our Enterprise Data Platform Team, which plays a crucial role in supporting data integration solutions, reporting, and analytics. The team focuses on maintaining and enhancing the data platform components essential for delivering our data solutions. As a Data Platform Engineer at Iron Mountain, you will leverage your advanced knowledge of cloud big data technologies, software development expertise, and strong SQL skills. The ideal candidate has a background in software development and big data engineering, with experience working in a remote environment and supporting both onshore and offshore engineering teams.

Key Responsibilities:
- Building and operationalizing cloud-based platform components
- Developing production-quality ingestion pipelines with automated quality checks to centralize access to all data sets (a sketch follows this listing)
- Assessing current system architecture and recommending solutions for improvement
- Building automation using Python modules to support product development and data analytics initiatives
- Ensuring maximum uptime of the platform using cloud technologies such as Kubernetes, Terraform, and Docker
- Resolving technical issues promptly and providing guidance to development teams
- Researching current and emerging technologies and proposing necessary changes
- Assessing the business impact of technical decisions and participating in collaborative environments to foster new ideas
- Maintaining comprehensive documentation on processes and decision-making

Your Qualifications:
- Experience with DevOps/automation tools to minimize operational overhead
- Ability to contribute to self-organizing teams within the Agile/Scrum project methodology
- Bachelor's degree in Computer Science or a related field
- 3+ years of related IT experience
- 1+ years of experience building complex ETL pipelines with dependency management
- 2+ years of experience with big data technologies such as Spark, Hive, and Hadoop
- Industry-recognized certifications
- Strong familiarity with PaaS services, containers, and orchestration
- Excellent verbal and written communication skills

What's in it for you:
- Be part of a global organization focused on transformation and innovation
- A supportive environment where you can voice your opinions and be your authentic self
- Global connectivity to learn from teammates across 52 countries
- Embrace diversity, inclusion, and differences within a winning team
- Competitive Total Reward offerings to support your career, family, wellness, and retirement

Iron Mountain is a global leader in storage and information management services, trusted by organizations worldwide. We safeguard critical business information, sensitive data, and cultural artifacts. Our services help lower costs, mitigate risks, comply with regulations, and enable digital solutions. If you require accommodations due to a disability, please reach out to us.

Category: Information Technology
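The ingestion sketch referenced above: a minimal Python loader with two automated quality gates. The required columns and thresholds are invented for the example, not Iron Mountain's actual pipeline.

```python
# Minimal sketch: an ingestion step with automated quality checks before the
# data is centralized downstream. Schema and thresholds are illustrative.
import csv

REQUIRED = {"record_id", "customer_id", "created_at"}


def ingest(path: str) -> list[dict]:
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    # Quality gate 1: schema check.
    if rows and not REQUIRED.issubset(rows[0].keys()):
        raise ValueError(f"missing columns: {REQUIRED - rows[0].keys()}")
    # Quality gate 2: completeness check before loading downstream.
    nulls = sum(1 for r in rows if not r["record_id"])
    if rows and nulls / len(rows) > 0.01:
        raise ValueError("more than 1% of rows lack a record_id")
    return rows
```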

Posted 2 days ago


4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

Congratulations on taking the first step towards a career-defining role with Seclore, where you have the opportunity to join a team of superheroes dedicated to safeguarding data wherever it goes. At Seclore, we specialize in protecting and controlling digital assets to help enterprises prevent data theft and achieve compliance. Our solutions allow granular assignment and revocation of permissions, dynamic enterprise-level access settings, asset discovery, and automated policy enforcement, helping organizations adapt to evolving security threats and regulatory requirements in real time and at scale. If you are a risk-taker, innovator, and fearless problem solver who thrives on challenges related to data security, you will love being a part of our tribe.

Position: Senior Cloud Automation Engineer
Experience: 4-7 years
Location: Mumbai

As a Senior Cloud Automation Engineer at Seclore, you will be part of a dynamic team that values self-motivated, highly energetic individuals capable of generating multiple solutions for a given problem and contributing to decision-making in a super-agile environment.

Key responsibilities include:
- Automating cloud solutions using industry-standard tools with an "Infrastructure as Code" mindset.
- Researching and staying updated on cloud provider capabilities.
- Supporting operational and stakeholder teams to ensure business continuity and customer satisfaction.
- Automating monitoring tools to maintain system health and reliability for high-uptime requirements.
- Ensuring compliance with standards, policies, and procedures.

Requirements for the next Entrepreneur at Seclore:
- Technical degree (Engineering, MCA) from a reputed institute.
- 4+ years of experience working with AWS.
- 3+ years of experience with Jenkins, Docker, Git, and Ansible.
- 5-6 years of total relevant experience.
- Strong automation-first mindset.
- Effective communication skills (verbal and written) and excellent priority management.
- Experience managing multiple production workloads on AWS.
- Familiarity with scripting (Python and Bash), configuration management (Ansible/Puppet), containers (Docker), databases (Oracle RDS), and Infrastructure as Code (Terraform/CloudFormation), and experience building secure and scalable infrastructure.

At Seclore, we view our team members as entrepreneurs rather than employees, encouraging initiative, risk-taking, a problem-solving attitude, and a tech-agnostic aptitude. Join us to work with the brightest minds in the industry and be part of a supportive and open culture that fosters growth and innovation. If you are ready to shape the future of data security and become the next Entrepreneur at Seclore, apply today! Don't worry if some of the requirements are missing from your resume; we are here to help you build it together.

Posted 2 days ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Dataction is a forward-thinking technology services firm that delivers top-notch information technology, analytics, and consulting services to esteemed international organizations. Since its establishment in 2010, Dataction has grown significantly and earned a reputation for offering innovative and dependable services to a diverse clientele across various industries. At Dataction, we connect all the dots and reimagine every business process. Our team adopts a lean, agile, and consultative approach to tackle challenges and drive execution, enabling our clients to achieve sustainable growth, secure profitability, and ensure a promising future. Our team members are known for their dedication, courage, and willingness to push boundaries, making Dataction an inspiring and dynamic workplace.

As a Sr. Engineer Data Support at Dataction, you will play a critical role within our Data Engineering support team. Your primary responsibility will be to ensure the smooth operation, stability, and performance of our data platforms and pipelines by implementing enhancements, addressing issues, and optimizing data operations. The ideal candidate has a robust technical background in Snowflake, DBT, SQL, AWS, and various data integration tools, along with exceptional problem-solving skills and effective communication abilities.

**Responsibilities:**
- Collaborate with Engineering and Business teams to resolve live issues and implement improvements.
- Investigate data-related problems, troubleshoot discrepancies, and deliver timely solutions.
- Enhance operational efficiency by automating manual processes through scripting and tool integrations.
- Conduct regular health checks and proactive monitoring to maintain system stability and performance (a sketch follows this listing).
- Manage incident response activities, including issue triage, root cause analysis, and resolution coordination.
- Keep stakeholders informed about platform status, incidents, and resolutions.
- Administer and maintain platforms across AWS, Snowflake, DBT, and related technologies.
- Ensure smooth data operations using data pipelines and tools like HVR, Stitch, Fivetran, and Terraform.
- Continuously enhance monitoring, alerting, and operational workflows to minimize downtime and boost performance.

**Qualifications, Skills, and Experience:**
- 5+ years of relevant experience in data operations.
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Proficiency in AWS, Snowflake, DBT, HVR, Stitch, Fivetran, and Terraform for managing data platforms.
- Strong analytical and problem-solving skills for issue diagnosis and resolution.
- Experience in incident management, system monitoring, and automation.
- Ability to script in SQL and Python for automation and data analysis.
- Effective communication with both technical and non-technical stakeholders.

If you are looking for a workplace that values fairness, meritocracy, empowerment, and opportunity, Dataction is the perfect fit for you. In addition to a competitive salary, joining Dataction offers:
- Excellent work-life balance with a hybrid work arrangement.
- Company-funded skill enhancement and training opportunities.
- Exciting reward and recognition programs.
- Engaging employee engagement initiatives to bond with colleagues.
- On-the-job learning exposure through involvement in new product/ideation teams.
- Quarterly one-on-one sessions with the CEO for insights on any topic of your choice.
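The health-check sketch referenced above: a small query against Snowflake's information schema that flags stale tables. The connection details and the two-day staleness rule are assumptions for illustration.

```python
# Minimal sketch: flag Snowflake tables that have not changed recently.
# Requires `pip install snowflake-connector-python`; credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",   # placeholder connection details
    user="SVC_MONITOR",
    password="CHANGE_ME",
    warehouse="MONITOR_WH",
)

cur = conn.cursor()
cur.execute("""
    SELECT table_schema, table_name, last_altered
    FROM information_schema.tables
    WHERE last_altered < DATEADD(day, -2, CURRENT_TIMESTAMP())
""")
for schema, table, last_altered in cur.fetchall():
    print(f"STALE: {schema}.{table} last altered {last_altered}")
conn.close()
```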

Posted 2 days ago


6.0 - 10.0 years

0 - 0 Lacs

Hyderabad, Telangana

On-site

As a PeopleSoft Technical Consultant specializing in Campus Solutions, you will collaborate with clients to design secure and functional GKE services. Your role will involve developing opinionated Terraform scripts for repeatable GKE cluster deployments and configuring and managing GKE deployments using the Atmos IaC framework. Additionally, you will play a key role in building GitHub Actions pipelines for automated GKE infrastructure delivery.

To excel in this role, you are required to have 6-10 years of experience in the field, along with certifications such as GCP Professional Cloud Security Engineer or Professional Cloud Architect. The position is based in Hyderabad and Bangalore, with a shift timing of 11:00 am to 8:00 pm. The compensation for this position ranges from 12 to 22 LPA based on your work experience.

If you are looking for a challenging opportunity to work with cutting-edge technologies and contribute to the development of innovative solutions, this role is ideal for you. Join our team and be part of a dynamic environment where your skills and expertise will be valued and nurtured.

Posted 2 days ago


15.0 years

0 Lacs

Delhi, India

Remote

About HighLevel: HighLevel is an AI-powered, all-in-one white-label sales and marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprising agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million domain names.

Our People: With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact: As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

About the Role: With product and customer adoption accelerating, we're unifying three critical pillars - Front-End Platform, Data Platform, and Core Platform/Infrastructure - under one leader to drive consistency, reliability, and velocity. You will define the strategy, mentor the team, and own the platform roadmap from developer experience to production uptime, while leading our new Data Residency initiative to keep customer data within required geopolitical boundaries.

Ideal Candidate Profile:
- A Technical Strategist: You have deep, hands-on experience with modern cloud-native ecosystems. You are not expected to code daily, but you can comfortably lead technical discussions on topics like cloud providers (AWS, GCP, or Azure); infrastructure as code (Terraform, Pulumi, or CloudFormation); containerization (Kubernetes, Docker); CI/CD (Jenkins, GitLab CI, GitHub Actions, or similar); data technologies, including data warehousing (e.g., Snowflake, BigQuery) and data orchestration (e.g., Airflow, dbt, Dagster); and the challenges of modern frontend development.
- A Product Thinker: You have a customer-centric approach and experience treating internal platforms as products. You know how to build a roadmap, prioritize effectively, and communicate with your users.
- An Excellent Communicator: You can articulate a complex technical vision or strategy to both technical and non-technical stakeholders, generating buy-in and excitement across the organization.

Responsibilities:
- Platform Vision & Strategy: Craft and socialize a 12-18-month roadmap that aligns business goals with engineering velocity. Prioritize "paved roads" for micro-frontends, microservices, data pipelines, infra services, and multi-region deployments to satisfy data-residency commitments.
- Front-End Platform: Build frameworks that drive consistency with reusable components and quality gates for Vue/TypeScript apps; eliminate repeated boilerplate and cut mean setup time.
- Data Platform & Residency: Standardize data ingestion, governance, lineage, and observability across MongoDB, Firestore, and Elasticsearch; introduce contract testing to guarantee schema compatibility. Roll out a data-residency architecture (e.g., multi-regional clusters, customer pinning, encryption-key isolation) that meets EU, US, and APAC requirements.
- Core Infra & Cloud: Own GKE clusters, networking, WAF/CDN, secrets, Terraform/IaC, and cloud-cost optimization.
- DevEx & Reliability: Champion GitHub + Jenkins pipelines, progressive delivery, chaos experiments, and golden-path logging/OpenTelemetry standards.
- Security, Compliance & Data Residency: Partner with Security to embed SOC 2/HIPAA controls, shift-left scanning, and policy as code, and build regional compliance playbooks (GDPR, CCPA, PDPB, etc.) together with Legal/Security.
- People Leadership: Coach and grow the team of engineers (platform, SRE, data) into a high-trust, high-ownership culture.
- Stakeholder Communication: Translate platform metrics (lead time, change-failure rate, MTTR, cost) into actionable narratives for Engineering, Product, and Exec teams.

Requirements:
- 15+ years of total engineering experience, with 5+ years leading platform/SRE/cloud teams for SaaS at scale.
- Proven success running multi-disciplinary platforms (frontend, data, infra) on a major cloud (GCP preferred) and Kubernetes.
- Hands-on depth with TypeScript/Node, container orchestration, Terraform/Helm, service meshes, and event-driven architectures.
- Demonstrated delivery of data-residency or multi-region architectures, such as GDPR-compliant EU clusters, US-only deployments, or similar.
- Track record of instituting CI/CD, contract testing, observability (Prometheus/Grafana), and chaos engineering.
- Comfort with regulated environments (SOC 2, HIPAA, or similar).
- Excellent people-leadership and cross-functional communication skills; able to influence from board-level vision to code-level reviews, fostering a strong, inclusive engineering culture of ownership, collaboration, and operational excellence.

Bonus Points:
- Experience managing managers and leading a multi-layered engineering organization.
- Experience with FinOps and driving cloud cost-optimization initiatives.
- Familiarity with Vue, Vite, and monorepo tooling.

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring and promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences, while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

You are being hired for the position of SAS CitDev Associate Engineer in Bangalore, India. As an Analyst, your primary responsibility will be to design, develop, and maintain AI-powered chatbots and conversational systems using Dialogflow CX, Vertex AI, and Terraform. You should have strong programming skills in Python and expertise in deploying scalable AI models and infrastructure through Terraform and Google Cloud Platform (GCP). Collaboration with cross-functional teams is crucial to delivering intelligent and automated customer service solutions.

As part of the flexible benefits scheme, you will enjoy a best-in-class leave policy, gender-neutral parental leave, childcare assistance reimbursement, sponsorship for industry-relevant certifications, an Employee Assistance Program, comprehensive hospitalization insurance, accident and term life insurance, and complementary health screening for individuals aged 35 and above.

Your key responsibilities will include designing and implementing AI-driven chatbots using Dialogflow CX and Vertex AI; developing and deploying conversational flows, intents, entities, and integrations; using Terraform to manage cloud infrastructure; using GCP services to deploy AI models; maintaining Python scripts for data processing; collaborating with data scientists to integrate ML models; ensuring the data security, scalability, and reliability of AI systems; monitoring chatbot performance; and creating technical documentation for AI systems and infrastructure.

To excel in this role, you should have proven experience with GCP services, strong programming skills in Python, experience with Terraform, hands-on experience with Dialogflow CX and Vertex AI, familiarity with deploying scalable AI models and infrastructure, excellent problem-solving skills, attention to detail, and the ability to collaborate effectively with cross-functional teams.

You will receive training and development opportunities, coaching and support from experts on your team, and a culture of continuous learning to aid progression. The company, Deutsche Bank Group, promotes a positive, fair, and inclusive work environment and welcomes applications from all individuals. Visit the company website for further information: https://www.db.com/company/company.htm
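To give a concrete feel for the Dialogflow CX plus Python work this role describes, below is a minimal sketch of sending one utterance to a CX agent, assuming the google-cloud-dialogflow-cx client library. The project, location, agent, and session identifiers are placeholders, and production code would add authentication setup, retries, and error handling.

```python
from google.cloud import dialogflowcx_v3 as cx

# Placeholder identifiers -- substitute your own project/agent values.
PROJECT, LOCATION, AGENT, SESSION = "my-project", "us-central1", "my-agent-id", "session-001"

def detect_intent(text: str, language_code: str = "en") -> list[str]:
    """Send one utterance to a Dialogflow CX agent and return its text replies."""
    # Regional agents are served from a region-specific API endpoint.
    client = cx.SessionsClient(
        client_options={"api_endpoint": f"{LOCATION}-dialogflow.googleapis.com"}
    )
    session = client.session_path(PROJECT, LOCATION, AGENT, SESSION)
    request = cx.DetectIntentRequest(
        session=session,
        query_input=cx.QueryInput(
            text=cx.TextInput(text=text),
            language_code=language_code,
        ),
    )
    response = client.detect_intent(request=request)
    # Collect the agent's fulfillment text messages, skipping non-text responses.
    return [
        " ".join(m.text.text)
        for m in response.query_result.response_messages
        if m.text.text
    ]

if __name__ == "__main__":
    print(detect_intent("I want to reset my password"))
```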

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

We are seeking a skilled Observability & Site Reliability Engineer to join the team supporting large-scale, enterprise-grade infrastructure. The ideal candidate has extensive experience with observability tools such as Grafana, Loki, Mimir, and Kubernetes metrics/logs, and a strong passion for performance, scalability, and system uptime. Candidates must have 8 to 12 years of experience and be able to join within an immediate to 30-day notice period.

Key Must-Have Skills:
- 5+ years of experience in Observability Engineering.
- Expertise in Grafana, Loki, Mimir, and the Alloy agent.
- Strong understanding of infrastructure metrics such as GPU, CPU, and Kubernetes metrics.
- Proficiency in scripting languages such as Python, Go, and Bash.
- Prior exposure to tools like Prometheus, ELK, Docker, and Terraform.
- Flexibility to collaborate with Korean stakeholders and work within the Korean time zone.

Role Highlights:
- Design and manage the observability stack across large-scale data center infrastructure.
- Build scalable telemetry systems, dashboards, alerts, and reports.
- Apply Site Reliability Engineering (SRE) best practices to ensure system reliability and performance.
- Troubleshoot real-time issues and contribute to ongoing system optimization.

Good to Have:
- Previous experience working with Korean stakeholders.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
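As one small example of the telemetry-building described above, a host-level exporter written with the prometheus-client library can publish CPU and memory gauges for Prometheus to scrape. The metric names, port, and 15-second refresh interval below are illustrative choices, not this team's actual conventions.

```python
import time

import psutil  # pip install psutil prometheus-client
from prometheus_client import Gauge, start_http_server

# Gauge names here are illustrative, not an agreed naming convention.
CPU_UTIL = Gauge("node_cpu_utilization_percent", "Host CPU utilization (%)")
MEM_UTIL = Gauge("node_memory_utilization_percent", "Host memory utilization (%)")

if __name__ == "__main__":
    # Prometheus scrapes http://<host>:9105/metrics
    start_http_server(9105)
    while True:
        CPU_UTIL.set(psutil.cpu_percent(interval=None))
        MEM_UTIL.set(psutil.virtual_memory().percent)
        time.sleep(15)  # roughly match a typical scrape interval
```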

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

The ideal candidate for this position should possess a Bachelor's degree in computer science or equivalent and have 3 to 5 years of experience with AWS cloud services, along with 3 to 5 years of experience architecting and implementing fully automated, secure, reliable, scalable, and resilient multi-cloud/hybrid-cloud solutions.

A successful candidate will have a proven history of developing scripts to automate infrastructure tasks and be a seasoned Infrastructure-as-Code developer, with Terraform proficiency strongly preferred. Experience with Identity and Access Management and practical experience with version control systems (preferably Git) are also required. Production-level experience with containerization (specifically Docker) and orchestration (such as Kubernetes) is expected, as is proficiency in scripting languages like Bash and Python.

Strong written and verbal communication skills are necessary for effective collaboration in a cross-functional environment. The candidate should be familiar with Agile methodology concepts and be able to thrive in a collaborative work environment.
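The scripting-for-automation requirement above often translates into small audit tools. Below is a minimal sketch using boto3 that lists running EC2 instances missing a required tag; the tag key, region, and the tagging policy itself are assumptions made for illustration, and credentials are expected to be configured in the environment.

```python
import boto3  # assumes AWS credentials are available in the environment

REQUIRED_TAG = "owner"  # hypothetical tagging policy

def untagged_instances(region: str = "us-east-1") -> list[str]:
    """Return the IDs of running instances missing the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    # Paginate so the audit also works in accounts with many instances.
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    print(untagged_instances())
```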

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

You will be responsible for owning the lifecycle of OpenShift clusters, both on-prem and in the public cloud. This includes architecture design, provisioning, upgrades, patching, and ensuring high availability. You will administer and support OpenShift Container Platform as well as other Kubernetes-based environments such as EKS, AKS, and GKE.

You will serve as the primary escalation point for complex incidents, ensuring root-cause resolution and driving continuous improvement. You will oversee cluster-wide networking, storage integration, and ingress/egress configurations, and ensure the secure exposure of workloads. In the event of incidents, you will lead incident response, coordinate stakeholders, and conduct post-mortem reviews with precision. You will also assist in change management for container platform lifecycle events, including OpenShift version upgrades, SysAdmin responsibilities, hotfix deployments, and feature enhancements. Your contribution to Airtel's Enterprise Container Strategy will involve identifying opportunities for performance optimization, availability improvements, and enhanced resiliency.

In the realm of architecture, configuration, and automation, you will architect GitOps-driven CI/CD workflows using tools like Argo CD, Tekton Pipelines, Airflow, Helm, and S2I, and lead the implementation and optimization of monitoring and alerting systems using technologies such as Prometheus, Grafana, Alertmanager, and the ELK stack. Automating operational processes with Python, Bash, and Ansible will be crucial to reducing manual toil and improving system resilience. You will also be responsible for ensuring configuration consistency using Infrastructure-as-Code tools such as Terraform and Ansible.

Regarding security, compliance, and governance, you will define and enforce security policies including RBAC, Network Policies, Pod Security Policies, and image-scanning tools. Leading security assessments, remediating vulnerabilities, and enforcing policies aligned with compliance mandates will be part of your duties. You will collaborate with the Information Security (InfoSec) teams to implement audit logging, incident response controls, and container-hardening measures.

In the realm of networking, storage, and system integration, you will lead advanced OpenShift networking operations such as ingress controller tuning, multi-tenant isolation, MetalLB, hybrid DNS, service meshes (Istio), and egress control. Integrating persistent storage solutions like Ceph, SAN/NAS, and Object Storage using CSI drivers, dynamic provisioning, and performance tuning will also fall under your purview.
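One small illustration of the policy-enforcement automation this role calls for: a sketch using the official Python kubernetes client to flag namespaces that have no NetworkPolicy at all, since such namespaces permit unrestricted pod-to-pod traffic by default. This is an illustrative script, not Airtel's actual tooling, and it assumes a reachable kubeconfig.

```python
from kubernetes import client, config  # pip install kubernetes

def namespaces_without_network_policy() -> list[str]:
    """List namespaces that have no NetworkPolicy defined.

    A namespace with zero policies allows all pod-to-pod traffic by
    default, which usually violates multi-tenant isolation rules.
    """
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    net = client.NetworkingV1Api()
    offenders = []
    for ns in core.list_namespace().items:
        name = ns.metadata.name
        if not net.list_namespaced_network_policy(name).items:
            offenders.append(name)
    return offenders

if __name__ == "__main__":
    for name in namespaces_without_network_policy():
        print(f"namespace {name} has no NetworkPolicy")
```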

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Site Reliability Engineer Developer - Analyst at Goldman Sachs in Bengaluru, your role encompasses the discipline of Site Reliability Engineering (SRE). SRE combines software and systems engineering to build and manage large-scale, fault-tolerant systems. In this position, you are entrusted with the critical responsibility of ensuring the availability and reliability of the firm's platform services to meet the needs of both internal and external users. Collaboration with business stakeholders is a key aspect of the work, developing and sustaining production systems that can adapt swiftly to the dynamic global business landscape of the organization.

The SRE team focuses on developing and maintaining the platforms that help GS Engineering Teams meet Observability requirements and manage SLAs. Your responsibilities include the design, development, and operation of distributed systems that provide observability for Goldman's mission-critical applications and platform services across on-premises data centers and various public cloud environments. The team's core functions include providing tools for alerting, metrics and monitoring, log collection and analysis, and tracing. These tools are used by numerous engineers daily, making the reliability of system features paramount.

In this role, you will collaborate with internal stakeholders, vendors, product owners, and fellow SREs to design and implement a large-scale distributed system capable of managing alert generation, metrics collection, log collection, and trace events efficiently. Operating in a production environment spanning cloud and on-premises data centers, you will be instrumental in defining observability features and driving their execution.

Basic qualifications for this role include a minimum of 2 years of relevant work experience; proficiency in languages such as Java, Python, Go, and JavaScript, along with the Spring framework; expertise in using Terraform for infrastructure deployment and management; strong programming skills spanning code development, debugging, testing, and optimization; a solid background in algorithms, data structures, and software design; and experience in distributed systems design, maintenance, and troubleshooting.

Preferred experience includes familiarity with cloud-native solutions on AWS or GCP, working knowledge of tools like Prometheus, Grafana, and PagerDuty, and experience with databases such as PostgreSQL, MongoDB, and Elasticsearch. Proficiency with open-source messaging systems like RabbitMQ and/or Kafka, as well as hands-on systems experience in UNIX/Linux and networking, especially scaling for performance and debugging complex distributed systems, is advantageous.
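To make the alert-generation work concrete, one common building block in such pipelines is deduplication by alert fingerprint. The sketch below is a toy in-memory version: the identity fields and the five-minute suppression window are assumptions, and a real system would persist state and coordinate across replicas.

```python
import hashlib
import time

# Minimal sketch of alert deduplication, a core concern in alert pipelines.
# Field names and the 5-minute window are illustrative assumptions.
DEDUP_WINDOW_SECONDS = 300
_last_seen: dict[str, float] = {}

def fingerprint(alert: dict) -> str:
    """Stable fingerprint over identity fields, ignoring volatile ones like timestamps."""
    key = "|".join(f"{k}={alert[k]}" for k in sorted(("service", "check", "severity")))
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def should_notify(alert: dict, now: float | None = None) -> bool:
    """Suppress repeats of the same alert inside the dedup window."""
    now = time.time() if now is None else now
    fp = fingerprint(alert)
    last = _last_seen.get(fp)
    _last_seen[fp] = now
    return last is None or (now - last) > DEDUP_WINDOW_SECONDS

if __name__ == "__main__":
    a = {"service": "payments", "check": "latency_p99", "severity": "critical"}
    print(should_notify(a))  # True -- first occurrence
    print(should_notify(a))  # False -- duplicate within the window
```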

Posted 2 days ago

Apply