
7730 Terraform Jobs - Page 33

JobPe aggregates listings for easy access to applications; you apply directly on the original job portal.

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Our Client: Founded by distinguished serial entrepreneurs with multiple high-value, well-known exits. Team culture: transparent, "no-heroes", shared ownership, continuous learning.

The Role: You will lead the development of a high-scale AI Prediction Platform powering critical decisions, leading engineering for a data-intensive product and owning architecture, team growth, and platform scalability.

What You'll Own:
- End-to-end development of the AI Prediction Platform: architecture, code quality, performance, and system integration.
- Direct management of a 10-15 member engineering team (scaling to ~20): set direction, grow leaders, and foster a high-performance culture rooted in shared ownership and transparency.
- Translating business priorities into robust technical execution across product, design, and data functions (North America + India).
- Serving as the technical face of engineering internally and externally, owning escalations, technical positioning, and stakeholder trust.

Technical Scope:
- Tech stack: React (TypeScript), FastAPI, Python, Databricks, Dagster, Terraform, AWS, dltHub, Nixtla, LangChain/LangGraph.
- Tools and standards: Jest, Playwright, Pytest, Azure DevOps, Docker, Checkov, SonarCloud.
- Deep experience with full-stack engineering, distributed systems, and scalable data pipelines is essential.
- Hands-on background with modern SaaS architecture, TDD, and infrastructure as code.

What We're Looking For:
- 10+ years of engineering experience, with 5+ years leading engineering teams or teams-of-teams.
- Proven success building complex B2B or enterprise SaaS products at scale.
- Strong recent hands-on experience (Python, SQL, React, etc.) with architectural and production ownership.
- Experience managing and growing distributed engineering teams.
- Deep understanding of system design, DevOps culture, and AI/ML-enabled platforms.
- Strong cross-functional leadership with clarity in communication and execution alignment.

Write to sanish@careerxperts.com to get connected!

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About American Airlines: To Care for People on Life's Journey®. Together with our American Eagle regional partners, we offer thousands of flights daily to more than 350 destinations in more than 60 countries. American Airlines is transforming the way it delivers technology to its customers and team members worldwide. American's Tech Hub in Hyderabad, India, is our latest technology office location and home to team members who drive technical innovation and engineer unrivalled digital products to best serve American's customers and team members. With U.S. tech hubs in Dallas-Fort Worth, Texas and Phoenix, Arizona, our new team in Hyderabad enables better support of our 24/7 operation and positions American to deliver industry-leading technology solutions that create a world-class customer experience.

Cloud Engineering

What you'll do (this list reflects the current job, but there may be additional essential and non-essential functions not referenced; management may modify the job or require other tasks whenever it is deemed appropriate, observing any legal obligations, including collective bargaining obligations):
- Be part of the Business Intelligence Platform team and ensure all our systems (Cognos, Power BI, Tableau, Alteryx, and Grafana) are up, running, and performing optimally.
- Support automation of platform infrastructure processes using PowerShell, Python, and other tools to improve platform stability and scalability.
- Troubleshoot platform issues and other complex issues with cloud BI solutions: Windows and Linux servers, IIS, application gateways, firewalls and networks, complex SQL, etc.
- Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), data modeling, testing, performance tuning, deployments, consumption, BI, alerting, and production support.
- Provide technical leadership and collaborate within a team environment as well as work independently.
- Be part of a DevOps team that completely owns and supports its product.
- Lead development of coding standards, best practices, and privacy and security guidelines.
- Ensure systems are security compliant and patched per cybersecurity guidelines.

All you'll need for success:

Minimum Qualifications - Education & Prior Job Experience:
- Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training.
- 3 years of business intelligence development using agile and DevOps in a product model, including designing, developing, and implementing large-scale applications or data engineering solutions.
- 3 years of data analytics experience using SQL.
- 2 years of cloud development and data lake experience (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Data Lake, Azure Power Apps, and Power BI.
- A combination of development, administration, and support experience in several of the following tools/platforms:
  - Scripting: Python, SQL, PowerShell
  - Basic Azure infrastructure: servers, networking, firewalls, storage accounts, app gateways, etc.
  - CI/CD: GitHub, Azure DevOps, Terraform
  - BI analytics tool administration on any one of: Cognos, Tableau, Power BI, Alteryx

Preferred Qualifications - Education & Prior Job Experience:
- 3+ years of data analytics experience, specifically in business intelligence development, requirements gathering, and training end users.
- 3+ years administering data platforms (Tableau, Cognos, or Power BI) at scale.
- 3+ years of analytics solution development using agile and DevOps in a product model, including designing, developing, and implementing large-scale applications or data engineering solutions.
- Airline industry experience.

Skills, Licenses & Certifications:
- Certification in administration of any BI tool.
- Expertise with the Azure technology stack for data management, data ingestion, capture, processing, curation, and creating consumption layers.
- Expertise in providing practical direction within Azure native cloud services.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Noida

On-site


About the Role: HashiCorp is looking for a high-caliber, customer-facing engineering professional to join its Support Engineering team in Noida, India. This is an exciting opportunity to join a small team and have a direct impact on HashiCorp's fast-growing business. This highly visible position will be an integral part of both the support engineering and Terraform Open Source/Enterprise teams. You are a fit if you thrive in a fast-paced culture that values essential communication, collaboration, and results. You are a self-motivated, detail-oriented individual with an eye for automation, process improvement, and problem solving.

Reporting to the Manager, Support Engineering, the Support Engineer will be a key member of the Customer Success organization and will directly impact customer satisfaction and success. The Support Engineer will troubleshoot complex issues related to Terraform Enterprise and independently work to find viable solutions. They will contribute to product growth and development via weekly product and marketing meetings, attend customer meetings as needed to help identify, debug, and resolve customer issues, and act as a liaison between the customer and HashiCorp engineering. When possible, the Support Engineer will update and improve product documentation, guide feature development, and implement bug fixes based on customer feedback.

Responsibilities:
- Triage and solve incoming support requests via Zendesk within SLA.
- Document and record all activity and communication with customers in accordance with both internal and external security standards.
- Reproduce and debug customer issues by building or using existing tooling or configurations.
- Collaborate with engineers, sales engineers, sales representatives, and technical account managers to schedule, coordinate, and lead customer installs or debugging calls.
- Contribute to knowledge base articles and best-practices guides.
- Continuously improve processes and tools for routine, repetitive support tasks.
- Periodic on-call rotation for production-down issues.
- Weekly days off scheduled on rotation, on any day of the week.

Requirements:
- 4+ years of Support Engineering, Software Engineering, or System Administration experience.
- Expertise in open source and SaaS is a major advantage.
- Excellent presence; strong written and verbal communication skills.
- Upbeat, passionate, and unparalleled customer focus.
- Well-organized, excellent work ethic, attention to detail, and self-starting.
- Experience managing and influencing change in organizations.
- Working knowledge of Docker and Kubernetes.
- Familiarity with networking concepts.
- Experience developing a program, script, or tool that was released or used is an advantage.
- Strong understanding of Linux or Windows command-line environments.
- Interest in cloud adoption and technology at scale.

Goals:

30 days - you should be able to:
- Write a simple Terraform configuration and apply it in Terraform Enterprise (TFE) to deploy infrastructure.
- Holistically understand (P)TFE and its interaction with the Terraform ecosystem.
- Successfully perform all common workflows within Terraform Enterprise.
- Make one contribution to extend or improve product documentation or install guides.
- Answer Level 1 support inquiries with minimal assistance.

60 days - you should be able to:
- Effectively triage and respond to Level 1 and 2 inquiries independently.
- Provision and bootstrap a (P)TFE instance with low touch from engineering.
- Ride along on 1-2 live customer install calls.
- Locate and unpack customer log files; be familiar with their contents.
- Apply Terraform configurations to deploy infrastructure in AWS, Azure, and Google Cloud.
- Author one customer knowledge base article from an area of subject-matter expertise.

90 days - you should be able to:
- Effectively triage and respond to a production-down issue with minimal assistance.
- Run point on a live customer install without assistance.
- Independently find points of error, identify root cause in customer log files, and report relevant details to engineering.
- Implement small bug fixes or feature improvements.
- Reproduce a Terraform bug or error by creating a suitable configuration.

Education: Bachelor's degree in Computer Science, IT, Technical Writing, or equivalent professional experience.

#LI-Hybrid #LI-SG1

"HashiCorp has been acquired by IBM and will be integrated into the IBM organization. HashiCorp will be the hiring entity. By proceeding with this application you understand that HashiCorp will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: link to IBM privacy statement."
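The 30-day goal above ("write a simple TF configuration and apply it to deploy infrastructure") might look like the following minimal sketch. Every specific here (provider, region, bucket name, tags) is an illustrative assumption, not taken from the listing:

```hcl
# Minimal Terraform configuration: one AWS S3 bucket.
# All names and values below are hypothetical examples.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # example region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-support-bucket" # must be globally unique

  tags = {
    Team = "support-engineering"
  }
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` against a file like this exercises the core workflow that Terraform Enterprise manages as runs.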

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description: The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer with machine learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team, collaborating directly and continuously with other engineers, business partners, product managers, and designers from distributed locations, and releasing early and often. The team is focused on building the Mach1ML platform, an AI/ML enablement platform to democratize machine learning across the Ford enterprise (like OpenAI's GPT or Facebook's FBLearner) and deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this agile team. This is an exciting, fast-paced role which requires outstanding technical and organizational skills combined with critical thinking, problem solving, and agile management tools to support team success.

What you'll be able to do: As a Software Engineer, you will develop features for the Mach1ML platform and support customers in model deployment using Mach1ML on GCP and on-prem. You will follow Rally to manage your work and incorporate an understanding of product functionality and the customer perspective for model deployment, working on cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc.

Position Responsibilities:
- Work closely with the Tech Anchor, Product Manager, and Product Owner to deliver machine learning use cases using the Ford Agile Framework.
- Work with data scientists and ML engineers to tackle challenging AI problems.
- Work on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems.
- Help innovate by researching state-of-the-art deployment tools and sharing knowledge with the team.
- Lead by example in the use of paired programming for cross-training/upskilling, problem solving, and speed to delivery.
- Leverage the latest GCP, CI/CD, and ML technologies.
- Critical thinking: influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit.
- Modeling: assess and evaluate the impact of missing/unusable data; design and select features; develop and implement statistical/predictive models using advanced algorithms on diverse data sources; test and validate models such as forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc.
- Analytics: leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions, and translate analytical findings and recommendations into business strategies or engineering decisions, with statistical confidence.
- Data engineering: craft ETL processes to source and link data in preparation for model/algorithm development, including domain expertise of data sets in the environment, third-party data evaluations, and data quality.
- Visualization: build visualizations to connect disparate data, find patterns, and tell engaging stories, both scientific and geographic, using applications such as Seaborn, Qlik Sense, Power BI, Tableau, Looker Studio, etc.

Qualifications - Minimum Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field, or a combination of education and equivalent experience.
- 3+ years of experience in full-stack software development.
- 3+ years of experience in cloud technologies and services, preferably GCP.
- 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years of experience with Python, SQL, and BQ.
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models.
- Experience with Hugging Face, Chainlit, Streamlit, React.
- Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
- Ability to adapt quickly to open-source products and tools to integrate with ML platforms.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Preferred Requirements:
- Master's degree in computer science, engineering, or a related field, or a combination of education and equivalent experience.
- Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product, design, business, or strategy.
- Proficiency in programming languages such as Python, with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools.
- Use of tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
- Design and implementation of cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
- Integration of machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP), ensuring scalability, performance, and proactive monitoring.
- Monitoring solutions to track the performance, health, and security of systems and applications, using tools such as Prometheus, Grafana, and other relevant monitoring tools.
- Code reviews and constructive feedback to team members on machine learning-related projects.
- Knowledge of and experience in agentic-workflow-based application development and DevOps.
- Staying up to date with the latest trends and advancements in machine learning and data science.

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What You Will Do:
- Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
- Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments.
- Develop regression suites and automation scenarios, and move automation to an agile continuous-testing model.
- Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need:
- Bachelor's degree in a STEM major or equivalent experience.
- 5-7 years of software testing experience.
- Able to create and review test automation according to specifications.
- Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS.
- Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others with respect to software validation.
- Created test strategies and plans.
- Led complex testing efforts or projects.
- Participated in sprint planning as the test lead.
- Collaborated with product owners, SREs, and technical architects to define testing strategies and plans.
- Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes.
- Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm charts, and Terraform constructs.
- Cloud certification strongly preferred.

What Could Set You Apart: An ability to demonstrate successful performance of our Success Profile skills, including:
- Attention to detail: define test-case candidates for automation that are outside of product specifications (i.e. negative testing); create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards.
- Automation: automate defined test cases and test suites per project.
- Collaboration: collaborate with product owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans.
- Execution: develop scalable and reliable automated tests; develop performance-testing scripts to assure products adhere to the documented SLOs/SLIs/SLAs; specify the test-data types needed for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points.
- Quality control: perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and improve processes; analyze results of functional and non-functional tests and make recommendations for improvements.
- Performance/resilience: understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure products meet SLAs/SLOs.
- Quality focus: review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates.
- Risk mitigation: work with product owners, QE, and development team leads to track and prioritize defect fixes.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


DevOps Engineer (3-5 Years)

Location: Lower Parel, Mumbai

Expectations:
- Building and setting up new development tools and infrastructure.
- Understanding the needs of stakeholders and conveying these to developers.
- Working on ways to automate and improve development and release processes.

Experience required: 3-5+ years of professional experience.

Responsibilities:
- Building and setting up new development tools and infrastructure.
- Strong knowledge of AWS.
- Strong Linux and Windows system administration background.
- Understanding the needs of stakeholders and conveying these to developers.
- Working on ways to automate and improve development and release processes.
- Improve CI/CD tooling.
- Implement, maintain, and improve monitoring and alerting.
- Build and maintain highly available systems.
- Testing and examining code written by others and analysing results.
- Ensuring that systems are safe and secure against cybersecurity threats.
- Working with software developers and software engineers to ensure that development follows established processes and works as intended.
- Assisting product managers with DevOps planning, execution, and query resolution.
- Optimising infrastructure; experience working with Docker or Kubernetes.
- Database (MySQL, Postgres, MongoDB, etc.) installation and management.
- Knowledge of network technologies such as TCP/IP, DNS, and load balancing.
- Must know at least one programming language.

Skills required:
- Deploy updates and fixes.
- Proficiency with Git.
- Optimise infrastructure costs.
- Provide technical support.
- Perform root cause analysis of production errors.
- Investigate and resolve technical issues.
- Develop scripts to automate visualisation.
- Design procedures for system troubleshooting and maintenance.
- Document the architecture, software used, and processes followed for projects.
- Proficiency with at least one Infrastructure as Code (IaC) tool such as Ansible, Terraform, Chef, or Puppet.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description:
- Assess and understand the application implementation while working with architects and business experts.
- Analyse business and technology challenges and suggest solutions to meet strategic objectives.
- Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes.
- Migrate .NET Core and/or Framework web/API/batch components deployed in PCF Cloud to OpenShift, working independently.
- Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Design and implement unit test scripts, automated with NUnit, to achieve 80% code coverage.
- Perform back-end code reviews and ensure compliance with Sonar scans, Checkmarx, and Black Duck to maintain code quality.
- Write functional automation test cases for system integration using Selenium.
- Coordinate with architects and business experts across the application to translate key requirements.

Required Qualifications:
- 4+ years of experience in .NET Core (3.1 and above) and/or .NET Framework (4.0 and above) development (coding, unit testing, functional automation), implementing microservices, REST APIs, batch/web components, reusable libraries, etc.
- Proficiency in C# with good knowledge of VB.NET.
- Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years on OpenShift.
- Familiarity with cloud-native patterns, microservices, and application modernization strategies.
- Experience with monitoring and logging tools like Splunk, Log4j, Prometheus, Grafana, the ELK stack, AppDynamics, etc.
- Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, uDeploy).
- Proficiency in databases such as MS SQL Server, Oracle 11g/12c, MongoDB, DB2.
- Experience integrating front-end applications with back-end services.
- Experience with code versioning using Git and GitHub.
- Familiarity with job scheduling through AutoSys and PCF batch jobs.
- Familiarity with scripting languages like shell, and Helm chart modules.

This role works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to reach standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bihar

On-site


Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose: The purpose of this role is to work with application teams and developers to facilitate better coordination among operations, development, and testing functions by automating and streamlining the integration and deployment processes.

Do:
- Align and focus on continuous integration (CI) and continuous deployment (CD) of technology in applications.
- Plan and execute the DevOps pipeline that supports the application life cycle across the DevOps toolchain, from planning, coding, and building to testing, staging, release, configuration, and monitoring.
- Manage the IT infrastructure as per the requirements of the supported software code.
- On-board an application onto the DevOps tool and configure it as per the client's needs.
- Create user access workflows and provide user access as per the defined process.
- Build and engineer the DevOps tool as per the customization suggested by the client.
- Collaborate with development staff to tackle the coding and scripting needed to connect elements of the code required to run the software release with operating systems and production infrastructure.
- Leverage tools to automate testing and deployment in a DevOps environment.
- Provide customer support/service on the DevOps tools, supporting internal and external customers on multiple platforms in a timely manner.
- Resolve tickets raised on these tools within the specified TAT, ensuring adequate resolution with customer satisfaction.
- Follow the escalation matrix/process as soon as a resolution becomes complicated or isn't resolved.
- Troubleshoot and perform root cause analysis of critical/repeatable issues.

Deliver:
1. Continuous integration, deployment & monitoring. Measure: 100% error-free onboarding and implementation.
2. CSAT. Measure: timely customer resolution as per TAT; zero escalations.

Mandatory Skills: DevOps, CI/CD, Terraform, and GitHub Actions.

Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


As a Senior DevOps Engineer, you will be responsible for enhancing and integrating DevOps practices into our development and operational processes. You will work collaboratively with software development, quality assurance, and IT operations teams to implement CI/CD pipelines, automate workflows, and improve deployment processes to ensure high-quality software delivery.

Key Responsibilities:
- Design and implement CI/CD pipelines to automate build, test, and deployment processes.
- Collaborate with development and operations teams to improve existing DevOps practices and workflows.
- Deploy and manage container orchestration platforms such as Kubernetes and Docker.
- Monitor system performance and troubleshoot issues to ensure high availability and reliability.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Participate in incident response and root cause analysis activities.
- Establish best practices for DevOps processes, security, and compliance.

Qualifications and Experience:
- Bachelor's degree with a DevOps certification.
- 7+ years of experience in a DevOps or related role.
- Proficiency in cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI.
- Development (Java, Python, etc.): advanced.
- Kubernetes usage and administration: advanced.
- AI: intermediate.
- CI/CD development: advanced.
- Strong collaboration and communication skills.

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery Automate cloud infrastructure using Terraform Write unit tests, integration tests and performance tests Work in a team environment using agile practices Monitor and optimize application performance and infrastructure costs Collaborate with data scientists and other developers to integrate and deploy data science models into production environments Work closely with cross-functional teams to ensure seamless integration and operation of services Proficiency in JavaScript for full-stack development Strong experience with AWS cloud services, including EKS, Lambda, and S3 Knowledge of Docker containers and orchestration tools including Kubernetes

Posted 3 days ago

Apply

5.0 - 6.0 years

7 - 8 Lacs

Chennai

Work from Office

Naukri logo

Responsibilities: Lead end-to-end delivery of Golang banking/payments backend system from design to deployment, ensuring speed, reliability, and compliance with banking regulations. Mentor and guide junior developers. Collaborate with product managers, QA engineers, and DevOps teams Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience: 5-6 years of overall software development experience. At least 2 years of hands-on experience in Golang (mandatory). Proven experience building backend systems from scratch. Technical Skills (Mandatory): Backend Development: Golang expertise in developing high-performance backend systems. Databases: MongoDB (preferred) OR experience with SQL databases (e.g., PostgreSQL, MySQL). Messaging Systems: NATS.io (preferred) OR Kafka, RabbitMQ, IBM MQ. API Protocols: gRPC (preferred) OR RESTful APIs. Exposure to microservices architecture and distributed systems. Experience with AI-assisted coding tools (e.g., GitHub Copilot, Cline) Familiarity with CI/CD pipelines and version control (Git). Frontend: Exposure to Angular, React, or similar frameworks Preferred Skills (Not Mandatory): Banking Domain Knowledge: ISO8583, ISO20022, ACH/WIRE, FedNow, RTP, Card Payments, Double-Entry Accounting. Cloud & DevOps: AWS, Docker, Kubernetes, Terraform, or Nomad.

Posted 3 days ago

Apply

10.0 - 15.0 years

12 - 22 Lacs

Pune

Hybrid

Naukri logo

So, what’s the role all about? The Senior Specialist Technical Support Engineer role delivers technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization. How will you make an impact? Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS. Interface with various R&D groups, Customer Support teams, business partners and customers globally to address CSS Recording and Compliance application-related product issues and resolve high-level issues. Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar. Collaborate with development and operations teams to identify root causes and implement long-term solutions. Provide technical guidance and mentorship to junior support engineers. Act as an escalation point for critical customer issues, ensuring timely resolution and communication. Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency. Participate in on-call rotations and incident response efforts. Have you got what it takes? 10+ years of experience in technical support, site reliability engineering, or performance engineering roles. Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation.
Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments. Strong knowledge of networking, load balancing, and distributed systems. Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Excellent communication and customer-facing skills. Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer). Experience with observability platforms (e.g., Prometheus, Grafana, Splunk). Familiarity with CI/CD pipelines and DevOps practices. Experience working in ITIL or similar support frameworks. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7554 Reporting into: Tech Manager Role Type: Individual Contributor
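For the log-analysis work described above, a common first step is pulling request latencies out of raw log lines and computing a tail percentile. A minimal Python sketch; the log format and the `latency_ms` field are hypothetical, not tied to CloudWatch or Datadog:

```python
import re

LOG_LINES = [
    "2024-05-01T10:00:01Z GET /api/orders 200 latency_ms=120",
    "2024-05-01T10:00:02Z GET /api/orders 200 latency_ms=340",
    "2024-05-01T10:00:03Z GET /api/orders 500 latency_ms=2100",
    "2024-05-01T10:00:04Z GET /api/orders 200 latency_ms=95",
]

def p95(latencies):
    """Nearest-rank 95th percentile of a list of latencies."""
    ordered = sorted(latencies)
    rank = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[rank]

# Extract the latency_ms value from every line that carries one.
latencies = [int(m.group(1)) for line in LOG_LINES
             if (m := re.search(r"latency_ms=(\d+)", line))]
print(p95(latencies))  # → 2100 (the slow 500 dominates the tail)
```

Tail percentiles like p95/p99 surface the outliers that averages hide, which is usually where the root cause lives.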

Posted 3 days ago

Apply

5.0 - 15.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

TCS Hiring for PowerApps Developer Experience: 5 to 15 Years Only Job Location: New Delhi, Pune, Hyderabad Required Technical Skill Set: Power Apps Power Automate SharePoint MS Dynamics 365 CRM Dataverse Power BI The PowerApps developer performs the activities below: 1. System Reliability and Uptime: Ensure systems are stable, reliable, and available by implementing monitoring and alerting systems. Quickly detect, respond to, and resolve system outages or performance degradation, often through automated systems. Conduct post-incident reviews to identify the root cause of system failures and prevent recurrence. 2. Automation and Infrastructure as Code: Use automation tools to handle repetitive tasks, like deployments, monitoring, and scaling, to reduce manual intervention. Implement and manage infrastructure as code (IaC) using tools like Terraform, Ansible, or Azure ARM templates, ensuring consistency across environments. 3. Performance Tuning and Optimization: Ensure that the infrastructure is scalable and can handle increased loads by anticipating future demands. Continuously optimize system performance, from software to hardware, by analyzing bottlenecks and implementing improvements. 4. Collaboration and Cross-Functional Communication: Work closely with dev teams to ensure reliability and scalability are built into the software from the ground up. Ensure proper documentation and knowledge sharing on system architecture, operations procedures, and incident handling. 5. Security and Compliance: Ensure systems are secure by implementing monitoring, logging, and alerting solutions for any security breaches or vulnerabilities. Ensure that systems comply with relevant regulations and industry standards (e.g., GDPR, ISO, SOC2). Kind Regards, Priyankha M
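The reliability-and-uptime duties in item 1 are usually tracked against an availability SLO and an error budget. A minimal Python sketch of the arithmetic; the 99.9% SLO and downtime figures are illustrative values, not from the posting:

```python
def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Uptime as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def error_budget_left(slo_pct: float, total_minutes: float, downtime_minutes: float) -> float:
    """Minutes of downtime still allowed before the SLO is breached."""
    allowed = total_minutes * (100.0 - slo_pct) / 100.0
    return allowed - downtime_minutes

MONTH = 30 * 24 * 60  # a 30-day window, in minutes

# 21.6 minutes of downtime in a month is exactly half a 99.9% budget.
print(round(availability(MONTH, 21.6), 2))             # → 99.95
print(round(error_budget_left(99.9, MONTH, 21.6), 1))  # → 21.6
```

Framing incidents as budget spend, rather than raw outage counts, is what lets post-incident reviews prioritize which failure modes to fix first.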

Posted 3 days ago

Apply

5.0 - 15.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Linkedin logo

TCS Hiring for PowerApps Developer Experience: 5 to 15 Years Only Job Location: New Delhi, Pune, Hyderabad Required Technical Skill Set: Power Apps Power Automate SharePoint MS Dynamics 365 CRM Dataverse Power BI The PowerApps developer performs the activities below: 1. System Reliability and Uptime: Ensure systems are stable, reliable, and available by implementing monitoring and alerting systems. Quickly detect, respond to, and resolve system outages or performance degradation, often through automated systems. Conduct post-incident reviews to identify the root cause of system failures and prevent recurrence. 2. Automation and Infrastructure as Code: Use automation tools to handle repetitive tasks, like deployments, monitoring, and scaling, to reduce manual intervention. Implement and manage infrastructure as code (IaC) using tools like Terraform, Ansible, or Azure ARM templates, ensuring consistency across environments. 3. Performance Tuning and Optimization: Ensure that the infrastructure is scalable and can handle increased loads by anticipating future demands. Continuously optimize system performance, from software to hardware, by analyzing bottlenecks and implementing improvements. 4. Collaboration and Cross-Functional Communication: Work closely with dev teams to ensure reliability and scalability are built into the software from the ground up. Ensure proper documentation and knowledge sharing on system architecture, operations procedures, and incident handling. 5. Security and Compliance: Ensure systems are secure by implementing monitoring, logging, and alerting solutions for any security breaches or vulnerabilities. Ensure that systems comply with relevant regulations and industry standards (e.g., GDPR, ISO, SOC2). Kind Regards, Priyankha M

Posted 3 days ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Summary Position Summary CORE BUSINESS OPERATIONS The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you play a crucial role in building and maintaining cloud infrastructure on Amazon Web Services (AWS). You will also be responsible for the ownership of tasks assigned through SNOW, Dashboard, Order forms etc. The work you will do includes: Build and operate the Cloud infrastructure on AWS Continuously monitor the health and performance of the infrastructure and resolve any issues. Use tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration. Administer EC2 instance operating systems such as Windows and Linux Work with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services. Implement monitoring and logging for Infra and Apps Keep the infrastructure up-to-date with the latest security patches and software versions.
Collaborate with development, operations and Security teams to establish best practices for software development, build, deployment, and infrastructure management Tasks related to IAM, Monitoring, Backup and Vulnerability Remediation Participate in performance testing and capacity planning activities Documentation, Weekly/Bi-Weekly Deck preparation, KB article updates Handover and on-call support during weekends on a rotational basis Qualifications Skills / Project Experience: Must Have: 3 - 6 years of hands-on experience in AWS Cloud, CloudFormation templates, Windows/Linux administration Understanding of 2-tier, 3-tier or multi-tier architecture Experience on IaaS/PaaS/SaaS Understanding of disaster recovery Networking and security expertise Knowledge of PowerShell, Shell and Python Associate/Professional-level certification in AWS solution architecture ITIL Foundation certification Good interpersonal and communication skills Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries Knowledge and experience working with Microsoft Office tools Good to Have: Understanding of container technologies such as Docker, Kubernetes and OpenShift. Understanding of application and other infrastructure monitoring tools Understanding of the end-to-end infrastructure landscape Experience with virtualization platforms Knowledge of Chef, Puppet, Bamboo, Concourse, etc. Knowledge of microservices, data lakes, machine learning, etc. Education: B.E./B. Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university Prior Experience: 3 – 6 years of experience working with AWS, system administration, IaC, etc. Location: Hyderabad/Pune The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions.
Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services Core Business Operations Practice optimizes clients’ business operations and helps them take advantage of new technologies. Drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0 For information on life of an Analyst at CBO visit- https://www.youtube.com/watch?v=CMe0DkmMQHI Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 302308

Posted 3 days ago

Apply

6.0 - 8.0 years

6 - 15 Lacs

Hyderabad, Secunderabad

Work from Office

Naukri logo

Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Knowledge of Terraform, CloudFormation, or other infrastructure automation tools. Experience with Docker, and basic knowledge of Kubernetes. Familiarity with monitoring/logging tools such as CloudWatch, Prometheus, Grafana, ELK.

Posted 3 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

It’s not just about your career or job title… It’s about who you are and the impact you will make on the world. Because whether it’s for each other or our customers, we put People First. When our people come together, we Expand the Possible and continuously look for ways to improve what we create and how we do it. If you are constantly striving to grow, you’re in good company. We are revolutionizing the way the world moves for future generations, and we want someone who is ready to move with us. Who are we? Wabtec Corporation is a leading global provider of equipment, systems, digital solutions, and value-added services for freight and transit rail as well as the mining, marine, and industrial markets. Drawing on nearly four centuries of collective experience across Wabtec, GE Transportation, and Faiveley Transport, the company has grown to become One Wabtec, with unmatched digital expertise, technological innovation, and world-class manufacturing and services, enabling the digital-rail-and-transit ecosystems. Wabtec is focused on performance that drives progress and unlocks our customers’ potential by delivering innovative and lasting transportation solutions that move and improve the world. We are lifelong learners obsessed with making things better to drive exceptional results. Wabtec has approximately 27K employees in facilities throughout the world. Visit our website to learn more!
Engineer – DevOps Location: Bengaluru About us: To strengthen our WITEC team in Bengaluru, we are now looking for – Lead/Engineer – DevOps Role Summary & Essential responsibilities: The DevOps Engineer is responsible for performing CI/CD and automation design/validation activities under the project responsibility of the Technical Project Manager and under the technical responsibility of the software architect. Respect internal processes including coding rules. Write documentation in accordance with the implementation made. Meet the Quality, Cost and Time objectives set by the Technical Project Manager. Qualification / Requirement: Bachelor's or Master's in Engineering in Computer Science (web option), IT, or a related field Abilities: 6 to 10 years of hands-on experience as a DevOps Engineer Profile: Good understanding of Linux systems and networking Good knowledge of CI/CD tools, GitLab Good knowledge of containerization technologies such as Docker Experience with scripting languages such as Bash and Python Hands-on experience setting up CI/CD pipelines and configuring virtual machines Experience with C/C++ build tools like CMake and Conan is a must Experience setting up pipelines in GitLab for build, unit testing and static analysis Experience with infrastructure as code tools like Terraform or Ansible is a plus Experience with monitoring and logging tools such as ELK Stack or Prometheus/Grafana, … Strong problem-solving skills and ability to troubleshoot production issues A passion for continuously learning and staying up-to-date with modern technologies and trends in the DevOps field. Project management and workflow tools like Jira, SPIRA, Teams Planner, Polarion.
Process: SVN, VSS, Git and Bitbucket source control/configuration management tools Development methodology: Agile (Scrum/Kanban) Soft skills: English: good level Autonomous Good interpersonal and communication skills Good synthesis skills Solid team player, able to handle multiple tasks and manage time efficiently. Our Commitment to Embrace Diversity: Wabtec is a global company that invests not just in our products, but also our people by embracing diversity and inclusion. We care about our relationships with our employees and take pride in celebrating the variety of experiences, expertise, and backgrounds that bring us together. At Wabtec, we aspire to create a place where we all belong and where diversity is welcomed and appreciated. To fulfill that commitment, we rely on a culture of leadership, diversity, and inclusion. We aim to employ the world’s brightest minds to help us create a limitless source of ideas and opportunities. We have created a space where everyone is given the opportunity to contribute based on their individual experiences and perspectives and recognize that these differences and diverse perspectives make us better. We believe in hiring talented people of varied backgrounds, experiences, and styles… People like you! Wabtec Corporation is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or expression, or protected Veteran status. If you have a disability or special need that requires accommodation, please let us know. Visit our website to learn more! http://www.WabtecCorp.com

Posted 3 days ago

Apply

5.0 - 10.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Naukri logo

Responsibilities Design and implement cloud-based infrastructure (AWS, Azure, or GCP) Develop and maintain CI/CD pipelines to ensure smooth deployment and delivery processes Manage containerized environments (Docker, Kubernetes) and infrastructure-as-code (Terraform, Ansible) Monitor system health, performance, and security; respond to incidents and implement fixes Collaborate with development, QA, and security teams to streamline workflows and enhance automation Lead DevOps best practices and mentor junior engineers Optimize costs, performance, and scalability of infrastructure Ensure compliance with security standards and best practices Requirements 5+ years of experience in DevOps, SRE, or related roles Strong experience with cloud platforms (AWS, Azure, GCP) Proficiency with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.) Expertise in container orchestration (Kubernetes, Helm) Solid experience with infrastructure-as-code (Terraform, CloudFormation, Ansible) Good knowledge of monitoring/logging tools (Prometheus, Grafana, ELK, Datadog) Strong scripting skills (Bash, Python, or Go)

Posted 3 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Role: DevOps Engineer Duration: 6+ months Location: Bangalore/Hyderabad/Chennai/Noida/Gurgaon/Pune. Hybrid – 3 days onsite. Shift: 3:00 PM IST to 1:00 AM IST. Shift timing coverage for 12x5 support: U.S. EST (8 am to 5 pm EST). Responsibilities Provide DevOps implementation, design and architecture. Bring DevOps best practices to the table. Responsible for supporting implementation projects. Create or contribute to technical project documentation Skills and Knowledge Strong technical knowledge (details below) Focused and driven attitude towards contributing to the creation, enhancement and delivery of complex solutions Ability to work with diverse personalities, both technical and non-technical Ability to work flexible schedules, particularly when software installs are scheduled Proven ability to work independently with limited supervision Excellent organization and time management skills Excellent written and verbal communication skills Ability to resolve escalated issues with a sense of urgency Education and Experience Requirements 8+ years of experience Strong knowledge of and experience with Terraform, with hands-on experience in DevOps/Azure Visual Studio integration, SQL DB integration and Terraform integration Strong knowledge of and experience with pipeline building, Cognito, Lambda, and Node.js

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About Us: People Tech Group is a leading Enterprise Solutions, Digital Transformation, Data Intelligence, and Modern Operation services provider. Founded in 2006 in Redmond, Washington, USA, we have since expanded to India, where we are based in Hyderabad, Bangalore, Pune and Chennai with an overall strength of 1500+ employees. We have a presence across 4 countries: US, Canada, India and Costa Rica. In a recent development, the company was acquired by Quest Global, one of the world's largest engineering solutions providers, with a 20,000+ employee strength, 70+ global delivery service centers, and headquarters in Singapore. Going forward, we are all part of Quest Global. Position: DevOps Engineer Company: People Tech Group Experience: 5 yrs Location: Bengaluru Job Description: Key Responsibilities: Provision and secure cloud infrastructure using Terraform/AWS CloudFormation Fully automate GitLab CI/CD pipelines for application builds, tests, and deployment, integrated with Docker containers and AWS ECS/EKS Build continuous integration workflows with automated security checks, testing, and performance validation Provide a self-service developer portal giving access to system health, deployment status, logs, and documentation for a seamless developer experience Set up AWS CloudWatch dashboards and alarms for real-time monitoring of system health, performance, and availability Centralize logging via CloudWatch Logs for application performance and troubleshooting Maintain complete documentation for all automated systems, infrastructure code, CI/CD pipelines, and monitoring setups Monitoring - Splunk - ability to create dashboards and alerts, integrating with tools like MS Teams.
Required Skills: Master's or Bachelor's degree in Computer Science/IT or equivalent Expertise in shell scripting Familiarity with operating systems - Windows & Linux Experience in Git - version control Ansible - good to have Familiarity with CI/CD pipelines - GitLab Docker, Kubernetes, OpenShift - strong in Kubernetes administration Experience in Infra as Code – Terraform & AWS CloudFormation Familiarity with AWS services like EC2, Lambda, Fargate, VPC, S3, ECS, EKS Nice to have – familiarity with observability and monitoring tools like OpenTelemetry setup, Grafana, ELK stack, Prometheus

Posted 3 days ago

Apply

7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role: Senior Cloud DevOps Engineer Experience: 7-15 years Notice Period: Immediate to 15 days Location: Hyderabad We are seeking a highly skilled GCP DevOps Engineer to join our dynamic team. Job Description Deep GCP Services Mastery: Profound understanding and hands-on experience with core GCP services (Compute Engine, CloudRun, Cloud Storage, VPC, IAM, Cloud SQL, BigQuery, Cloud Operations Suite). Infrastructure as Code (IaC) & Configuration Management: Expertise in Terraform for GCP, and proficiency with tools like Ansible for automating infrastructure provisioning and management. CI/CD Pipeline Design & Automation: Skill in building and managing sophisticated CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI) for applications and infrastructure on GCP. Containerisation & Orchestration: Advanced knowledge of Docker and extensive experience deploying, managing, and scaling applications on CloudRun and/or Google Kubernetes Engine (GKE). API Management & Gateway Proficiency: Experience with API design, security, and lifecycle management, utilizing tools like Google Cloud API Gateway or Apigee for robust API delivery. Advanced Monitoring, Logging & Observability: Expertise in implementing and utilizing comprehensive monitoring solutions (e.g., Google Cloud Operations Suite, Prometheus, Grafana) for proactive issue detection and system insight. DevSecOps & GCP Security Best Practices: Strong ability to integrate security into all stages of the DevOps lifecycle, implement GCP security best practices (IAM, network security, data protection), and ensure compliance. Scripting & Programming for Automation: Proficient in scripting languages (Python, Bash, Go) to automate operational tasks, build custom tools, and manage infrastructure programmatically. GCP Networking Design & Management: In-depth understanding of GCP networking (VPC, Load Balancing, DNS, firewalls) and the ability to design secure and scalable network architectures. 
Application Deployment Strategies & Microservices on GCP: Knowledge of various deployment techniques (blue/green, canary) and experience deploying and managing microservices architectures within the GCP ecosystem. Leadership, Mentorship & Cross-Functional Collaboration: Proven ability to lead and mentor DevOps teams, drive technical vision, and effectively collaborate with development, operations, and security teams. System Architecture, Performance Optimization & Troubleshooting: Strong skills in designing scalable and resilient systems on GCP, identifying and resolving performance bottlenecks, and complex troubleshooting across the stack. Regards, ValueLabs
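The blue/green and canary deployment techniques mentioned above share one core idea: shift traffic to the new version gradually instead of all at once. A minimal Python sketch of a canary schedule; the starting percentage and doubling factor are illustrative choices, not a GCP API:

```python
def canary_steps(start_pct=5, factor=2, max_pct=100):
    """Yield traffic percentages for a progressive canary rollout,
    multiplying the canary's share until it takes all traffic."""
    pct = start_pct
    while pct < max_pct:
        yield pct
        pct = min(pct * factor, max_pct)
    yield max_pct

print(list(canary_steps()))  # → [5, 10, 20, 40, 80, 100]
```

In practice each step would be gated on health metrics (error rates, latency) before promoting further, and a failed gate triggers a rollback to 0%.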

Posted 3 days ago


7.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Designation: Solution Architect
Office Location: Gurgaon

Position Description: As a Solution Architect, you will be responsible for leading the development and delivery of the platforms. This includes owning the entire product lifecycle from solution design through execution and launch, building the right team, and collaborating closely with business and product teams.

Primary Responsibilities:
  • Design end-to-end solutions that meet business requirements and align with the enterprise architecture.
  • Define the architecture blueprint, including integration, data flow, application, and infrastructure components.
  • Evaluate and select appropriate technology stacks, tools, and frameworks.
  • Ensure proposed solutions are scalable, maintainable, and secure.
  • Collaborate with business and technical stakeholders to gather requirements and clarify objectives.
  • Act as a bridge between business problems and technology solutions.
  • Guide development teams during the execution phase to ensure solutions are implemented according to design.
  • Identify and mitigate architectural risks and issues.
  • Ensure compliance with architecture principles, standards, policies, and best practices.
  • Document architectures, designs, and implementation decisions clearly and thoroughly.
  • Identify opportunities for innovation and efficiency within existing and upcoming solutions.
  • Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development.
  • Lead proof-of-concept initiatives to evaluate new technologies.

Functional Responsibilities:
  • Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospectives.
  • Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development.
  • Identify and address issues or conflicts that may impact project delivery or team morale.
  • Experience with Agile project management tools such as Jira and Trello.

Required Skills:
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role.
  • Proficiency with the AWS or GCP cloud platform.
  • Strong implementation knowledge of the JS tech stack (NodeJS, ReactJS).
  • Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases.
  • Experience with key-value stores like Redis, MongoDB, and similar.
  • Preferred knowledge of distributed technologies (Kafka, Spark, Trino, or similar) with proven experience in event-driven data pipelines.
  • Proven experience setting up big data pipelines to handle high-volume transactions and transformations.
  • Experience with BI tools: Looker, PowerBI, Metabase, or similar.
  • Experience with data warehouses like BigQuery, Redshift, or similar.
  • Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation).

Good to Have:
  • Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
  • Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase, or similar) and low-level Python tools like Pandas, NumPy, PyArrow.
  • Experience with data transformation tools like DBT, SQLMesh, or similar.
  • Experience with data orchestration tools like Apache Airflow, Kestra, or similar.

Work Environment Details:
About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and by reducing digital ad fraud. While Affle's Consumer platform is used by online and offline companies for measurable mobile advertising, its Enterprise platform helps offline companies go online through platform-based app development, enablement of O2O commerce, and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter for Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details: www.affle.com

About BU: Ultra - Access deals, coupons, and walled-garden-based user acquisition on a single platform to offer bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/

Posted 3 days ago


6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
AJA Consulting Services LLP, founded by Phaniraj Jaligama, is committed to empowering youth and creating employment opportunities in both IT and non-IT sectors. With a focus on skill development, AJA provides exceptional resource augmentation, staffing solutions, interns pool management, and corporate campus engagements for a diverse range of clients. Through its flagship CODING TUTOR platform, AJA trains fresh graduates and IT job seekers in full-stack development, enabling them to transition seamlessly into industry roles. Based in Hyderabad, AJA operates from a state-of-the-art facility in Q City.

Role Description
We're hiring a Senior DevOps/Site Reliability Engineer with 5–6 years of hands-on experience in managing cloud infrastructure, CI/CD pipelines, and Kubernetes environments. You'll also mentor junior engineers and lead real-time DevOps initiatives.

🔧 What You'll Do
  • Build and manage scalable, fault-tolerant infrastructure (AWS/GCP/Azure)
  • Automate CI/CD with Jenkins, GitHub Actions, or CircleCI
  • Work with IaC tools: Terraform, Ansible, CloudFormation
  • Set up observability with Prometheus, Grafana, Datadog
  • Mentor engineers on best practices, tooling, and automation

✅ What You Bring
  • 5–6 years in DevOps/SRE roles
  • Strong scripting (Bash/Python/Go) and automation skills
  • Kubernetes & Docker expertise
  • Experience in production monitoring, alerting, and RCA
  • Excellent communication and team mentorship skills

💡 Bonus: GitOps, Service Mesh, ELK/EFK, Vault

📩 Apply now by emailing your resume to a.malla@ajacs.in

Posted 3 days ago


7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are looking for a Senior DevOps Engineer to join our Life Sciences & Healthcare DevOps team. This is an exciting opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you love coding in Python or any scripting language, have experience with Linux, and ideally have worked in a cloud environment, we'd love to hear from you! We specialize in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have experience in these areas, we'd be eager to connect with you.

About You – Experience, Education, Skills, and Accomplishments
  • At least 7+ years of professional software development experience, and 5+ years as a DevOps Engineer or in a similar role, with experience across various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Spinnaker, Docker, Packer, Ansible, CloudFormation, Terraform, or similar CI/CD orchestration tools.
  • At least 3+ years of AWS experience managing resources in some subset of the following services: S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue, and Lambda.
  • 5+ years of experience with Bash/Python scripting.
  • Broad knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols.
  • Willingness to be on-call as needed for critical production issues.
  • Good understanding of SDLC, patching, releases, and basic systems administration activities.

It would be great if you also had:
  • AWS Solutions Architect certifications.
  • Python programming experience.

What will you be doing in this role?
  • Design, develop, and maintain the product's cloud infrastructure architecture, including microservices, as well as infrastructure-as-code and automated scripts for building or deploying workloads in various environments through CI/CD pipelines.
  • Collaborate with the rest of the Technology engineering team, the cloud operations team, and application teams to provide end-to-end infrastructure setup.
  • Design and deploy secure, resilient, and scalable Infrastructure as Code per our developer requirements while upholding the InfoSec and Infrastructure guardrails through code.
  • Keep up with industry best practices, trends, and standards; identify automation opportunities; and design and develop automation solutions that improve operations, efficiency, security, and visibility.
  • Take ownership of and accountability for the performance, availability, security, and reliability of the product(s) running across public cloud and multiple regions worldwide.
  • Document solutions and maintain technical specifications.

Product you will be developing
The products rely on container orchestration (AWS ECS, EKS), Jenkins, various AWS services (such as OpenSearch, S3, IAM, EC2, RDS, VPC, Route53, Lambda, CloudFront), Databricks, Datadog, and Terraform, and you will be working to support the development team that builds them.

About the Team
The Life Sciences & HealthCare Content DevOps team focuses mainly on DevOps operations for production infrastructure related to Life Sciences & HealthCare Content products. Our team consists of five members and reports to the DevOps Manager. The team provides DevOps support for more than 40 different application products internal to Clarivate, which are the source for customer-facing products. The team is also responsible for the change process on the production environment, incident management and monitoring, and customer-raised and internal user service requests.

Hours of Work
Shift timing: 12 PM to 9 PM. Must provide on-call support during non-business hours per week, based on team bandwidth.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.

Posted 3 days ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description: Senior Data Scientist

Role Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role.

Responsibilities:
  • Contribute to the design and implementation of state-of-the-art AI solutions.
  • Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI.
  • Collaborate with stakeholders to identify business opportunities and define AI project goals.
  • Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges.
  • Utilize generative AI techniques, such as LLMs and agentic frameworks, to develop innovative solutions for enterprise industry use cases.
  • Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
  • Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
  • Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
  • Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs.
  • Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
  • Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
  • Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
  • Ensure compliance with data privacy, security, and ethical considerations in AI applications.
  • Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus.
  • Minimum 4 years of experience in Data Science and Machine Learning.
  • In-depth knowledge of machine learning, deep learning, and generative AI techniques.
  • Proficiency in programming languages such as Python or R, and frameworks like TensorFlow or PyTorch.
  • Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
  • Familiarity with computer vision techniques for image recognition, object detection, or image generation.
  • Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
  • Expertise in data engineering, including data curation, cleaning, and preprocessing.
  • Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
  • Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
  • Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
  • Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
  • Understanding of data privacy, security, and ethical considerations in AI applications.
  • Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
  • Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models.
  • Utilize optimization tools and techniques, including MIP (Mixed Integer Programming).
  • Deep knowledge of classical AI/ML (regression, classification, time series, clustering).
  • Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models.
  • Implement CI/CD pipelines for streamlined model deployment and scaling processes.
  • Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
  • Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation.
  • Implement monitoring and logging tools to ensure AI model performance and reliability.
  • Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment.
  • Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 days ago


Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
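As a quick illustration of the infrastructure-as-code idea, here is a minimal, hypothetical Terraform configuration; the provider choice, region, and bucket name are illustrative assumptions, not taken from any listing above:

```hcl
# Pin the provider plugin that `terraform init` downloads.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # Mumbai; any region would do for this sketch
}

# Desired state: a single S3 bucket. `terraform plan` previews the
# change and `terraform apply` creates it; re-running apply on an
# unchanged configuration reports no changes (idempotence).
resource "aws_s3_bucket" "demo" {
  bucket = "example-terraform-demo-bucket" # must be globally unique
}
```

The plan-then-apply workflow is what distinguishes Terraform's declarative model from imperative provisioning scripts: you describe the end state, and the tool computes the changes needed to reach it.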

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies with experience. Entry-level professionals can expect around INR 5-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
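Several of the questions above (handling sensitive data, remote state storage, backends) can be illustrated with a short, hypothetical configuration; the bucket and table names below are assumptions for the sketch, not defaults:

```hcl
# Remote state: keep terraform.tfstate in S3 rather than on local
# disk, with a DynamoDB table providing state locking for team use.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state" # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"         # enables state locking
  }
}

# Sensitive data: marking a variable `sensitive` redacts its value
# from plan/apply output. Supply the value via the TF_VAR_db_password
# environment variable or a secrets manager, never hard-coded.
variable "db_password" {
  type      = string
  sensitive = true
}
```

Note that the state file itself still records sensitive values in plain text, which is one reason remote backends with restricted access are preferred over committing state to version control.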

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
