
7494 Terraform Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Source: Indeed

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 26-Nov-2025

About the role

This role is ideal for a proactive go-getter who is eager to drive new technology adoption within the organization. Familiarity with current monitoring and logging tools like New Relic and Splunk is essential. This role will work closely with Infrastructure as Code (IaC) tooling like Terraform and requires a strong understanding of OpenTelemetry standards. The Observability Engineer is a critical role in our organization, dedicated to ensuring the robustness, performance, and scalability of our infrastructure and applications through superior monitoring and observability practices.

What is in it for you

At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.

Salary - Your fixed pay is the guaranteed pay as per your contract of employment.

Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.

Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.

Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws. 
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.

Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.

Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.

Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for

- Lead the design and implementation of observability solutions that provide deep insights into application performance, system health, and user experience.
- Establish and advocate for observability best practices across engineering teams.
- Work closely with the infrastructure teams to automate and optimize infrastructure provisioning and scaling using IaC tools like Terraform. Ensure infrastructure code is tested, reliable, and efficient.
- Champion the adoption of OpenTelemetry standards to collect, process, and export telemetry data.
- Utilize and integrate monitoring tools like Dynatrace and Splunk to provide thorough insights and analytics.
- Drive the evaluation and adoption of new tools and technologies to keep the organization at the forefront of observability and monitoring practices.
- Collaborate with various engineering teams to ensure smooth adoption of and transition to new technologies.
- Analyze existing monitoring and observability practices, identifying areas for improvement or optimization.

You will need

- Foster a culture of continuous learning and improvement within the observability team and across the organization.
- Provide leadership, guidance, and mentoring to the observability team. 
Foster a collaborative and inclusive environment that encourages innovation and growth.

About us

Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Technology

Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations: identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly, without the need to overhaul our technology, thanks to the capabilities we have built. 
At Tesco, inclusion is at the heart of everything we do. We believe in treating everyone fairly and with respect, valuing individuality to create a true sense of belonging. It’s deeply embedded in our values — we treat people how they want to be treated. Our goal is to ensure all colleagues feel they can be themselves at work and are supported to thrive. Across the Tesco group, we are building an inclusive workplace that celebrates the diverse cultures, personalities, and preferences of our colleagues — who, in turn, reflect the communities we serve and drive our success. At Tesco India, we are proud to be a Disability Confident Committed Employer, reflecting our dedication to creating a supportive and inclusive environment for individuals with disabilities. We offer equal opportunities to all candidates and encourage applicants with disabilities to apply. Our fully accessible recruitment process includes reasonable adjustments during interviews - just let us know what you need. We are here to ensure everyone has the chance to succeed. We believe in creating a work environment where you can thrive both professionally and personally. Our hybrid model offers flexibility - spend 60% of your week collaborating in person at our offices or local sites, and the rest working remotely. We understand that everyone’s journey is different, whether you are starting your career, exploring passions, or navigating life changes. Flexibility is core to our culture, and we’re here to support you. Feel free to talk to us during your application process about any support or adjustments you may need.
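The OpenTelemetry-style instrumentation this role centres on can be sketched without any vendor SDK. The fragment below is illustrative only: all names are invented, and a real deployment would use the OpenTelemetry SDK exporting to a backend such as Splunk or New Relic. It shows the core idea of recording spans (timing and status) around units of work:

```python
import time
from contextlib import contextmanager

# Minimal stand-in for a telemetry pipeline: real systems would export
# spans via the OpenTelemetry SDK to a backend such as Splunk or New Relic.
RECORDED_SPANS = []

@contextmanager
def span(name: str):
    """Record the duration and outcome of a unit of work."""
    start = time.monotonic()
    status = "ok"
    try:
        yield
    except Exception:
        status = "error"
        raise
    finally:
        RECORDED_SPANS.append({
            "name": name,
            "duration_s": time.monotonic() - start,
            "status": status,
        })

with span("load-config"):
    time.sleep(0.01)  # stand-in for real work

print(RECORDED_SPANS[0]["name"], RECORDED_SPANS[0]["status"])  # load-config ok
```

The same wrap-and-record shape is what the real SDKs automate across services, which is why the posting pairs OpenTelemetry knowledge with backends like Splunk that aggregate the recorded data.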

Posted 2 days ago

Apply

0.0 - 5.0 years

0 Lacs

Delhi, Delhi

Remote

Source: Indeed

Location: Remote
Experience: 3-5 years

About the Job

This is a full-time role for a Senior Backend Developer (SR1) specializing in Node.js. We are seeking an experienced developer with deep JavaScript/TypeScript expertise to lead technical initiatives, design robust architectures, and mentor team members. In this role, you'll provide technical leadership, implement complex features, and drive engineering excellence across projects. A strong emphasis is placed on candidates who not only understand but actively implement best practices in testing and object-oriented design to build highly reliable and maintainable systems. The job location is flexible, with preference for the Delhi NCR region.

Responsibilities

- Design and plan efficient solutions for complex problems, ensuring scalability and security and applying principles of robust software design and testability.
- Independently lead teams or initiatives, ensuring alignment with project goals.
- Prioritize and maintain quality standards, focusing on performance, security, and reliability, including advocating for and ensuring strong unit and functional test coverage.
- Identify and resolve complex issues, ensuring smooth project progress.
- Facilitate discussions to align team members on best practices and standards.
- Promote continuous improvement through effective feedback and coaching.
- Guide and mentor team members, providing support for their professional growth.
- Contribute to talent acquisition and optimize team processes for better collaboration.
- Lead complex project components from design to implementation.
- Provide technical project guidance and develop risk mitigation strategies.
- Drive technical best practices and implement advanced performance optimizations.
- Design scalable, efficient architectural solutions for backend systems.
- Propose innovative technological solutions aligned with business strategies.
- Develop internal training materials and knowledge-sharing resources. 
Requirements

Technical Skills

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-5 years of professional experience in Node.js backend development.
- Proven experience in designing and implementing comprehensive unit and functional tests for backend applications, utilizing frameworks like Jest, Mocha, Supertest, or equivalent.
- Solid understanding and practical application of object-oriented design patterns (e.g., Singleton, Factory, Strategy, Observer, Decorator) in building scalable, flexible, and maintainable Node.js applications.
- Expert knowledge of advanced debugging techniques (Node Inspector, async hooks, memory leak detection).
- Mastery of advanced TypeScript patterns, including utility types and mapped types.
- Deep understanding of API security, including JWT, OAuth, rate limiting, and CORS implementation.
- Extensive experience with caching strategies using Redis/Memcached.
- Proficiency with HTTP caching mechanisms, including Cache-Control headers and ETags.
- Strong knowledge of security protocols, including HTTPS, TLS/SSL, and data encryption methods (bcrypt, Argon2).
- Experience with static analysis tools for code quality and security.
- Solid understanding of GraphQL fundamentals, including queries, mutations, and resolvers.
- Experience with message brokers like RabbitMQ, Kafka, or NATS for distributed systems.
- Proficiency with cloud providers (AWS, GCP, Azure) and their core services.
- Experience with serverless frameworks, including AWS Lambda, Google Cloud Functions, or Azure Functions.
- Knowledge of cloud storage and database solutions like DynamoDB, S3, or Firebase.
- Expertise in logging and monitoring security incidents and system performance.

Soft Skills

- Excellent cross-functional communication skills, with the ability to translate complex technical concepts.
- Technical leadership in discussions and decision-making processes.
- Effective knowledge-transfer abilities through documentation and mentoring. 
- Strong mentorship capabilities for junior and mid-level team members.
- Understanding of broader business strategy and the ability to align technical solutions accordingly.
- Ability to lead complex project components and provide technical guidance.
- Strong problem-solving skills and a systematic approach to troubleshooting.
- Effective risk assessment and mitigation planning.
- Collaborative approach to working with product, design, and frontend teams.
- Proactive communication style with stakeholders and team members.
- Ability to balance technical debt, feature development, and maintenance needs.

Additional Preferred Qualifications

- Experience with load balancing and horizontal/vertical scaling strategies.
- Knowledge of database optimization techniques, including connection pooling, replication, and sharding.
- Proficiency with Node.js performance tuning, including streams and async optimizations.
- Knowledge of advanced access control systems such as attribute-based access control (ABAC) and OpenID Connect.
- Experience with CDN configuration and server-side caching strategies.
- Knowledge of event-driven architecture patterns and Command Query Responsibility Segregation (CQRS).
- Experience with load testing tools like k6 or Artillery.
- Familiarity with Infrastructure as Code using Terraform or Pulumi.
- Contributions to open-source projects or advanced technical certifications.
- Experience leading major feature implementations or system migrations.
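The object-oriented design patterns the posting names (Strategy, Factory, Observer, and so on) are language-agnostic. As a rough illustration, here is a minimal Strategy example, sketched in Python for brevity even though the role itself is Node.js/TypeScript; all class names are invented for the example:

```python
from abc import ABC, abstractmethod

class RetryStrategy(ABC):
    """Strategy interface: how long to wait before retry attempt n."""
    @abstractmethod
    def delay(self, attempt: int) -> float: ...

class FixedBackoff(RetryStrategy):
    def __init__(self, seconds: float):
        self.seconds = seconds
    def delay(self, attempt: int) -> float:
        return self.seconds

class ExponentialBackoff(RetryStrategy):
    def __init__(self, base: float = 1.0):
        self.base = base
    def delay(self, attempt: int) -> float:
        return self.base * (2 ** attempt)

class HttpClient:
    """Context object: retry behaviour is swappable without changing the client."""
    def __init__(self, strategy: RetryStrategy):
        self.strategy = strategy
    def schedule(self, attempts: int) -> list:
        return [self.strategy.delay(n) for n in range(attempts)]

print(HttpClient(ExponentialBackoff(base=0.5)).schedule(3))  # [0.5, 1.0, 2.0]
```

The point the posting is making with "scalable, flexible, and maintainable": the client never branches on which backoff is in use, so new strategies can be added without touching existing code.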

Posted 2 days ago

Apply

0.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

Job ID: R-229455 | Date posted: 06/17/2025
Job Title: Consultant - Platform Engineer
Career Level: C3

Introduction to role

AstraZeneca is seeking an IT Integration Engineer to join our R&D IT Team. This role involves managing and maintaining our GxP-compliant product integrations, which are part of our Clinical Development Platforms and used across all therapeutic areas. As a member of our OCD Integration team, you will collaborate with Product Leads, DevOps Leads, and technical engineers to drive innovation and efficiency.

Accountabilities

Key Responsibilities:

- Build integration pipelines in alignment with standard architectural patterns, creating reusable artefacts wherever possible.
- Build user guides and best practices for tool adoption and usage.
- Utilize vendor-based products to build optimal solutions.
- Provide full-lifecycle tooling guidance and reusable artefacts to product teams.
- Collaborate with the Integration Lead and vendor teams to build and manage integrations.
- Participate in continuous improvement discussions with business and IT stakeholders as part of the scrum team.
- Resolve day-to-day BAU tickets.

Essential Skills/Experience

- 4-6 years of hands-on experience with SnapLogic and its associated Snap Packs.
- 2+ years of experience with API terminology and API integration, preferably MuleSoft.
- Excellent SQL knowledge of relational databases, including RDS, Redshift, Postgres, DB2, and Microsoft SQL Server, as well as DynamoDB.
- Good understanding of ETL pipeline design.
- Adherence to the IT Service Delivery Framework for AD activities.
- Ability to organize and prioritize work, meet deadlines, and work independently.
- Strong problem-solving skills.
- Experience with process tools (Jira, Confluence).
- Ability to work independently and collaborate with people across the globe with diverse cultures and backgrounds.
- Experience working in agile teams using methodologies such as Scrum, Kanban, and SAFe. 
- Experience integrating CI/CD processes into the existing Change & Configuration Management scope (i.e., ServiceNow and Jira).

Desirable Skills/Experience

- ITIL practices (change management, incident and problem management, and others).
- Experience in GxP- or SOx-regulated environments.
- Proficiency in developing, deploying, and debugging cloud-based applications using AWS.
- Exposure to AWS cloud engineering and CI/CD tools (such as Ansible, GitHub Actions, Jenkins).
- Exposure to Infrastructure as Code (CloudFormation, Terraform).
- Good understanding of AWS networking and security configuration.
- Passion for learning, innovating, and delivering valuable software to people.
- Experience with dynamic dashboards (e.g., Power BI).
- Experience in Python programming.
- Experience with SnapLogic and MuleSoft integration platform administration (nice to have).

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we leverage technology to impact patients and ultimately save lives. We are a purpose-led global organization that pushes the boundaries of science to discover and develop life-changing medicines. Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. Ready to make a difference? Apply now!

AstraZeneca embraces diversity and equality of opportunity. 
We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Consultant - Platform Engineer
Posted date: Jun. 17, 2025 | Contract type: Full time | Job ID: R-229455

Why choose AstraZeneca India?

Help push the boundaries of science to deliver life-changing medicines to patients. After 45 years in India, we're continuing to secure a future where everyone can access affordable, sustainable, innovative healthcare. The part you play in our business will be challenging, yet rewarding, requiring you to use your resilient, collaborative and diplomatic skillsets to make connections. The majority of your work will be field based, and will require you to be highly organised, planning your monthly schedule, attending meetings and calls, as well as writing up reports.

Who do we look for?

Calling all tech innovators, ownership takers, challenge seekers and proactive collaborators. At AstraZeneca, breakthroughs born in the lab become transformative medicine for the world's most complex diseases. We empower people like you to push the boundaries of science, challenge convention, and unleash your entrepreneurial spirit. You'll embrace differences and take bold actions to drive the change needed to meet global healthcare and sustainability challenges. Here, diverse minds and bold disruptors can meaningfully impact the future of healthcare using cutting-edge technology. 
Whether you join us in Bengaluru or Chennai, you can make a tangible impact within a global biopharmaceutical company that invests in your future. Join a talented global team that's powering AstraZeneca to better serve patients every day.

Success Profile

Ready to make an impact in your career? If you're passionate, growth-orientated and a true team player, we'll help you succeed. Here are some of the skills and capabilities we look for.

Tech innovators - Make a greater impact through our digitally enabled enterprise. Use your skills in data and technology to transform and optimise our operations, helping us deliver meaningful work that changes lives.

Ownership takers - If you're a self-aware self-starter who craves autonomy, AstraZeneca provides the perfect environment to take ownership and grow. Here, you'll feel empowered to lead and reach excellence at every level, with unrivalled support when you need it.

Challenge seekers - Adapting and advancing our progress means constantly challenging the status quo. In this dynamic environment where everything we do has urgency and focus, you'll have the ability to show up, speak up and confidently take smart risks.

Proactive collaborators - Your unique perspectives make our ambitions and capabilities possible. Our culture of sharing ideas, learning and improving together helps us consistently set the bar higher. As a proactive collaborator, you'll seek out ways to bring people together to achieve their best.

What we offer

We're driven by our shared values of serving people, society and the planet. Our people make this possible, which is why we prioritise diversity, safety, empowerment and collaboration. Discover what a career at AstraZeneca could mean for you.

Lifelong learning - Our development opportunities are second to none. You'll have the chance to grow your abilities, skills and knowledge constantly as you accelerate your career. From leadership projects and constructive coaching to overseas talent exchanges and global collaboration programmes, you'll never stand still.

Autonomy and reward - Experience the power of shaping your career how you want to. We are a high-performing learning organisation with autonomy over how we learn. Make big decisions, learn from your mistakes and continue growing, with performance-based rewards as part of the package.

Health and wellbeing - An energised work environment is only possible when our people have a healthy work-life balance and are supported for their individual needs. That's why we have a dedicated team to ensure your physical, financial and psychological wellbeing is a top priority.

Inclusion and diversity - Diversity and inclusion are embedded in everything we do. We're at our best and most creative when drawing on our different views, experiences and strengths. That's why we're committed to creating a workplace where everyone can thrive in a culture of respect, collaboration and innovation.
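The ETL pipeline design listed under essential skills above can be reduced to a small sketch. This is not SnapLogic code, just an illustrative extract-transform-load shape over in-memory records with invented field names, showing why each stage is kept separate and independently testable:

```python
# Minimal extract-transform-load sketch over in-memory data; a real pipeline
# would use an iPaaS such as SnapLogic or a database, but the staged shape
# (extract -> transform -> load, each independently testable) is the same.

def extract():
    # Stand-in for reading from a source system (API, RDS table, flat file).
    return [
        {"id": 1, "name": " Alice ", "active": "Y"},
        {"id": 2, "name": "Bob", "active": "N"},
        {"id": 3, "name": " Carol", "active": "Y"},
    ]

def transform(rows):
    # Normalise fields and filter to active records only.
    return [
        {"id": r["id"], "name": r["name"].strip()}
        for r in rows
        if r["active"] == "Y"
    ]

def load(rows, sink):
    # Stand-in for writing to a target (warehouse table, queue).
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["name"])  # 2 Alice
```

Keeping the three stages as separate functions is what lets each be unit-tested and swapped (a new source or sink) without touching the transformation logic, which is the design property integration platforms package up visually.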

Posted 2 days ago

Apply

1.0 - 3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Career Family: TechOps - CloudOps
Role Type: Cloud Operations Engineer - AWS and Azure

The opportunity

We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments.

Your Key Responsibilities

- Assist in resolving infrastructure and DevOps-related incidents and service requests.
- Support CI/CD pipeline operations and automation workflows.
- Implement infrastructure as code using Terraform.
- Monitor platform health using native tools like AWS CloudWatch and Azure Monitor.
- Collaborate with CloudOps and DevOps teams to address deployment or configuration issues.
- Maintain and update runbooks, SOPs, and automation scripts as needed.

Skills and Attributes for Success

- Working knowledge of AWS and Azure core services.
- Experience with Terraform; exposure to CloudFormation or ARM templates is a plus.
- Familiarity with Docker, Kubernetes (EKS/AKS), and Helm.
- Basic scripting in Bash; knowledge of Python is a plus.
- Understanding of ITSM tools such as ServiceNow.
- Knowledge of IAM, security groups, VPC/VNet, and basic networking.
- Strong troubleshooting and documentation skills.

To qualify for the role, you must have

- 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. 
- Hands-on experience supporting cloud platforms like AWS and/or Azure.
- Familiarity with infrastructure automation, CI/CD pipelines, and container platforms.
- Relevant cloud certification (AWS/Azure) preferred.
- Willingness to work in a 24x7 rotational shift-based support environment.
- No location constraints.

Technologies and Tools

Must haves:

- Cloud platforms: AWS, Azure
- Infrastructure as Code: Terraform (hands-on)
- CI/CD: basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline
- Containerization: exposure to Kubernetes (EKS/AKS), Docker
- Monitoring: AWS CloudWatch, Azure Monitor
- Scripting: Bash
- Incident management: familiarity with ServiceNow or a similar ITSM tool

Good to have:

- Templates: CloudFormation, ARM templates
- Scripting: Python
- Security: IAM policies, RBAC
- Observability: Datadog, Splunk, OpenTelemetry
- Networking: VPC/VNet basics, load balancers
- Certification: AWS/Azure (associate-level preferred)

What We Look For

- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. 
Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.

Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.

Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.

Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
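As a rough illustration of the platform-health monitoring described in this role, the logic behind a CloudWatch- or Azure Monitor-style alarm (alert when a metric breaches a threshold for N consecutive datapoints) can be sketched as follows; the function name, sample values, and thresholds are all invented:

```python
# Illustrative sketch of threshold alerting as done by monitors like
# AWS CloudWatch or Azure Monitor: evaluate a metric series against a
# threshold over consecutive datapoints.

def breaches_alarm(datapoints, threshold, periods):
    """True if the metric exceeds `threshold` for `periods` consecutive
    datapoints, mirroring a CloudWatch-style 'datapoints to alarm' rule."""
    run = 0
    for value in datapoints:
        run = run + 1 if value > threshold else 0
        if run >= periods:
            return True
    return False

# A single 91.5% spike would not page anyone; two consecutive breaches do,
# which is how such rules suppress transient noise.
cpu = [45.0, 72.0, 91.5, 93.2, 88.7, 60.1]
print(breaches_alarm(cpu, threshold=85.0, periods=2))  # True
```

Requiring consecutive breaches rather than a single datapoint is the standard way these tools trade a little detection latency for far fewer false-positive pages.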

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications:

- Bachelor's degree in Computer Science or equivalent practical experience.
- Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery.
- Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments.
- Experience in scripting with Terraform and in Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering.

Preferred qualifications:

- Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar.
- Experience with customer-facing migration, including service discovery, assessment, planning, execution, and operations.
- Experience with IT security practices like identity and access management, data protection, encryption, and certificate and key management.
- Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors.
- Experience in building prototypes or applications.
- Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems.

About The Job

The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. 
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud-based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 days ago

Apply


1.0 - 3.0 years

0 Lacs

Kanayannur, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - TechOps - CloudOps Role Type - Cloud Operation Engineer - AWS and Azure The opportunity We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments. Your Key Responsibilities Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed. Skills And Attributes For Success Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills. To qualify for the role, you must have 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. 
Hands-on experience in supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints Technologies and Tools Must haves Cloud Platforms: AWS, Azure Infrastructure as Code: Terraform (hands-on) CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline Containerization: Exposure to Kubernetes (EKS/AKS), Docker Monitoring: AWS CloudWatch, Azure Monitor Scripting: Bash Incident Management: Familiarity with ServiceNow or similar ITSM tool Good to have Templates: CloudFormation, ARM templates Scripting: Python Security: IAM Policies, RBAC Observability: Datadog, Splunk, OpenTelemetry Networking: VPC/VNet basics, load balancers Certification: AWS/Azure (Associate-level preferred) What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. 
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 days ago

Apply


4.0 years

0 Lacs

Greater Kolkata Area

On-site


Job Title : Google-Certified Engineer Location : Open Location Type : Full-time Experience Level : Senior-Level Notice Period : Maximum 30 Days Job Summary We are looking for a Google-Certified Engineer with expertise in Google Cloud technologies, certified in one or more professional-level Google Cloud certifications. The ideal candidate will be responsible for designing, implementing, and optimizing cloud-based solutions across multiple domains, including architecture, security, DevOps, data engineering, machine learning, and networking. Key Responsibilities Design and implement scalable, secure, and high-performance cloud solutions on Google Cloud Platform (GCP). Work on cloud infrastructure, automation, and DevOps best practices for CI/CD pipelines. Develop and optimize cloud databases, data pipelines, and AI/ML models. Ensure network security, compliance, and risk mitigation across cloud deployments. Architect and deploy highly available cloud networks and security frameworks. Collaborate with cross-functional teams for cloud migrations, automation, and AI/ML implementations. Troubleshoot and optimize cloud applications for cost efficiency, security, and performance. Required Certifications (Any of the following Google Cloud Certifications): Professional Cloud Architect Professional Cloud Database Engineer Professional Cloud Developer Professional Cloud DevOps Engineer Professional Cloud Network Engineer Professional Cloud Security Engineer Professional Data Engineer Professional Machine Learning Engineer SecOps Tech Credential Maps Technical Credential Certification IDs: CERT-P-CA CERT-P-CDBE CERT-P-CD CERT-P-CNE CERT-P-CSE CERT-P-DE CERT-P-CDE CERT-P-MLE CERT-T-STC CERT-T-MDV Required Skills & Experience 4+ years of experience in Google Cloud Platform (GCP). Strong hands-on experience with GCP services like Compute Engine, Kubernetes Engine, BigQuery, Cloud SQL, Cloud Spanner, IAM, Cloud Run, Cloud Functions, etc. 
Expertise in Terraform, Kubernetes, Docker, CI/CD, and Infrastructure as Code (IaC). Proficiency in Python, Java, or Go for cloud-based development. Strong understanding of networking, cloud security, and IAM policies. Experience in AI/ML, data engineering, or cloud automation is a plus. Good knowledge of Google Cloud best practices, cost optimization, and performance tuning. Preferred Qualifications Experience in financial services, fintech, or regulated industries is a plus. Hands-on expertise in logging, monitoring, and security compliance frameworks. Excellent problem-solving skills and ability to work in cross-functional teams. (ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


What makes this role special Join a green-field Enterprise solutions project that spans cloud-infra, data pipelines, QA automation, BI dashboards and business process analysis. Spend your first year rotating through four pods, discovering where you shine, then lock into the stream you love (DevOps, Data Engineering, QA, BI, or Business Analysis). Work side-by-side with senior architects and PMs; demo every Friday; leave with production-grade experience most freshers wait years to gain. Rotation roadmap (three months each) DevOps Starter – write Terraform variables, tweak Helm values, add a GitHub Action that auto-lints PRs. Data Wrangler – build a NiFi flow (CSV → S3 Parquet), add an Airflow DAG, validate schemas with Great Expectations. QA Automation – write PyTest cases for the WhatsApp bot, create a k6 load script, plug Allure reports into CI. BI / Business Analysis – design a Superset dataset & dashboard, document KPIs, shadow the PM to craft a user story and UAT sheet. Day-to-day you will Pick tickets from your pod’s board and push clean pull-requests or dashboard changes. Pair with mentors, record lessons in the wiki, and improve run-books as you go. Demo your work (max 15 min) in our hybrid Friday huddle. Must-have spark Basic coding in Python or JavaScript and Git fundamentals (clone → branch → PR). Comfortable with SQL JOINs & GROUP BY and spreadsheets for quick analysis. Curious mindset, clear written English, happy to ask “why?” and own deadlines. Bonus points A hobby Docker or AWS free-tier project. A Telegram/WhatsApp bot or hackathon win you can show. Contributions to open-source or a college IoT demo. What success looks like Ship at least twelve merged PRs/dashboards in your first quarter. Automate one manual chore the seniors used to dread. By month twelve you can independently take a user story from definition → code or spec → test → demo. Growth path Junior ➜ Associate II ➜ Senior (lead a pod); pay and AWS certifications climb with you. 
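The Data Wrangler rotation above includes validating schemas before CSV data lands in S3. As a hedged illustration of that kind of check (the column names and the numeric rule below are invented for this sketch, not taken from the posting), a minimal pure-Python validator might look like:

```python
import csv
import io

# Hypothetical schema for the rotation's CSV -> S3 Parquet flow.
EXPECTED_COLUMNS = ["order_id", "amount", "created_at"]

def validate_csv(text: str) -> list:
    """Return a list of error messages; an empty list means the file passes."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        return ["unexpected header: %r" % (reader.fieldnames,)]
    errors = []
    for lineno, row in enumerate(reader, start=2):  # data rows start at line 2
        try:
            float(row["amount"])  # amount must parse as a number
        except ValueError:
            errors.append("line %d: amount %r is not numeric" % (lineno, row["amount"]))
    return errors
```

In practice the rotation would express the same expectations declaratively in Great Expectations, but the idea is the same: fail fast on bad headers and bad values before loading anything downstream.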
How to apply Fork github.com/company/erpnext-starter, fix any “good-first-issue”, open a PR. Email your resume, PR link, and a 150-word story about the coolest thing you’ve built. Short-listed candidates get a 30-min Zoom chat (no riddles) and a 24-hr mini-task aligned to your preferred first rotation. We hire attitude over pedigree—show you learn fast, document clearly, and love building, and you’re in.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : AWS Operations Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Assist in the documentation of application processes and workflows. - Engage in continuous learning to stay updated with the latest technologies and methodologies. - Quickly identify, troubleshoot, and fix failures to minimize downtime. - To ensure the SLAs and OLAs are met within the timelines such that operation excellence is met. Professional & Technical Skills: - Must To Have Skills: Proficiency in AWS Operations. - Strong understanding of cloud architecture and services. - Experience with application development frameworks and tools. - Familiarity with DevOps practices and CI/CD pipelines. - Ability to troubleshoot and resolve application issues efficiently. - Strong understanding of cloud networking concepts including VPC design, subnets, routing, security groups, and implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB). 
- Practical experience in setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, ELK stack for proactive system monitoring and alerting. - Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS. - Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments. - Good problem-solving skills - The ability to quickly identify, analyze, and resolve issues is vital. - Effective Communication - Strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes. - Time Management - Efficiently managing time and prioritizing tasks is vital in operations support. - The candidate should have minimum 3 years of experience in AWS Operations. Additional Information: - This position is based at our Hyderabad office. - A 15 years full time education is required.
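The VPC design skills this posting lists (subnets, routing, security groups) usually start with carving an address range into per-AZ subnets. The sketch below is only an illustration under assumed inputs: the 10.0.0.0/16 range, the /20 subnet size, and the availability-zone names are hypothetical, not from the job description.

```python
import ipaddress

def plan_subnets(vpc_cidr="10.0.0.0/16",
                 azs=("ap-south-1a", "ap-south-1b", "ap-south-1c")):
    """Split a VPC range into one public and one private /20 subnet per AZ."""
    blocks = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=20))
    plan = {}
    for i, az in enumerate(azs):
        plan[az] = {
            "public": str(blocks[2 * i]),       # even blocks become public subnets
            "private": str(blocks[2 * i + 1]),  # odd blocks become private subnets
        }
    return plan
```

In a reusable Terraform module of the kind the posting mentions, the same split would typically be computed with `cidrsubnet()` and fed to `aws_subnet` resources.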

Posted 2 days ago

Apply

10.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


The DevOps Engineer III will play a crucial role in enhancing our CI/CD pipeline, ensuring seamless integration and deployment of applications, and maintaining robust infrastructure. The DevOps Engineer III will work closely with the Development team and our Infrastructure team to develop our CI/CD stack and implement monitoring and observability into our applications. Position: DevOps Lead Exp: 10-15 Years Primary Duties & Responsibilities: Design and implement complex AWS-enabled CI/CD platforms. Be available to assist the Development team to enable them to fully utilize the CI/CD pipeline. Evaluate, recommend and implement tools and technologies for DevOps. Manage code deployments, fixes, updates and related processes. Interact heavily with Management and the Development, Infrastructure and Security teams. Work closely with the Development team to enable them to fully utilize AWS technologies. Actively troubleshoot any issues that arise during testing and production, catching and solving issues before launch. Assist with the automation of our operational processes as needed, with accuracy and in compliance with our security requirements. Spread knowledge on an ongoing basis to all members of the software/IT team. Push for DevOps concepts whenever possible. Required Knowledge, Skills and Abilities: Ten (10) years’ experience in DevOps or a related field. Extensive experience with core AWS platform architecture. Strong experience with Linux-based infrastructures, Linux administration, and AWS. Strong scripting capability and ability to develop scripted AWS infrastructure. Cloud Automation experience with tools such as Terraform, CloudFormation, Ansible, etc. Experience with CI/CD pipeline tools such as Octopus Deploy, ArgoCD, GitHub Actions, etc. Knowledge of scripting languages such as Perl, Ruby, Python, Bash. Familiarity with Microservice-based architectures. Strong familiarity with containerization technologies, such as Docker, Kubernetes, EKS, etc. 
Extensive troubleshooting skills with the ability to spot issues before they become problems. Experience with project management and workflow tools such as Jira. Ability to work independently and manage multiple projects and processes to achieve commitments. Ability to assist the DevOps Manager in leading the team on projects as necessary. Excellent interpersonal and communication (verbal and written) skills to all levels of the organization. Process and technical documentation skills. Time and project management skills, with the capability to prioritize and multitask as needed.

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Summary The DevOps & AWS Engineer should have hands-on experience in building, deploying, and managing cloud infrastructure and automation processes. The candidate should have strong expertise in Amazon Web Services (AWS), as well as proficiency in CI/CD pipeline development, infrastructure as code (IaC), and monitoring. The candidate will play a crucial role in ensuring smooth and efficient development workflows, system scalability, and high availability of services across cloud environments. This role requires strong communication skills, problem-solving abilities, and a collaborative mindset. Essential Job Functions And Operations Assist developers by managing setup processes and minimizing setup-related tasks. Implement, configure, and manage cloud-based infrastructure using Amazon Web Services (AWS) such as EC2, S3, RDS, ECS, Lambda, VPC, IAM, and other AWS services. Work with Java, Spring Boot, and frontend technologies (React, Vue.js). Manage deployment artifacts including JAR files, HTML, CSS, JS files, and Dockerfiles. Prepare scripts for database schema and migration. Create and maintain Kubernetes templates and Jenkins files. Write and manage Terraform or AWS CloudFormation templates to automate the provisioning and configuration of AWS resources. Manage Kubernetes clusters, including services, deployments, and ingress. Support networking tasks, including virtual networking, VPCs, routing, DNS, and HTTPS/TLS. Automate deployment processes using Jenkins and other tools. Oversee application upgrades, database migrations, and rollback procedures. Design and implement disaster recovery plans in AWS to ensure high availability and fault tolerance of critical applications. Automate deployments using tools like Terraform, Ansible, or AWS CodeDeploy. Ensure smooth transitions from development to staging and production environments. Set up AWS CloudWatch to monitor AWS resources and applications in real-time. 
Create custom dashboards and set up alerts to proactively monitor system health. Configure and maintain IAM roles, policies, and groups to ensure secure access to AWS resources. Cost and resource optimization to ensure cloud efficiency. Required Skills And Abilities Proficient in cloud platforms, specifically AWS. Strong understanding of AWS IAM roles, policies, users, groups, and permission management. Expertise in launching, managing, and scaling EC2 instances. Strong experience with Docker, Kubernetes, Terraform, and Jenkins. Expertise in Java, Spring Boot, and frontend technologies (React, Vue.js). Solid understanding of networking concepts (OSI model, TCP/IP, VPC, DNS). Experience with infrastructure automation, including scripting (Bash, Python). Familiarity with secure coding practices and vulnerability management. Strong communication skills, both verbal and written. Proven ability to manage code changes using Git, including branching and merging. Ability to create and maintain documentation for changes, database schemas, and APIs. Proficient in setting up CloudWatch for monitoring, logging, and alerting. Expertise in CI tools like Jenkins or GitLab CI for automating build and test pipelines. Experience with CD tools like Jenkins, AWS CodeDeploy, or ArgoCD for automating the deployment process. Strong understanding of designing and managing CI/CD pipelines to automate build, test, and deployment processes. Financial Responsibilities Monitor and manage costs associated with cloud infrastructure on AWS. Optimize resource usage to ensure cost-efficiency. Supervisory Responsibilities Mentor Junior Developers and Testers and provide guidance on DevOps best practices. Education Bachelor's degree in Computer Science, Engineering, or a related field. Experience And Qualifications Minimum of 3-5 years of experience in DevOps role with a focus on cloud platforms (AWS). Demonstrated experience with Git, Jenkins, Docker, EKS, IAM, VPC, EC2 and Terraform. 
Strong background in managing infrastructure and deploying applications in cloud environments. Experience using Helm for managing Kubernetes applications and simplifying deployments. Hands-on experience in implementing CI/CD pipelines. Experience with secure coding practices and vulnerability management. Ability to work effectively in a collaborative team environment. (ref:hirist.tech)
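The monitoring duties this posting describes include setting up CloudWatch alerts proactively. As a hedged sketch (the alarm name, threshold, and evaluation windows below are assumptions for illustration, not from the posting), the parameters for a CPU alarm can be built as a plain dict; with boto3 you would then pass it to `cloudwatch.put_metric_alarm(**params)`:

```python
def cpu_alarm_params(instance_id, threshold=80.0):
    """Parameters for a CloudWatch alarm on sustained high EC2 CPU."""
    return {
        "AlarmName": "high-cpu-" + instance_id,  # hypothetical naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,             # 5-minute aggregation windows
        "EvaluationPeriods": 3,    # fire only after 15 minutes above threshold
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
```

Keeping the parameters in a function like this makes the same alarm definition reusable across environments, which pairs naturally with the Terraform/CloudFormation automation the role calls for.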

Posted 2 days ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Our Company Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We design intelligent platforms that solve complex business problems and deliver measurable impact through cutting-edge AI. Overview: We are seeking an experienced Solution Architect with a strong foundation in software architecture and a working knowledge of AI-based products or platforms. In this role, you will be responsible for designing robust, scalable, and secure architectures that support AI-driven applications and enterprise systems. You will work closely with cross-functional teams, including data scientists, product managers, and engineering leads, to bridge the gap between business needs, technical feasibility, and AI. What are we looking for in an ideal candidate? Architect end-to-end solutions for enterprise and product-driven platforms, including components such as data pipelines, APIs, AI model integration, cloud infrastructure, and user interfaces. Guide teams in selecting the right technologies, tools, and design patterns to build scalable systems. Collaborate with AI/ML teams to understand model requirements and ensure smooth deployment and integration into production. Define system architecture diagrams, data flow, service orchestration, and infrastructure provisioning using modern tools. Work closely with stakeholders to translate business needs into technical solutions, with a focus on scalability, performance, and security. Provide leadership on best practices for software development, DevOps, and cloud-native architecture. Conduct architecture reviews and ensure alignment with security, compliance, and performance. What skills do you need? Requirements 10+ years of experience in software architecture or solution design roles. Proven experience designing systems using microservices, RESTful APIs, event-driven architecture, and cloud-native technologies. 
Hands-on experience with at least one major cloud provider: AWS, GCP, or Azure. Familiarity with AI/ML platforms or components, such as integrating AI models, MLOps pipelines, or inference services. Understanding of data architectures, including data lakes, streaming, and ETL pipelines. Strong experience with containerization (Docker, Kubernetes) and DevOps principles. Ability to lead technical discussions, make design trade-offs, and communicate with both technical and non-technical stakeholders. Preferred Qualifications: Exposure to AI model lifecycle management, prompt engineering, or real-time inference workflows. Experience with infrastructure-as-code (Terraform, Pulumi). Knowledge of GraphQL, gRPC, or serverless architectures. Previous experience working in AI-driven product companies or digital transformation initiatives. What We Offer: High-impact role in designing intelligent systems that shape the future of AI adoption. Work with forward-thinking engineers, researchers, and innovators. Strong focus on career growth, learning, and technical leadership. Compensation is not a constraint for the right candidate. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, scalable, and future-ready digital platforms that drive the next wave of AI-powered transformation. Role Overview We are seeking a highly skilled and experienced Senior Node.js Developer with 5+ years of hands-on experience in backend development. As part of our engineering team, you will be responsible for architecting and building scalable APIs, services, and infrastructure that power high-performance AI-driven applications. You'll collaborate with front-end developers, DevOps, and data teams to ensure fast, secure, and efficient back-end functionality that meets the needs of modern AI-first products. Key Responsibilities Design, build, and maintain scalable server-side applications and APIs using Node.js and related frameworks. Implement RESTful and GraphQL APIs for data-driven and real-time applications. Collaborate with front-end, DevOps, and data teams to build seamless end-to-end solutions. Optimize application performance, scalability, and security. Write clean, maintainable, and well-documented code. Integrate with third-party services and internal microservices. Apply best practices in code quality, testing (unit/integration), and continuous integration/deployment. Troubleshoot production issues and implement monitoring and alerting solutions. Requirements 5+ years of professional experience in backend development using Node.js. Proficiency in JavaScript (ES6+) and strong experience with Express.js, NestJS, or similar frameworks. Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB). Strong understanding of API security, authentication (OAuth2, JWT), and rate limiting. Experience building scalable microservices and working with message queues (e.g., RabbitMQ, Kafka).
Familiarity with containerized applications using Docker and orchestration via Kubernetes. Proficient in using Git, CI/CD pipelines, and version control best practices. Solid understanding of performance tuning, caching, and system design. Preferred Qualifications Experience in cloud platforms like AWS, GCP, or Azure. Exposure to building backends for AI/ML platforms, data pipelines, or analytics dashboards. Familiarity with GraphQL, WebSockets, or real-time communication. Knowledge of infrastructure-as-code tools like Terraform is a plus. Experience with monitoring tools like Prometheus, Grafana, or New Relic. What We Offer The chance to work on cutting-edge products leveraging AI and intelligent automation. A high-growth, innovation-driven environment with global exposure. Access to modern development tools and cloud-native technologies. Attractive compensation - no constraints for the right candidate. (ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

About The Role WebCraft IT is seeking a skilled Full Stack Developer with expertise in C#, ASP.NET, Azure, SQL, DevOps, and Data Structures & Algorithms (DSA). If you're passionate about building scalable applications and optimizing cloud-based solutions, we'd love to hear from you! Key Responsibilities Develop and maintain applications using C#, ASP.NET, and Node.js. Build and optimize SQL databases (queries, indexing, performance tuning). Implement Azure DevOps CI/CD pipelines and Infrastructure as Code (Terraform). Secure authentication and access management using Entra ID (Azure AD). Debug, troubleshoot, and optimize backend performance. Must-Have Skills C#, ASP.NET, SQL, Azure DevOps, Terraform, Entra ID. Strong command over Data Structures & Algorithms (DSA). Experience in building APIs & cloud solutions. Excellent problem-solving & debugging skills. Proficiency in Azure Storage services and infrastructure automation with Terraform. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence. Role Overview We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus. What We Offer Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package - no constraints for the right candidate. (ref:hirist.tech)

Posted 2 days ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, secure, and scalable digital platforms that power the future of AI across industries. Role Overview We are looking for a Senior Security Specialist with 8+ years of experience in cybersecurity, cloud security, and application security. You will be responsible for identifying, mitigating, and preventing threats across our technology landscape - particularly in AI-powered, data-driven environments. This role involves leading penetration testing efforts, managing vulnerability assessments, and implementing best-in-class security tools and practices to protect our platforms and clients. Key Responsibilities Design and implement robust security architectures for cloud-native and on-prem environments. Conduct penetration testing (internal/external, network, application, API) and deliver clear remediation strategies. Perform regular vulnerability assessments using industry-standard tools and frameworks. Lead threat modeling and risk assessments across systems, services, and data pipelines. Collaborate with development and DevOps teams to integrate security in SDLC and CI/CD pipelines (DevSecOps). Define and enforce security policies, incident response procedures, and access controls. Monitor for security breaches and investigate security events using SIEM and forensic tools. Ensure compliance with global standards such as ISO 27001, SOC 2, GDPR, and HIPAA. Provide guidance on secure implementation of AI/ML components and data protection strategies. Requirements 8+ years of experience in information security, application security, or cybersecurity engineering. Proficient in penetration testing methodologies and use of tools such as Burp Suite, Metasploit, Nmap, Wireshark, Nessus, OWASP ZAP, Qualys, etc. 
Deep experience in vulnerability management, patching, and security hardening practices. Strong understanding of OWASP Top 10, CWE/SANS Top 25, API security, and secure coding principles. Hands-on experience with cloud security (AWS, Azure, or GCP), IAM, firewalls, WAFs, encryption, and endpoint security. Familiarity with SIEM, EDR, IDS/IPS, and DLP solutions. Knowledge of DevSecOps and tools like Terraform, Kubernetes, Docker, etc. Excellent problem-solving, analytical, and incident-handling capabilities. Preferred Qualifications Certifications such as CISSP, CISM, CEH, OSCP, or AWS Security Specialty. Experience working on security aspects of AI/ML platforms, data pipelines, or model inferencing. Familiarity with governance and compliance frameworks (e.g., PCI-DSS, HIPAA). Experience in secure agile product environments and threat modeling techniques. What We Offer A mission-critical role securing next-gen AI systems. Opportunity to work with an innovative and fast-paced tech company. High visibility and leadership opportunities in a growing security function. Compensation is not a constraint for the right candidate. (ref:hirist.tech)

Posted 2 days ago

Apply

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Job Role We are on a mission to build scalable, high-performance systems, and we're looking for a Backend Engineer (SDE II) who can design, build, and maintain services that power our core platform. Key Responsibilities Architect and implement scalable backend systems using Python (Django/FastAPI) and TypeScript (Node.js). Lead system design discussions and own the design of backend modules and infrastructure. Design and optimize PostgreSQL schemas and queries for performance and reliability. Build microservices and deploy them using Docker and Kubernetes. Drive DevOps best practices including CI/CD, infrastructure automation, and cloud deployment. Integrate and manage RabbitMQ for asynchronous processing and event-driven workflows. Set up and manage log aggregation, monitoring, and alerting using tools like Prometheus, Grafana, and the ELK stack. Conduct code reviews, share knowledge, and mentor junior engineers and interns. Proactively monitor and improve the reliability, scalability, and performance of backend systems. Collaborate with cross-functional teams on features, architecture, and tech strategy. Experience & Qualifications 3-6 years of experience in backend development with strong command of Python and TypeScript. Expertise in building web services and APIs using Django, FastAPI, or Node.js. Strong knowledge of relational databases, particularly PostgreSQL. Solid experience with Kubernetes and Docker for deploying and managing microservices. Experience in DevOps operations, CI/CD pipelines, and infrastructure as code. Proficiency in RabbitMQ or similar message queue technologies. Hands-on experience with monitoring, logging, and alerting stacks (e.g., ELK, Prometheus, Grafana). Strong system design skills: able to design scalable, fault-tolerant, and maintainable systems. Familiarity with Git workflows, agile processes, and collaborative software development. Good To Have Experience with cloud platforms like AWS, Azure, or GCP.
Knowledge of Helm, Terraform, or similar IaC tools. Understanding of GraphQL and streaming data pipelines (Kafka, Redis streams, etc.). Exposure to event-driven architectures and distributed systems. Publicly available GitHub contributions or tech blog posts. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Key Responsibilities Should be able to containerise the applications using Docker or Kubernetes. Can design, build & maintain software infrastructure for complete CI/CD for the build, deployment, and testing of the department's systems globally. Should be able to deploy and modify existing templates to deliver production-grade Cloud Infrastructure in AWS. Should be able to upgrade the applications, databases, etc. Should be able to do infrastructure provisioning using Terraform. Should be able to write scripts and automate manual tasks using Ansible, Shell Scripting, Python, Ansible Tower, etc. Build and scale the technology infrastructure to meet rapidly increasing demand. Monitor and troubleshoot platform issues. Required Skills 5+ years of relevant experience. Expertise in Docker. Experience in CI/CD tools like Jenkins, AWS CodePipeline, etc. Experience with Kubernetes (1.28+) and containerized applications. In-depth knowledge of AWS services and hands-on experience in AWS provisioning. Understanding of microservices architecture and debugging/investigation techniques. Strong understanding of systems, networking, and troubleshooting techniques. Experience in automated build pipelines, continuous integration, and continuous deployment. Knowledge of cloud monitoring, logging, and cost management tools. Strong programming/scripting knowledge: Python, Go, Ansible, Shell Scripting. Ability to operate in an agile environment. Experience with other IaaS platforms is a plus. (ref:hirist.tech)

Posted 2 days ago

Apply

4.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Key Responsibilities Hands-on experience with multi-cloud environments (e.g., Azure, AWS, GCP). Design and maintain AWS infrastructure (EC2, S3, VPC, RDS, IAM, Lambda, and other AWS services). Implement security best practices (IAM, GuardDuty, Security Hub, WAF). Configure and troubleshoot AWS networking, hybrid connectivity, and URL filtering solutions (VPC, TGW, Route 53, VPNs, Direct Connect). Experience managing physical firewalls (Palo Alto, Cisco, etc.). Manage, troubleshoot, configure, and optimize services like Apache, NGINX, and MySQL/PostgreSQL on Linux/Windows. Ensure Linux/Windows server compliance with patch management and security updates. Provide L2/L3 support for Linux and Windows systems, ensuring minimal downtime and quick resolution of incidents. Collaborate with DevOps, application, and database teams to ensure seamless integration of infrastructure solutions. Automate tasks using Terraform, CloudFormation, or scripting (Bash, Python). Monitor and optimize cloud resources using CloudWatch, Trusted Advisor, and Cost Explorer. Requirements: 4+ years of AWS experience and system administration in Linux & Windows. Proficiency in AWS networking, security, and automation tools. Certifications: AWS Solutions Architect (required), RHCSA/MCSE (preferred). Strong communication and problem-solving skills. (ref:hirist.tech)

Posted 2 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python. Implement and manage ETL/ELT processes to ensure seamless data integration and transformation. Ensure information security and compliance with data governance standards. Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems. Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team. Primary Skills Enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Integration of data sets using AWS services such as Glue and Lambda functions. Utilization of AWS SNS to send emails and alerts. Authoring ETL processes using Python and PySpark. ETL process monitoring using CloudWatch events. Connecting with different data sources like S3 and validating data using Athena. Experience in CI/CD using GitHub Actions. Proficiency in Agile methodology. Extensive working experience with advanced SQL and a complex understanding of SQL. Secondary Skills Experience working with Snowflake and understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies. Competencies / Experience Deep technical skills in AWS Glue (Crawler, Data Catalog): 10+ years. Hands-on experience with Python and PySpark: 5+ years. PL/SQL experience: 5+ years. CloudFormation and Terraform: 5+ years. CI/CD GitHub Actions: 5+ years. Experience with BI systems (Power BI, Tableau): 5+ years. Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 5+ years. Additionally, familiarity with any of the following is highly desirable: Jira, Gi (ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description We are seeking an experienced Azure Cloud & Database Administrator to join our dynamic team. The ideal candidate will be responsible for configuring, managing, and supporting Azure cloud services and SQL Server databases, along with handling cloud migrations and DevOps pipelines. This role requires a proactive professional with strong problem-solving skills and hands-on experience in cloud solutions, database administration, and DevOps. Responsibilities: Configure, manage, and maintain various Azure services to ensure optimal performance and availability. Perform hands-on administration of SQL Server databases, including backup and disaster recovery planning. Manage and support Azure Data Lake (ADLS) and Azure Data Factory (ADF) solutions. Lead the migration of on-premises instances to sustainable and scalable cloud-based architectures. Design, implement, and manage pipelines using Terraform and Azure DevOps for infrastructure automation and deployment. Execute remediation activities, including the creation and configuration of new resources such as Azure Databricks (ADB), ADLS, and ADF. Configure mount points and manage storage solutions within Azure environments. Facilitate notebook migrations between different Databricks instances. Conduct proactive monitoring, troubleshooting, and resolution of real-time issues in Azure environments and SQL databases. Analyze system requirements, refine and automate recurring processes, and maintain clear documentation of changes and procedures. Collaborate with development teams to assist in query tuning and schema optimization. Provide 24x7 support for critical production systems to ensure minimal downtime and business continuity. Implement and manage security processes and services within Azure Storage and related cloud environments. Qualifications: Bachelor's degree in Information Technology, Computer Science, or a related field. Proven hands-on experience in Azure cloud services, database administration, and DevOps practices.
Strong understanding of cloud migration, storage solutions, security best practices, and disaster recovery planning. Experience with infrastructure-as-code tools such as Terraform. Strong analytical and problem-solving abilities with attention to detail. Excellent communication and collaboration skills. (ref:hirist.tech)

Posted 2 days ago

Apply

3.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Data Engineer (3-4 Years Experience) - Real-time & Batch Processing | AWS, Kafka, ClickHouse, Python Location: NOIDA. Experience: 3-4 years. Job Type: Full-Time. About The Role We are looking for a skilled Data Engineer with 3-4 years of experience to design, build, and maintain real-time and batch data pipelines for handling large-scale datasets. You will work with AWS, Kafka, Cloudflare Workers, Python, ClickHouse, Redis, and other modern technologies to enable seamless data ingestion, transformation, merging, and storage. Bonus: Web Data Analytics or Programmatic Advertising knowledge is a big plus! Responsibilities Real-Time Data Processing & Transformation: Build low-latency, high-throughput real-time pipelines using Kafka, Redis, Firehose, Lambda, and Cloudflare Workers. Perform real-time data transformations like filtering, aggregation, enrichment, and deduplication using Kafka Streams, Redis Streams, or AWS Lambda. Merge data from multiple real-time sources into a single structured dataset for analytics. Batch Data Processing & Transformation Develop batch ETL/ELT pipelines for processing large-scale structured and unstructured data. Perform data transformations, joins, and merging across different sources in ClickHouse, AWS Glue, or Python. Optimize data ingestion, transformation, and storage workflows for efficiency and reliability. Data Pipeline Development & Optimization Design, develop, and maintain scalable, fault-tolerant data pipelines for real-time & batch processing. Optimize data workflows to reduce latency, cost, and compute load. Data Integration & Merging Combine real-time and batch data streams for unified analytics. Integrate data from various sources (APIs, databases, event streams, cloud storage). Cloud Infrastructure & Storage Work with AWS services (S3, EC2, ECS, Lambda, Firehose, RDS, Redshift, ClickHouse) for scalable data processing.
Implement data lake and warehouse solutions using S3, Redshift, and ClickHouse. Data Visualization & Reporting Work with Power BI, Tableau, or Grafana to create real-time dashboards and analytical reports. Web Data Analytics & Programmatic Advertising (Big Plus!): Experience working with web tracking data, user behavior analytics, and digital marketing datasets. Knowledge of programmatic advertising, ad impressions, clickstream data, and real-time bidding (RTB) analytics. Monitoring & Performance Optimization Implement monitoring & logging of data pipelines using AWS CloudWatch, Prometheus, and Grafana. Tune Kafka, ClickHouse, and Redis for high performance. Collaboration & Best Practices Work closely with data analysts, software engineers, and DevOps teams to enhance data accessibility. Follow best practices for data governance, security, and compliance. Must-Have Skills Programming: Strong experience in Python and JavaScript. Real-time Data Processing & Merging: Expertise in Kafka, Redis, Cloudflare Workers, Firehose, Lambda. Batch Processing & Transformation: Experience with ClickHouse, Python, AWS Glue, SQL-based transformations. Data Storage & Integration: Experience with MySQL, ClickHouse, Redshift, and S3-based storage. Cloud Technologies: Hands-on with AWS (S3, EC2, ECS, RDS, Firehose, ClickHouse, Lambda, Redshift). Visualization & Reporting: Knowledge of Power BI, Tableau, or Grafana. CI/CD & Infrastructure as Code (IaC): Familiarity with Terraform, CloudFormation, Git, Docker, and Kubernetes. (ref:hirist.tech)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Thane, Maharashtra, India

On-site

About Quantanite Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity. About The Role As a DevOps Engineer you will work closely with our global teams to learn about the business and technical requirements and formulate the necessary infrastructure and resource plans to properly support the growth and maintainability of various systems. Key Responsibilities Implement a diverse set of development, testing, and automation tools, as well as manage IT infrastructure. Plan the team structure and activities, and actively participate in project management. Comprehend customer requirements and project Key Performance Indicators (KPIs). Manage stakeholders and handle external interfaces effectively. Set up essential tools and infrastructure to support project development. Define and establish DevOps processes for development, testing, release, updates, and support. Possess the technical expertise to review, verify, and validate software code developed in the project. Engage in software engineering tasks, including designing and developing systems to enhance reliability, scalability, and operational efficiency through automation. Collaborate closely with agile teams to ensure they have the necessary tools for seamless code writing, testing, and deployment, promoting satisfaction among development and QA teams.
Monitor processes throughout their lifecycle, ensuring adherence, identifying areas for improvement, and minimizing wastage. Advocate and implement automated processes whenever feasible. Identify and deploy cybersecurity measures by continuously performing vulnerability assessments and managing risk. Handle incident management and conduct root cause analysis for continuous improvement. Coordinate and communicate effectively within the team and with customers. Build and maintain continuous integration (CI) and continuous deployment (CD) environments, along with associated processes and tools. About The Candidate Qualifications and Skills: Proven 5 years of experience with Linux-based infrastructure and proficiency in a scripting language. Must have solid cloud computing skills such as network management, cloud computing, and cloud databases in any one of the public clouds (AWS, Azure, or GCP). Must have hands-on experience in setting up and managing cloud infrastructure like Kubernetes, VPC, VPN, Virtual Machines, Cloud Databases, etc. Experience in IaC (Infrastructure as Code) tools like Ansible and Terraform. Must have hands-on experience in coding and scripting in at least one of the following: Shell, Python, Groovy. Experience as a DevOps Engineer or similar software engineering role. Experienced in establishing an optimized CI/CD environment relevant to the project. Automation using scripting languages like Perl/Python and shell scripts like Bash and Csh. Good knowledge of configuration and build tools like Bazel, Jenkins, etc. Good knowledge of repository management tools like Git, Bitbucket, etc. Good knowledge of monitoring solutions and generating insights for reporting. Excellent debugging skills/strategies. Excellent communication skills. Experienced in working in an Agile environment. Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return.
In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. (ref:hirist.tech)

Posted 2 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
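To make the "infrastructure as code" idea concrete, here is a minimal, hypothetical Terraform configuration; the provider version, region, bucket name, and tags are illustrative assumptions, not a recommended production setup:

```hcl
# Minimal illustrative Terraform configuration (hypothetical names and region).
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumption: AWS provider 5.x
    }
  }
}

provider "aws" {
  region = "ap-south-1" # assumption: Mumbai region
}

# Declares the desired state: one S3 bucket. Terraform computes the
# changes needed to reach this state and applies them.
resource "aws_s3_bucket" "example" {
  bucket = "example-terraform-demo-bucket" # placeholder bucket name

  tags = {
    ManagedBy = "terraform"
  }
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` would download the provider, preview the proposed change, and create the bucket; this declarative plan/apply workflow is what distinguishes Terraform from imperative provisioning scripts.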

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
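Several of the questions above (remote state storage, sensitive data, backends, modules) can be grounded in a short configuration sketch. The bucket, table, and module names below are hypothetical placeholders, not a prescribed setup:

```hcl
# Hypothetical remote-state backend: state stored in S3 with DynamoDB locking.
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"            # placeholder bucket
    key            = "prod/network/terraform.tfstate"
    region         = "ap-south-1"                        # assumed region
    dynamodb_table = "example-tf-locks"                  # placeholder lock table
    encrypt        = true
  }
}

# Sensitive input: Terraform redacts this value in plan/apply output.
variable "db_password" {
  type      = string
  sensitive = true
}

# Module usage: reusable configuration sourced from a hypothetical local path.
module "network" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"
}
```

One caveat worth mentioning in an interview: `sensitive = true` only suppresses the value in console output; it is still written to the state file in plaintext, which is a key reason remote state is typically encrypted and access-controlled.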

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
