1.0 - 6.0 years
1 - 6 Lacs
Kochi, Coimbatore, Thiruvananthapuram
Hybrid
Urgent opening for a Cloud DevOps Support Engineer with EY GDS at Kochi, Trivandrum, or Coimbatore. Experience: 1-6 years. Shift: rotational. Mandatory skills: DevOps, CI/CD, Terraform, Azure or AWS. Please apply if available for a virtual interview on weekdays. Staff (1-3 yrs): https://careers.ey.com/job-invite/1609605/ Senior (3-6 yrs): https://careers.ey.com/job-invite/1609598/
Posted 3 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
Providing technical guidance and troubleshooting support throughout project delivery
Collaborating with stakeholders to gather requirements and propose effective migration strategies
Acting as a trusted advisor to customers on industry trends and emerging technologies
Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer.
That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Basic Qualifications
3+ years of experience in cloud architecture and implementation
Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience
Experience in large-scale application/server migration from on-premises to cloud
Good knowledge of Compute, Storage, Security and Networking technologies
Good understanding of and experience dealing with firewalls, VPCs, network routing, Identity and Access Management and security implementation
Preferred Qualifications
AWS experience preferred, with proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation)
AWS Professional level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional) preferred
Experience with automation and scripting (e.g., Terraform, Python)
Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
Experience assessing the source architecture and mapping it to the relevant target architecture in the cloud environment, with knowledge of capacity and performance management
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - AWS Proserve IN – Haryana
Job ID: A2943431
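For a role like this, a quick way to get hands-on is scripting an infrastructure assessment against an AWS account. Below is a minimal, hedged sketch using boto3 (the region and any names are illustrative, not taken from the posting) that inventories EC2 instances as an input to a migration or right-sizing discussion.

```python
# Minimal EC2 inventory sketch with boto3 (illustrative; assumes AWS credentials
# are already configured via the environment or an IAM role).
import boto3

def list_instances(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    rows = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                rows.append({
                    "id": inst["InstanceId"],
                    "type": inst["InstanceType"],
                    "state": inst["State"]["Name"],
                    "name": tags.get("Name", "-"),
                })
    return rows

if __name__ == "__main__":
    for row in list_instances():
        print(f'{row["id"]:20} {row["type"]:12} {row["state"]:10} {row["name"]}')
```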
Posted 3 days ago
5.0 - 8.0 years
10 - 18 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities
Effectively utilize multiple software development and deployment methodologies, e.g., TDD, DDD, along with secure coding practices, development best practices and Agile/DevOps/DevSecOps principles. Identify continuous improvement opportunities in the existing solutions and drive implementation. Act as a code reviewer and apply best practices for optimal design solutions. Act as the first point of contact and resolve technical issues/impediments for SDE I & II. Basic engineering, system administration/provisioning, software development (programming), support, testing and system infra provisioning foundation. Basic understanding and ability to utilize multiple software development methodologies, e.g., Test Driven Development (TDD), Domain Driven Development (DDD), along with secure coding practices, development best practices and Agile/DevOps principles.
Preferred candidate profile
A Bachelor's or Master's degree in Computer Science or a related field. A minimum of 5 to 8 years of working experience with DevOps consulting, assessment, implementing CI/CD pipelines and handling end-to-end DevOps activities in a project. Good experience with CI servers like Jenkins, Artifactory, SonarQube and others, and their application to create CI/CD pipelines. Good knowledge of scripting languages like Groovy, Bash and PowerShell. Proficiency in writing Ansible/Chef playbooks and Ansible Tower. Should have worked on one of the private clouds like VMware or OpenStack. Hands-on experience and expertise working with containerization tools like Docker. Hands-on experience and expertise in container orchestration tools like Kubernetes and its ecosystem, like Rancher, OpenShift, etc. Good knowledge of Terraform. Hands-on with Automated Environment Provisioning (AEP). Good knowledge of build tools like Maven, ANT, Gradle. Good knowledge of WebSphere Liberty servers. Extensive knowledge of microservices and their pipelines. Proficiency in dealing with Java-based applications with Maven/Gradle. Good knowledge of cloud platforms, mainly AWS: EC2, VPC, Route53, load balancer, etc. Good to have knowledge of Node/React-based CI/CD pipelines. Experience in designing automated CI/CD pipelines for new software projects, right from project kickoff to production deployment and maintenance. Experience and deep understanding of the DevSecOps ecosystem. Experience with Burp Suite and TFS. Experience with Agile, Jira and ServiceNow.
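As a small illustration of the kind of pipeline automation described above, here is a hedged Python sketch that polls the Jenkins JSON API for the result of a job's last build. The base URL, job name, and credentials are placeholders, not details from the posting.

```python
# Poll the last build status of a Jenkins job via its JSON API.
# Assumes an API token generated in Jenkins; all identifiers here are placeholders.
import requests

JENKINS_URL = "https://jenkins.example.com"   # hypothetical
JOB_NAME = "sample-service-pipeline"          # hypothetical
USER, API_TOKEN = "ci-bot", "changeme"

def last_build_status(job: str) -> str:
    url = f"{JENKINS_URL}/job/{job}/lastBuild/api/json"
    resp = requests.get(url, auth=(USER, API_TOKEN), timeout=30)
    resp.raise_for_status()
    data = resp.json()
    if data.get("building"):
        return "RUNNING"
    return data.get("result", "UNKNOWN")  # e.g. SUCCESS, FAILURE, ABORTED

if __name__ == "__main__":
    print(f"{JOB_NAME}: {last_build_status(JOB_NAME)}")
```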
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description
Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.
Outcomes
Interpret the application/feature/component design to develop the same in accordance with specifications. Code, debug, test, document and communicate product/component/feature development stages. Validate results with user representatives; integrate and commission the overall solution. Select appropriate technical options for development, such as reusing, improving or reconfiguring existing components or creating own solutions. Optimise efficiency, cost and quality. Influence and improve customer satisfaction. Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures Of Outcomes
Adherence to engineering process and standards (coding standards). Adherence to project schedule/timelines. Number of technical issues uncovered during the execution of the project. Number of defects in the code. Number of defects post delivery. Number of non-compliance issues. On-time completion of mandatory compliance trainings.
Outputs Expected
Code: Code as per design. Follow coding standards, templates and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents, design documentation, requirements, test cases/results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review and create unit test cases, scenarios and execution. Review the test plan created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions to customers. Complete relevant domain certifications.
Manage Project: Manage delivery of modules and/or manage user stories.
Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort estimation for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review the reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for Applications/Features/Business Components/Data Models.
Interface With Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos.
Manage Team: Set FAST goals and provide feedback. Understand aspirations of team members and provide guidance, opportunities, etc. Ensure the team is engaged in the project.
Certifications: Take relevant domain/technology certifications.
Skill Examples
Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Break down complex problems into logical components. Develop user interfaces and business software components. Use data models. Estimate time and effort required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Make quick decisions on technical/project-related challenges. Manage a team, mentor and handle people-related issues in the team. Maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers and other parallel practices. Set goals for self and team; provide feedback to team members. Create and articulate impactful technical presentations. Follow a high level of business etiquette in emails and other business communication. Drive conference calls with customers, addressing customer questions. Proactively ask for and offer help. Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks. Build confidence with customers by meeting deliverables on time with quality. Estimate the time, effort and resources required for developing/debugging features/components. Make appropriate utilization of software/hardware.
Strong analytical and problem-solving abilities.
Knowledge Examples
Appropriate software programs/modules. Functional and technical designing. Programming languages (proficient in multiple skill clusters). DBMS. Operating systems and software platforms. Software Development Life Cycle. Agile methods (Scrum or Kanban). Integrated development environments (IDE). Rapid application development (RAD). Modelling technology and languages. Interface definition languages (IDL). Knowledge of the customer domain and a deep understanding of the sub-domain where the problem is solved.
Additional Comments
Python Developer: 5+ years of work experience using Python and AWS for developing enterprise software applications. Experience in Apache Kafka, including topic creation, message optimization, and efficient message processing. Skilled in Docker and container orchestration tools such as Amazon EKS or ECS. Proven experience designing and developing microservices and RESTful APIs using Spring Boot. Strong experience managing AWS components, including Lambda (Java), API Gateway, RDS, EC2, CloudWatch. Experience working in an automated DevOps environment, using tools like Jenkins, SonarQube, Nexus, and Terraform for deployments. Hands-on experience with Java-based web services, RESTful approaches, ORM technologies, and SQL procedures in Java. Experience with Git for code versioning and commit management. Experience working in Agile teams with a strong focus on collaboration and iterative development. Ability to implement changes following standard turnover procedures, with a CI/CD focus. Bachelor's or Master's degree in Computer Science, Information Systems or equivalent.
Skills: Python, AWS, ECS
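The Kafka requirement above (topic consumption and efficient message processing) can be illustrated with a short, hedged Python sketch using the kafka-python client; the topic name, broker address, and consumer group are hypothetical placeholders.

```python
# Minimal Kafka consumer sketch with kafka-python; broker, topic and group
# names are illustrative placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="order-processors",
    auto_offset_reset="earliest",
    enable_auto_commit=False,                   # commit only after successful handling
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

def handle(event: dict) -> None:
    # Placeholder for real processing logic (validation, enrichment, persistence).
    print(f'order {event.get("id")} total={event.get("total")}')

for message in consumer:
    handle(message.value)
    consumer.commit()  # at-least-once: commit offsets after processing
```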
Posted 3 days ago
12.0 - 20.0 years
35 - 40 Lacs
Navi Mumbai
Work from Office
Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members. Key Responsibilities: Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions. Support ongoing client projects, addressing technical challenges and ensuring smooth delivery. Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution. Review code and provide feedback to junior engineers to maintain high quality and scalable solutions. Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka. Lead by example in object-oriented development, particularly using Scala and Java. Translate complex requirements into clear, actionable technical tasks for the team. Contribute to the development of ETL processes for integrating data from various sources. Document technical approaches, best practices, and workflows for knowledge sharing within the team. Required Skills and Qualifications: 8+ years of professional experience in Big Data development and engineering. Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka. Solid object-oriented development experience with Scala and Java. Strong SQL skills with experience working with large data sets. Practical experience designing, installing, configuring, and supporting Big Data clusters. Deep understanding of ETL processes and data integration strategies. Proven experience mentoring or supporting junior engineers in a team setting. Strong problem-solving, troubleshooting, and analytical skills. Excellent communication and interpersonal skills. Preferred Qualifications: Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.). Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc). Exposure to Agile or DevOps practices in Big Data project environments. What We Offer: Opportunity to work on challenging, high-impact Big Data projects. Leadership role in shaping and mentoring the next generation of engineers. Supportive and collaborative team culture. Flexible working environment Competitive compensation and professional growth opportunities.
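As a small companion to the Spark/Hive/HDFS stack listed above, here is a hedged PySpark sketch of a batch aggregation job; the HDFS paths and column names are invented for illustration only.

```python
# Batch aggregation sketch with PySpark; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-event-aggregation")
    .enableHiveSupport()          # assumes a Hive metastore is available
    .getOrCreate()
)

events = spark.read.parquet("hdfs:///data/raw/events/")   # hypothetical path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").partitionBy("event_date") \
    .parquet("hdfs:///data/curated/daily_event_counts/")

spark.stop()
```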
Posted 3 days ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description
Senior Site Reliability Engineer, Pune
At NielsenIQ Digital Shelf, we help the world's leading brands measure and improve their online performance. Formerly known as Data Impact, we've recently joined NielsenIQ. Today, we operate at the intersection of scale and agility — a tech-driven environment backed by a global organization. Our Infrastructure team plays a critical, cross-functional role: we build and operate the core platforms that power our applications, ensure reliability, security, and efficiency, and empower development teams to move faster and safer. As a Senior Site Reliability Engineer, you'll help drive our infrastructure forward — designing resilient systems, optimizing performance, and automating at scale. You'll be part of a team that takes pride in owning foundational services used across the company.
Responsibilities:
Design, maintain, and evolve our infrastructure, primarily on Google Cloud Platform, with components on AWS and OVH (bare metal)
Improve and standardize our CI/CD pipelines, monitoring, and observability stack
Develop and maintain our Infrastructure as Code using Terraform, Ansible, and custom Bash/Python scripts
Manage identity and access (IAM), SSO, and contribute to a security-first infrastructure posture
Automate infrastructure provisioning and environment management for dev and production teams
Define and monitor SLOs, lead post-mortems, and foster a culture of continuous improvement
Mentor other team members and help spread SRE best practices across the organization
Qualifications
5+ years of experience in SRE, DevOps, or Infrastructure Engineering roles in cloud-based environments
Strong hands-on experience with GCP and/or AWS, Terraform, Ansible, and infrastructure automation
Solid grasp of SRE principles: reliability, availability, incident response, automation
Comfortable working with hybrid infrastructure (cloud + dedicated hardware)
Familiar with infrastructure security, access management, and compliance best practices
A collaborative mindset, technical leadership, and a drive to elevate engineering practices
Additional Information
Our Benefits: Flexible working environment, Volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP)
About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
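The posting asks for defining and monitoring SLOs. As a rough illustration of the underlying arithmetic (not NIQ's actual process), the sketch below computes the remaining error budget for a simple availability SLO from good/total request counts.

```python
# Error-budget arithmetic for an availability SLO; numbers are made up.
def error_budget_report(total_requests: int, failed_requests: int,
                        slo_target: float = 0.999) -> dict:
    availability = 1 - failed_requests / total_requests
    allowed_failures = total_requests * (1 - slo_target)   # total error budget
    budget_remaining = 1 - failed_requests / allowed_failures
    return {
        "availability": round(availability, 5),
        "slo_target": slo_target,
        "allowed_failures": int(allowed_failures),
        "budget_remaining": round(budget_remaining, 3),     # <0 means budget blown
    }

if __name__ == "__main__":
    # Example: 4,310 failures out of 12.5M requests against a 99.9% SLO.
    print(error_budget_report(total_requests=12_500_000, failed_requests=4_310))
```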
Posted 3 days ago
14.0 - 20.0 years
50 - 70 Lacs
Bengaluru
Hybrid
Overview: As an SRE manager, you are responsible for the availability and reliability of Calix's cloud. At Calix, Site Reliability Engineering combines software and systems engineering to build and run large-scale, distributed, fault-tolerant systems. You would be responsible for leading a team of Site Reliability Engineers and overseeing the reliability, scalability, and maintainability of Calix's critical infrastructure. This includes building and maintaining automation tools, managing on-call rotations, collaborating with development teams, and ensuring systems meet service level objectives (SLOs), all while prioritizing continuous improvement and a strong focus on infrastructure health and stability within the Calix platform, leveraging tools like Terraform, observability frameworks from the Grafana Labs ecosystem, and Google Cloud Platform.
Qualifications:
- Strong experience as an SRE manager with a proven track record of managing large-scale, highly available systems.
- Expertise in cloud computing platforms (preferably Google Cloud Platform).
- Knowledge of core operating system principles, networking fundamentals, and systems management.
- Programming skills in languages like Python and Go.
- Proven experience building and leading SRE teams, including hiring, coaching, and performance management.
- Deep understanding and expertise in building and maintaining scalable open-source monitoring tools and backend storage.
- Experience with incident management processes and best practices.
- Excellent communication and collaboration skills to work with cross-functional teams.
- Knowledge of SRE principles, including error budgets, fault analysis, and reliability engineering concepts.
Education:
- B.S. or M.S. in Computer Science or an equivalent field.
Role & responsibilities
Posted 3 days ago
2.0 - 7.0 years
13 - 17 Lacs
Mumbai
Work from Office
About The Role
Play a key role in meeting the business objectives through timely implementations of software products in the Investments and Wealth
Understand a given technology product architecture & design and work on continuous improvement
Carry out deployments ensuring proper release/version control
Identify & build test scenarios for carrying out System Integration Testing (SIT) before releasing the system for Functional Testing
Support testing activities by setting up data and L1 analysis of reported issues
Manage the application in DR drills
Liaise with the software solution provider/vendor for raising issues and tracking them to closure
Interact with various internal teams in Kotak to get the required resources
Job Requirement:
Education Background: Engineering Graduate
Should have worked on IT projects in the area of Investments/Wealth Management/Capital Markets
Should possess good communication skills
Knowledge of Microservices, EKS/Kubernetes will be an added advantage
Posted 3 days ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Title: Lead DevOps Engineer
Location: India
Job Summary
The Lead DevOps Engineer on the Cloud Engineering team will be responsible for designing, automating, and optimizing cloud infrastructure and deployment pipelines to ensure scalability, security, and operational efficiency. This role requires a strong background in infrastructure as code, CI/CD automation, cloud architecture, and system reliability. You will collaborate closely with engineering, security, and operations teams to drive innovation, improve platform resilience, and streamline software delivery. As a key technical leader, you will influence best practices, mentor engineers, and contribute to the continuous evolution of the company's cloud engineering strategy. The ideal candidate has experience managing cloud-based environments, automating infrastructure provisioning, optimizing deployment workflows, and enhancing system performance. This is an opportunity to work in a fast-paced environment where ownership, accountability, and problem-solving are highly valued.
Key Responsibilities:
Build and optimize cloud infrastructure to support scalable and high-performance applications. Develop and refine CI/CD pipelines to enhance software delivery efficiency and reliability. Automate infrastructure provisioning and configuration management to reduce manual effort and improve consistency. Troubleshoot and resolve complex system issues, ensuring high availability and resilience. Collaborate with engineering, security, and operations teams to implement best practices for deployment, monitoring, and compliance. Drive continuous improvement initiatives, adopting new tools and methodologies to enhance cloud engineering capabilities. Provide mentorship and technical leadership to support team growth and foster knowledge sharing.
Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 6+ years of experience in DevOps, Cloud Engineering, or a similar role. Strong expertise in cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Ansible. Experience with containerization technologies like Docker and orchestration tools like Kubernetes. Strong knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, CircleCI, or ArgoCD. Experience in scripting and automation using Python, Bash, or PowerShell. Expertise in monitoring and logging solutions (e.g., Prometheus, Grafana, ELK Stack, Datadog). Understanding of security best practices for cloud environments, including identity management, network security, and compliance frameworks. Strong problem-solving skills with a proactive and analytical mindset. Excellent communication and collaboration skills, with the ability to work in a team-oriented environment.
Preferred Qualifications:
Certifications in cloud platforms in any of the following: AWS Certified DevOps Engineer, Azure DevOps Engineer, Google Professional DevOps Engineer. AWS Solution Architects are preferred. Experience working in a microservices architecture and service mesh technologies. Familiarity with serverless computing frameworks. Experience leading and mentoring DevOps teams.
Why Join Us?
Work in a dynamic, innovative, and collaborative environment. Opportunity to lead and shape cloud engineering strategies. Competitive salary and benefits package. Professional development and career growth opportunities.
If you are passionate about DevOps, cloud engineering, and automation, and you are looking to make a meaningful impact, we invite you to apply for this exciting opportunity!
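To make the monitoring and automation part of the role concrete, here is a hedged Python sketch that pushes a custom metric from a deployment script to a Prometheus Pushgateway using the prometheus_client library; the gateway address and metric names are placeholders, not details from this posting.

```python
# Push a deployment metric to a Prometheus Pushgateway (addresses/names are placeholders).
import time
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

PUSHGATEWAY = "pushgateway.internal:9091"   # hypothetical endpoint

def record_deploy(duration_seconds: float, success: bool) -> None:
    registry = CollectorRegistry()
    g_duration = Gauge("deploy_duration_seconds",
                       "Wall-clock time of the last deployment", registry=registry)
    g_status = Gauge("deploy_last_success_unixtime",
                     "Unix timestamp of the last successful deployment", registry=registry)
    g_duration.set(duration_seconds)
    if success:
        g_status.set(time.time())
    push_to_gateway(PUSHGATEWAY, job="sample-service-deploy", registry=registry)

if __name__ == "__main__":
    record_deploy(duration_seconds=87.4, success=True)
```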
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Quality Automation Specialist
In this key role, you'll be undertaking and enabling automated testing activities in all delivery models. We'll look to you to support teams to develop quality solutions and enable continuous integration and assurance of defect-free deployment of customer value. You'll be working closely with feature teams and a variety of stakeholders, giving you great exposure to professional development opportunities. We're offering this role at associate vice president level.
What you'll do
Joining us in a highly collaborative role, you'll be contributing to the transformation of testing using quality processes, tools, and methodologies, significantly improving control, accuracy and integrity. You'll be making sure repeatable, constant and consistent quality is built into all phases of the idea-to-value lifecycle at reduced cost or reduced time to market. It's a chance to work with colleagues at multiple levels, and with cross-domain, domain, platform and feature teams, to build in quality as an integral part of all activities.
Additionally, you'll be:
Supporting the design of automation test strategies, modifying and maintaining scripts aligned to business or programme goals
Evolving more predictive and intelligent testing approaches, based on automation and innovative testing products and solutions
Collaborating with stakeholders and feature teams and making sure that automated testing is performed and monitored as an essential part of the planning and product delivery
Designing and creating a low-maintenance suite of stable, re-usable automated tests, which are usable both within the product or domain and across domains and systems in an end-to-end capacity
Applying testing and delivery standards by understanding the product development lifecycle along with mandatory, regulatory and compliance requirements
The skills you'll need
We're looking for someone with experience of automated testing, particularly from an Agile development or CI/CD environment. You'll be an innovative thinker who can identify opportunities and design solutions, coupled with the ability to develop complex automation code. You'll have a good understanding of Agile methodologies with experience of working in an Agile team, with the ability to relate everyday work to the strategic vision of the feature team with a strong focus on business outcomes.
We'll also look for you to have:
At least eight years of experience in end-to-end and automation testing using Selenium with Java for UI
Skilled in the Rest Assured library and able to develop BDD Cucumber scripts
Excellent skills in building CI/CD pipelines, well versed in Terraform, AWS and GitLab
Excellent communication skills with the ability to communicate complex technical concepts to management-level colleagues
Good collaboration and stakeholder management skills
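The posting asks for UI automation with Selenium (in Java) plus BDD; purely as an illustration of the pattern, here is a hedged Python Selenium sketch of a headless smoke check. The URL and element locators are invented placeholders, and the real stack for this role would be Selenium with Java and Cucumber.

```python
# Headless UI smoke-check sketch with Selenium WebDriver (Python).
# The URL and locators are hypothetical; a real suite would live in a framework
# with proper fixtures, reporting and BDD feature files.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://app.example.com/login")          # placeholder URL
    driver.find_element(By.ID, "username").send_keys("qa-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

    # Wait for a post-login landmark instead of sleeping.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "dashboard"))
    )
    assert "Dashboard" in driver.title
    print("smoke check passed")
finally:
    driver.quit()
```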
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts; documents problems in an issues log and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance effectiveness of QA strategies. What You Will Do Independently develop scalable and reliable automated tests and frameworks for testing software solutions. Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes and environments Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model. Pro-actively and collaboratively taking part in all testing related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations. What Experience You Need Bachelor's degree in a STEM major or equivalent experience 5-7 years of software testing experience Able to create and review test automation according to specifications Ability to write, debug, and troubleshoot code in Java, Springboot, TypeScript/JavaScript, HTML, CSS Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation Created test strategies and plans Led complex testing efforts or projects Participated in Sprint Planning as the Test Lead Collaborated with Product Owners, SREs, Technical Architects to define testing strategies and plans. Design and development of micro services using Java, Springboot, GCP SDKs, GKE/Kubeneties Deploy and release software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs Cloud Certification Strongly Preferred What Could Set You Apart An ability to demonstrate successful performance of our Success Profile skills, including: Attention to Detail - Define test case candidates for automation that are outside of product specifications. i.e. 
Negative Testing; Create thorough and accurate documentation of all work including status updates to summarize project highlights; validating that processes operate properly and conform to standards Automation - Automate defined test cases and test suites per project Collaboration - Collaborate with Product Owners and development team to plan and and assist with user acceptance testing; Collaborate with product owners, development leads and architects on functional and non-functional test strategies and plans Execution - Develop scalable and reliable automated tests; Develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; Specify the need for Test Data types for automated testing; Create automated tests and tests data for projects; Develop automated regression suites; Integrate automated regression tests into the CI/CD pipeline; Work with teams on E2E testing strategies and plans against multiple product integration points Quality Control - Perform defect analysis, in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; Analyzes results of functional and non-functional tests and make recommendation for improvements; Performance / Resilience: Understanding application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform. Conducting the performance and resilience testing to ensure the products meet SLAs / SLOs Quality Focus - Review test cases for complete functional coverage; Review quality section of Production Readiness Review for completeness; Recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; Ensure communications are thorough and accurate for all work documentation including status and project updates Risk Mitigation - Work with Product Owners, QE and development team leads to track and determine prioritization of defects fixes We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
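Given the Dataflow/Apache Beam and automated-testing emphasis above, here is a hedged sketch of unit-testing a Beam transform with the Python SDK's testing utilities; the transform and sample records are invented for illustration.

```python
# Unit-testing a simple Beam transform with the Python SDK's test utilities.
# The records and filtering rule are hypothetical examples.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

def keep_valid(record):
    # Keep only records with a positive amount (illustrative business rule).
    return record["amount"] > 0

def test_filter_valid_records():
    records = [
        {"id": 1, "amount": 120.0},
        {"id": 2, "amount": -5.0},
        {"id": 3, "amount": 0.0},
        {"id": 4, "amount": 42.5},
    ]
    with TestPipeline() as p:
        output = (
            p
            | beam.Create(records)
            | "FilterValid" >> beam.Filter(keep_valid)
            | "ExtractIds" >> beam.Map(lambda r: r["id"])
        )
        assert_that(output, equal_to([1, 4]))

if __name__ == "__main__":
    test_filter_valid_records()
    print("Beam transform test passed")
```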
Posted 3 days ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
About Beyond Key
We are a Microsoft Gold Partner and a Great Place to Work-certified company. "Happy Team Members, Happy Clients" is a principle we hold dear. We are an international IT consulting and software services firm committed to providing cutting-edge services and products that satisfy our clients' global needs. Our company was established in 2005, and since then we've expanded our team to more than 350+ talented, skilled software professionals. Our clients come from the United States, Canada, Europe, Australia, the Middle East, and India, and we create and design IT solutions for them. If you need any more details, you can get them at https://www.beyondkey.com/about.
Job Summary
We are seeking a skilled Cloud Engineer with extensive experience in Microsoft Azure to design, implement, and manage secure, scalable, and highly available cloud infrastructure. The ideal candidate will have expertise in Azure networking, security, Terraform, and cloud migration. The role involves working with Azure Virtual Networks, VPN Gateway, ExpressRoute, Application Gateway, and Azure Firewall to build advanced networking solutions. The Cloud Engineer will also be responsible for automating infrastructure deployment using Terraform, ensuring compliance with Azure security best practices, and migrating applications and databases to the cloud. Additionally, experience with backup and disaster recovery configurations, as well as Azure DevOps, is highly preferred. The position requires strong problem-solving skills, the ability to collaborate effectively with cross-functional teams, and the ability to work independently in a fast-paced environment.
Skills Required: Microsoft Azure, Azure networking, Azure Firewall, Application Gateway, Azure Kubernetes Service (optional), Azure PaaS and IaaS, RBAC, Terraform, basic understanding of Azure DevOps, configuration of Azure databases like Azure SQL and MySQL
Must Have Skills: Terraform, Azure CLI, Azure migration (servers and applications), networking, hybrid cloud, landing zones, and Azure Policy
Key Responsibilities
Experience in complex network architectures (Hub-Spoke, V-WAN), implementing advanced Azure networking solutions like Azure Virtual Network Peering, Azure VPN Gateway, ExpressRoute, Application Gateway, and Azure Firewall. Design, deploy, and manage secure and scalable cloud infrastructure on Microsoft Azure, adhering to industry best practices and compliance regulations. Configure robust security solutions on Azure, including Azure Active Directory, Azure Security Center, Azure Key Vault, and role-based access control (RBAC). Implement Infrastructure as Code (IaC) best practices using Terraform to automate infrastructure provisioning and configuration management. Develop and maintain Terraform scripts for deploying and managing Azure resources in a consistent and repeatable manner. Troubleshoot and resolve complex Azure networking and security issues, ensuring optimal performance and availability. Experience in backup and DR configurations. Experience in Azure Migrate. Experience in application and database migration.
Qualifications
5+ years of experience in cloud engineering, with a strong focus on Microsoft Azure. Proven experience in designing, implementing, and managing secure and scalable Azure networking solutions. In-depth knowledge of Azure networking services, including Virtual Networks, Subnets, Network Security Groups (NSGs), Azure VPN, and ExpressRoute.
Solid understanding of Azure security best practices, including Azure Active Directory, Azure Key Vault, and RBAC. Extensive experience with Terraform principles and tools. Proven ability to write clean, maintainable, and well-documented Terraform scripts.
Nice to have: Knowledge of microservice architectures (Docker, Kubernetes, etc.). Experience with scripting languages (e.g., PowerShell, Bash) is a plus. Strong understanding of cloud security principles and practices. Excellent communication, collaboration, and problem-solving skills. Ability to work independently and as part of a team.
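Since the must-have skills centre on Terraform-driven provisioning, here is a hedged Python sketch that wraps a plan-and-apply workflow around the standard Terraform CLI commands; the working directory and variable values are placeholders, and real pipelines would typically run these steps from Azure DevOps rather than an ad-hoc script.

```python
# Wrap a basic Terraform init/plan/apply workflow; paths and variables are placeholders.
import subprocess

WORKDIR = "./landing-zone"          # hypothetical Terraform root module
TFVARS = {"location": "centralindia", "environment": "dev"}

def tf(*args: str) -> None:
    subprocess.run(["terraform", *args], cwd=WORKDIR, check=True)

def plan_and_apply(auto_approve: bool = False) -> None:
    tf("init", "-input=false")
    var_flags = [f"-var={k}={v}" for k, v in TFVARS.items()]
    tf("plan", "-input=false", "-out=tfplan", *var_flags)
    if auto_approve:
        tf("apply", "-input=false", "tfplan")   # applies the saved plan
    else:
        print("Plan written to tfplan; review it before applying.")

if __name__ == "__main__":
    plan_and_apply(auto_approve=False)
```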
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts; documents problems in an issues log and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance effectiveness of QA strategies. What You Will Do Independently develop scalable and reliable automated tests and frameworks for testing software solutions. Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes and environments Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model. Pro-actively and collaboratively taking part in all testing related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations. What Experience You Need Bachelor's degree in a STEM major or equivalent experience 5-7 years of software testing experience Able to create and review test automation according to specifications Ability to write, debug, and troubleshoot code in Java, Springboot, TypeScript/JavaScript, HTML, CSS Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation Created test strategies and plans Led complex testing efforts or projects Participated in Sprint Planning as the Test Lead Collaborated with Product Owners, SREs, Technical Architects to define testing strategies and plans. Design and development of micro services using Java, Springboot, GCP SDKs, GKE/Kubeneties Deploy and release software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs Cloud Certification Strongly Preferred What Could Set You Apart An ability to demonstrate successful performance of our Success Profile skills, including: Attention to Detail - Define test case candidates for automation that are outside of product specifications. i.e. 
Negative Testing; Create thorough and accurate documentation of all work including status updates to summarize project highlights; validating that processes operate properly and conform to standards Automation - Automate defined test cases and test suites per project Collaboration - Collaborate with Product Owners and development team to plan and and assist with user acceptance testing; Collaborate with product owners, development leads and architects on functional and non-functional test strategies and plans Execution - Develop scalable and reliable automated tests; Develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; Specify the need for Test Data types for automated testing; Create automated tests and tests data for projects; Develop automated regression suites; Integrate automated regression tests into the CI/CD pipeline; Work with teams on E2E testing strategies and plans against multiple product integration points Quality Control - Perform defect analysis, in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; Analyzes results of functional and non-functional tests and make recommendation for improvements; Performance / Resilience: Understanding application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform. Conducting the performance and resilience testing to ensure the products meet SLAs / SLOs Quality Focus - Review test cases for complete functional coverage; Review quality section of Production Readiness Review for completeness; Recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; Ensure communications are thorough and accurate for all work documentation including status and project updates Risk Mitigation - Work with Product Owners, QE and development team leads to track and determine prioritization of defects fixes We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 3 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Foxit Foxit is a global software company reshaping how the world interacts with documents. With over 700 million users worldwide, we offer cutting-edge PDF, collaboration, and e-signature solutions across desktop, mobile, and cloud platforms. As we expand our SaaS and cloud-native capabilities, we're seeking a technical leader who thrives in distributed environments and can bridge the gap between development and operations at global scale. Role Overview As a Senior Development Support Engineer , you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints. Key Responsibilities Cross-Border Development Support Investigate complex, high-priority production issues inaccessible to China-based developers. Build sanitized diagnostic packages and test environments to enable effective offshore debugging. Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack. Document recurring patterns and technical solutions to improve incident response efficiency. Partner closely with China-based developers to maintain architectural alignment and system understanding. Cloud Infrastructure & DevOps Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost. Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus. Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems. Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell. Collaboration & Communication Translate production symptoms into actionable debugging tasks for teams without access to global environments. Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues. Ensure alignment with global data compliance policies (SOC2, NSD-104, GDPR) when sharing data across borders. Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders. Qualifications Technical Skills Languages: Advanced in Java and PHP (Spring Boot, YII); familiarity with JavaScript a plus. Architecture: Experience designing and optimizing backend microservices and APIs. Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB). Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus. IaC & Automation: Proficient in Terraform; scripting with Python/Bash. DevOps: Familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright). Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ. Professional Experience Minimum 6+ years of full-stack or backend development experience in high-concurrency systems. 
Strong understanding of system design, cloud infrastructure, and global software deployment practices. Experience working in global, distributed engineering teams with data privacy or access restrictions.
Preferred
Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA). Familiarity with cloud networking, CDN configuration, and cost optimization strategies. Tools experience with Postman, REST Assured, or security testing frameworks. Language: Fluency in English; Mandarin Chinese is a strong plus.
Why Foxit?
Work at the intersection of development and operations on a global scale. Be a trusted technical enabler for distributed teams facing real-world constraints. Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions. Competitive compensation, professional development programs, and a collaborative culture.
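A core duty above is building sanitized diagnostic packages so offshore developers can debug without seeing restricted data. As a rough, hedged illustration of that idea (not Foxit's actual tooling), the sketch below redacts email addresses and IPv4 addresses from log lines before they are bundled for sharing.

```python
# Redact obvious personal data (emails, IPv4 addresses) from a log file before sharing.
# Patterns and file paths are illustrative; real sanitization rules would be far stricter.
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize_line(line: str) -> str:
    line = EMAIL_RE.sub("<email-redacted>", line)
    line = IPV4_RE.sub("<ip-redacted>", line)
    return line

def sanitize_log(src: str, dst: str) -> None:
    out = [sanitize_line(l) for l in Path(src).read_text(errors="replace").splitlines()]
    Path(dst).write_text("\n".join(out) + "\n")

if __name__ == "__main__":
    sanitize_log("app.log", "app.sanitized.log")   # hypothetical file names
```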
Posted 3 days ago
4.0 - 6.0 years
18 - 22 Lacs
Bengaluru
Work from Office
We're hiring a DevOps Engineer 2 to build and manage CI/CD pipelines, automate infrastructure (IaC), monitor systems, and ensure high availability. Must have hands-on experience with AWS, Docker, Kubernetes, Git, and scripting.
Posted 3 days ago
10.0 - 20.0 years
35 - 40 Lacs
Navi Mumbai
Work from Office
Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members. Key Responsibilities: Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions. Support ongoing client projects, addressing technical challenges and ensuring smooth delivery. Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution. Review code and provide feedback to junior engineers to maintain high quality and scalable solutions. Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka. Lead by example in object-oriented development, particularly using Scala and Java. Translate complex requirements into clear, actionable technical tasks for the team. Contribute to the development of ETL processes for integrating data from various sources. Document technical approaches, best practices, and workflows for knowledge sharing within the team. Required Skills and Qualifications: 8+ years of professional experience in Big Data development and engineering. Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka. Solid object-oriented development experience with Scala and Java. Strong SQL skills with experience working with large data sets. Practical experience designing, installing, configuring, and supporting Big Data clusters. Deep understanding of ETL processes and data integration strategies. Proven experience mentoring or supporting junior engineers in a team setting. Strong problem-solving, troubleshooting, and analytical skills. Excellent communication and interpersonal skills. Preferred Qualifications: Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.). Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc). Exposure to Agile or DevOps practices in Big Data project environments. What We Offer: Opportunity to work on challenging, high-impact Big Data projects. Leadership role in shaping and mentoring the next generation of engineers. Supportive and collaborative team culture. Flexible working environment Competitive compensation and professional growth opportunities.
Posted 3 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* Core Technology Infrastructure (CTI), part of the Global Technology & Operations organization, consists of more than 6,600 employees worldwide. With a presence in more than 35 countries, CTI designs, builds and operates end-to-end technology infrastructure solutions and manages critical systems and platforms across the bank. CTI delivers industry-leading infrastructure products and services to the company’s employees, customers and clients around the world. Job Description* Terraform Software Developer – the candidate will be responsible for developing automation tools focused on Terraform Enterprise. Experience should include Terraform development and administration (back end of the platform), system administration (primarily Linux), and integration with other automation tools such as Horizon, Ansible Platform and GitHub. Understanding of SDLC processes and tools. Experience with cloud infrastructure as code, APIs, YAML, HCL, Python. The role also requires operational experience with system monitoring, incident management, and problem management. Responsibilities* Experience using Terraform. Review Bitbucket feature files and branching strategy; maintain Bitbucket branches. Evaluate Azure and AWS services and use Terraform to develop modules. Improve and optimize deployments and help deliver reliable solutions. Interact with technical leads and architects to discover solutions that help solve challenges faced by Product Engineering teams. Be part of an enriching team and solve real production engineering challenges. Improve knowledge in the areas of DevOps and Cloud Engineering by using enterprise tools and contributing to project success. Programming or scripting skills in Python/PowerShell. Any related cloud certification is nice to have.
Ensure that all system deliverables meet quality objectives in functionality, performance, stability, security, accessibility, and data quality. Provide work breakdown and estimates for tasks on agreed scope and development milestones to meet overall project timelines. Experience with the Agile/Scrum methodology. Strong verbal and written communication skills. Highly detail-oriented. Self-motivated, with the ability to work independently and as part of a team. Strong willingness and comfort in taking on and challenging development approaches. Strong analytical and communication skills; ability to work effectively with both technical and non-technical resources. Must have strong debugging and troubleshooting skills. Able to implement and maintain Continuous Integration/Delivery (CI/CD) pipelines for the services. Able to implement and maintain the automation required to improve code logistics from development to production. Assisting the team in instrumenting code for system availability. Maintaining and upgrading the deployment platforms as well as system infrastructure with Infrastructure-as-Code tools. Performing system administration and ad hoc duties. Requirements: Education* B.E. / B.Tech / M.E. / M.Tech / MCA Experience Range* 8+ years Foundational Skills* Terraform development experience; Terraform Enterprise administration/operations; Go language; Java or .NET programming knowledge; Python or shell scripting; database query development experience. Desired Skills* AWS; Change Management; Horizon tools (Ansible, Jira, Confluence, Bitbucket); CI/CD tools (GitHub, Jenkins, Artifactory); GCP; JIRA; Agile methodology; Python; PowerShell; HashiCorp Configuration Language (HCL); Infrastructure as Code (IaC); cloud integration (Azure, AWS, GCP); Linux administration; Site Reliability Engineering. Work Timings* 10:30 AM to 7:30 PM Job Location* Chennai, Hyderabad, Mumbai
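As a hedged illustration of the Terraform automation this role describes, the sketch below drives the Terraform CLI from Python, renders the plan as JSON, and flags resources scheduled for deletion. The working directory is a placeholder; a real Terraform Enterprise workflow (workspaces, Sentinel policies, API tokens) would involve more than this.

```python
# Sketch: run `terraform plan`, inspect the JSON plan, and flag deletions.
# The working directory is hypothetical; real pipelines would add auth,
# workspace selection, and fuller error handling.
import json
import subprocess

WORKDIR = "./infra"  # hypothetical Terraform configuration directory

subprocess.run(["terraform", f"-chdir={WORKDIR}", "init", "-input=false"], check=True)
subprocess.run(
    ["terraform", f"-chdir={WORKDIR}", "plan", "-input=false", "-out=tfplan"],
    check=True,
)

# `terraform show -json` emits the saved plan in a machine-readable format.
plan_json = subprocess.run(
    ["terraform", f"-chdir={WORKDIR}", "show", "-json", "tfplan"],
    check=True,
    capture_output=True,
    text=True,
).stdout

plan = json.loads(plan_json)
deletions = [
    rc["address"]
    for rc in plan.get("resource_changes", [])
    if "delete" in rc.get("change", {}).get("actions", [])
]

if deletions:
    print("Plan would destroy resources:", ", ".join(deletions))
else:
    print("No destructive changes detected.")
```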
Posted 3 days ago
2.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Site Reliability Engineering (SRE) at Equifax is a discipline that combines software and systems engineering for building and running large-scale, distributed, fault-tolerant systems. SRE ensures that internal and external services meet or exceed reliability and performance expectations while adhering to Equifax engineering principles. SRE is also an engineering approach to building and running production systems – we engineer solutions to operational problems. Our SREs are responsible for overall system operation, and we use a breadth of tools and approaches to solve a broad set of problems. We follow practices such as limiting time spent on operational work, blameless postmortems, and proactive identification and prevention of potential outages. Our SRE culture of diversity, intellectual curiosity, problem solving and openness is key to our success. Equifax brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn, grow and take pride in our work. What You’ll Do Work in a DevSecOps environment responsible for building and running large-scale, massively distributed, fault-tolerant systems. Work closely with development and operations teams to build highly available, cost-effective systems with extremely high uptime metrics. Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot. Create new tools and scripts designed for auto-remediation of incidents and for establishing end-to-end monitoring and alerting on all critical aspects. Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management. What Experience You Need BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required. 2-5 years of experience in software engineering, systems administration, database administration, and networking. 1+ years of experience developing and/or administering software in public cloud. Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives. Experience in languages such as Python, Bash, Java, Go, JavaScript and/or Node.js. Demonstrable cross-functional knowledge of systems, storage, networking, security and databases. System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible and/or containers (Docker, Kubernetes, etc.). Proficiency with continuous integration and continuous delivery tooling and practices. Cloud Certification Strongly Preferred What Could Set You Apart An ability to demonstrate successful performance of our Success Profile skills, including: DevSecOps - Uses knowledge of DevSecOps operational practices and applies engineering skills to improve resilience of products/services. Designs, codes, verifies, tests, documents, modifies programs/scripts and integrated software services. Applies agreed SRE standards and tools to achieve a well-engineered result. Operational Excellence - Prioritizes and organizes one’s own work.
Monitors and measures systems against key metrics to ensure availability of systems. Identifies new ways of working to make processes run smoother and faster. Systems Thinking - Uses knowledge of best practices and how systems integrate with others to improve their own work. Understands technology trends and uses that knowledge to identify factors that achieve the defined expectations of systems availability. Technical Communication/Presentation - Explains technical information and its impact to stakeholders and articulates the case for action. Demonstrates strong written and verbal communication skills. Troubleshooting - Applies a methodical approach to routine issue definition and resolution. Monitors actions to investigate and resolve problems in systems, processes and services. Determines problem fixes/remedies. Assists with the implementation of agreed remedies and preventative measures. Analyzes patterns and trends.
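To make the auto-remediation idea above concrete, here is a minimal sketch that probes a health endpoint and restarts the backing service when the check fails. The endpoint URL and systemd unit name are hypothetical placeholders, not Equifax specifics.

```python
# Sketch of a simple auto-remediation check: probe a health endpoint and
# restart the backing service if it does not respond. The URL and service
# name are hypothetical placeholders.
import subprocess
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical endpoint
SERVICE_NAME = "example-api"                   # hypothetical systemd unit


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


if __name__ == "__main__":
    if is_healthy(HEALTH_URL):
        print("Service healthy; no action taken.")
    else:
        print(f"Health check failed; restarting {SERVICE_NAME}.")
        subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)
```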
Posted 3 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
What you’ll do? Design, develop, and operate high-scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Independently manage project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities. What experience you need? Bachelor's degree or equivalent experience. 5+ years of software engineering experience. 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS. 5+ years of experience with cloud technology: GCP, AWS, or Azure. 5+ years of experience designing and developing cloud-native solutions. 5+ years of experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes. 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs. What could set you apart? Knowledge or experience with Apache Beam for stream and batch data processing. Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to data visualization tools or platforms.
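For the Apache Beam item above, a minimal batch pipeline sketch is shown below using the local runner; the toy input data is invented purely for illustration.

```python
# Minimal Apache Beam batch pipeline sketch (local runner, toy data):
# a classic word-count style aggregation.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "alpha", "gamma"])
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Print" >> beam.Map(print)
    )
```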
Posted 3 days ago
4.0 - 9.0 years
7 - 17 Lacs
Noida, Gurugram
Work from Office
3–6 yrs DevOps experience in software engineering. Skilled in CI/CD tools: Jenkins, Bamboo, Bitbucket Pipelines. Strong in Windows OS architecture and VMware. Proficient with Git, Bitbucket, AWS/Azure. Scripting: PowerShell, Python, Shell. Groovy scripting is a plus.
Posted 3 days ago
4.0 - 6.0 years
27 - 42 Lacs
Chennai
Work from Office
Skills – AKS, Istio service mesh, CI/CD. Shift timing – Afternoon shift. Location – Chennai, Kolkata, Bangalore. Excellent AKS, GKE, or Kubernetes admin experience. Good troubleshooting experience with Istio service mesh and connectivity issues. Experience with GitHub Actions or a similar CI/CD tool to build pipelines. Working experience on any cloud, preferably Azure or Google, with good networking knowledge. Experience in Python or shell scripting. Experience building dashboards and configuring alerts using Prometheus and Grafana.
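As a hedged sketch of the Prometheus-based alerting mentioned above, the snippet below queries the Prometheus HTTP API for an example PromQL expression and prints targets that are down. The Prometheus URL and the query are assumptions made for illustration.

```python
# Sketch: query the Prometheus HTTP API and print any targets that are down.
# The Prometheus URL and PromQL expression are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical
QUERY = 'up{job="kubernetes-nodes"}'                        # example PromQL

url = f"{PROMETHEUS_URL}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

for result in payload.get("data", {}).get("result", []):
    instance = result["metric"].get("instance", "unknown")
    value = result["value"][1]  # each result carries a [timestamp, value] pair
    if value != "1":
        print(f"Target down: {instance}")
```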
Posted 3 days ago
0.0 - 1.0 years
0 Lacs
Ahmedabad
Work from Office
Job Title: DevOps Intern Location: Ahmedabad (Work from Office) Duration: 3 to 6 Months Start Date: Immediate or As per Availability Company: FX31 Labs Role Overview: We are looking for a motivated and detail-oriented DevOps Intern to join our engineering team. As a DevOps Intern, you will assist in designing, implementing, and maintaining CI/CD pipelines, automating workflows, and supporting infrastructure deployments across development and production environments. Key Responsibilities: Assist in building and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Help in provisioning and managing cloud infrastructure (AWS, Azure, or GCP). Collaborate with developers to automate software deployment processes. Monitor and optimize system performance, availability, and reliability. Write basic scripts to automate repetitive DevOps tasks. Document internal processes, tools, and workflows. Support containerization (Docker) and orchestration (Kubernetes) initiatives. Required Skills: Basic understanding of Linux/Unix systems and shell scripting. Familiarity with version control systems like Git. Knowledge of DevOps concepts like CI/CD, Infrastructure as Code (IaC), and automation. Exposure to tools like Docker, Jenkins, Kubernetes (even theoretical understanding is a plus). Awareness of at least one cloud platform (AWS, Azure, or GCP). Strong problem-solving attitude and willingness to learn. Good to Have: Hands-on project or academic experience related to DevOps. Knowledge of Infrastructure as Code tools like Terraform or Ansible. Familiarity with monitoring tools (Grafana, Prometheus) or logging tools (ELK, Fluentd). Eligibility Criteria: Pursuing or recently completed a degree in Computer Science, IT, or related field. Available to work full-time from the Ahmedabad office for the duration of the internship. Perks: Certificate of Internship & Letter of Recommendation (on successful completion). Opportunity to work on real-time projects with mentorship. PPO opportunity for high-performing candidates. Hands-on exposure to industry-level DevOps tools and cloud platforms. About FX31 Labs: FX31 Labs is a fast-growing tech company focused on building innovative solutions in AI, data engineering, and product development. We foster a learning-rich environment and aim to empower individuals through hands-on experience in real-world projects.
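For a sense of the "basic scripts to automate repetitive DevOps tasks" mentioned above, here is a minimal sketch that checks disk usage on a few mount points; the paths and threshold are examples only.

```python
# Sketch of a basic automation task: warn when any monitored mount point
# exceeds a disk-usage threshold. Mount points and threshold are examples.
import shutil

MOUNT_POINTS = ["/", "/var"]   # example mount points
THRESHOLD_PERCENT = 80

for mount in MOUNT_POINTS:
    usage = shutil.disk_usage(mount)
    percent_used = usage.used / usage.total * 100
    status = "WARN" if percent_used >= THRESHOLD_PERCENT else "OK"
    print(f"{status} {mount}: {percent_used:.1f}% used")
```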
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Banyan Software provides the best permanent home for successful enterprise software companies, their employees, and customers. We are on a mission to acquire, build and grow great enterprise software businesses all over the world that have dominant positions in niche vertical markets. In recent years, Banyan was named the #1 fastest-growing private software company in the US on the Inc. 5000 and among the top 10 fastest-growing companies on the Deloitte Technology Fast 500. Founded in 2016 with a permanent capital base set up to preserve the legacy of founders, Banyan focuses on a buy-and-hold-for-life strategy for growing software companies that serve specialized vertical markets. About Campus Café Our student information system is an integrated SIS that manages the entire student life cycle, including Admissions, Student Services, Business Office, Financial Aid, Alumni Development and Career Tracking functions. Our SIS is a single-database student information system that allows clients to manage marketing, recruitment, applications, course registration, billing, transcripts, financial aid, career tracking, alumni development, fundraising, student attendance and class rosters. It allows real-time access to data that is more accurate and available when our users need it. Our SaaS model means clients don’t need to build and maintain an expensive and complex IT infrastructure. Our APIs and custom integrations will keep all their data in sync and accessible in real time. Since the database is fully integrated, everything is updated in real time and there’s no waiting for information. Position Overview We are looking for a versatile System Administrator / DevOps Engineer to support and enhance our Azure-hosted infrastructure, running Java applications on Tomcat, backed by Microsoft SQL Server on Windows servers. The ideal candidate will have a solid background in Windows system administration, hands-on experience with Azure services, and a DevOps mindset focused on automation, reliability, and performance. Key Responsibilities Manage and maintain Windows Server environments hosted in Azure. Support the deployment, configuration, and monitoring of Java applications running on Apache Tomcat. Administer Microsoft SQL Server, including performance tuning, backups, and availability in Azure. Automate infrastructure tasks, such as Java and Tomcat upgrades, using PowerShell, Azure CLI, or Azure Automation. Build and maintain CI/CD pipelines for Java-based applications using tools such as Jenkins or GitHub Actions. Manage and monitor Azure resources: Virtual Machines, Azure SQL, App Services, Azure Monitor, and Networking (App Gateway, Firewall, VNets, NSGs, VPN). Implement and monitor backup, recovery, and security policies within the Azure environment. Collaborate with development and operations teams to optimize deployment strategies and system performance. Troubleshoot issues across systems, applications, and cloud services. Required Skills & Experience 3+ years of experience in system administration or DevOps, with a focus on Windows environments. Experience deploying and managing Java applications on Tomcat. Strong knowledge of Microsoft SQL Server (on-prem and/or Azure-hosted). Solid experience with Azure IaaS and PaaS services (e.g., Azure VMs, Azure SQL, Azure Monitor, Azure Storage). Proficiency in scripting and automation (PowerShell, Azure CLI, or similar). Familiarity with CI/CD tools such as Azure DevOps, Jenkins, or GitHub Actions.
Understanding of networking, security groups, and VPNs in a cloud context. Preferred Skills Experience with Azure Infrastructure as Code (e.g., ARM templates, Bicep, or Terraform). Familiarity with Azure Active Directory, RBAC, and Identity & Access Management. Experience with containerization (Docker) and/or orchestration (AKS) is a plus. Microsoft Azure certifications (AZ-104, AZ-400) or equivalent experience. Diversity, Equity, Inclusion & Equal Employment Opportunity at Banyan: Banyan affirms that inequality is detrimental to our Global Teams, associates, our Operating Companies, and the communities we serve. As a collective, our goal is to impact lasting change through our actions. Together, we unite for equality and equity. Banyan is committed to equal employment opportunities regardless of any protected characteristic, including race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, or protected veteran status and will not discriminate against anyone on the basis of a disability. We support an inclusive workplace where associates excel based on personal merit, qualifications, experience, ability, and job performance.
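As an illustration of scripting against Azure from Python, one of several automation approaches this role mentions, the sketch below wraps the Azure CLI to list virtual machines. It assumes `az login` has already been performed; resource names are whatever the subscription actually contains.

```python
# Sketch: list Azure VMs via the Azure CLI from Python and report their
# locations. Assumes the Azure CLI is installed and already authenticated.
import json
import subprocess

result = subprocess.run(
    ["az", "vm", "list", "--output", "json"],
    check=True,
    capture_output=True,
    text=True,
)

for vm in json.loads(result.stdout):
    print(f"{vm['name']} ({vm['location']})")
```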
Posted 3 days ago
3.0 - 5.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Description: We are seeking a highly motivated and skilled DevOps Engineer to join our Infrastructure team. The ideal candidate will have a solid background in DevOps practices, automation, CI/CD, and cloud technologies, along with hands-on experience in managing both on-premises and cloud environments. This role is crucial to support and optimize our infrastructure, monitoring, and deployment processes across development and production systems. Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins or GitLab CI. Develop automation scripts using Python, Bash, or similar scripting languages. Manage infrastructure as code using tools like Terraform and Ansible. Deploy, monitor, and maintain on-premise DevOps solutions, including Zabbix, log management tools, and internal services. Ensure uptime and performance of systems through proactive monitoring and incident management practices. Administer and support containerization platforms like Docker, OpenShift, and Kubernetes. Collaborate with development and QA teams to streamline deployment and release processes. Maintain and monitor cloud environments (AWS, Azure, GCP) and ensure cost-effective, secure, and scalable infrastructure. Participate in root cause analysis, implement corrective actions, and document incidents and fixes. Ensure systems comply with internal security standards and external regulations. Required Qualifications: Education: Bachelor's or Master's degree (Bac+5 or higher) in Computer Science, Information Technology, or a related field. Experience: 3 to 4 years of hands-on experience in DevOps engineering and infrastructure automation. Technical Skills: Strong scripting experience in Python, Bash, or similar. Proficiency with CI/CD tools: Jenkins, GitLab CI, etc. Solid experience with Terraform, Ansible, and infrastructure-as-code best practices. In-depth understanding of cloud platforms (AWS, Azure, GCP). Good knowledge of containerization and orchestration tools (Docker, Kubernetes). Experience in monitoring and logging tools: Zabbix, Prometheus, Grafana, ELK/EFK stacks, etc. Familiarity with incident management workflows and log aggregation tools. Strong troubleshooting and problem-solving skills related to network, server, and application issues. Preferred (Good to Have): Cloud certifications (AWS Certified Solutions Architect, Azure Administrator, GCP Associate Engineer, etc.). Understanding of network security and compliance frameworks. Experience working in Agile environments and participating in sprint ceremonies. Proficiency in technical English for documentation and collaboration. Interested candidates can reach us at careers.tag@techaffinity.com
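To illustrate the kind of monitoring-oriented automation script this role calls for, here is a minimal sketch that counts ERROR lines in an application log and signals the result through its exit code; the log path and threshold are hypothetical.

```python
# Sketch of a small operations script: count ERROR lines in an application
# log and exit non-zero if the count crosses a threshold, so a scheduler or
# monitoring agent can alert on it. Log path and threshold are examples.
import sys
from pathlib import Path

LOG_PATH = Path("/var/log/app/application.log")  # hypothetical log file
THRESHOLD = 10

error_count = sum(
    1 for line in LOG_PATH.read_text(errors="ignore").splitlines() if "ERROR" in line
)

print(f"Found {error_count} ERROR lines in {LOG_PATH}")
sys.exit(1 if error_count > THRESHOLD else 0)
```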
Posted 3 days ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
Hiring is concentrated in cities with a strong tech presence, where demand for Terraform professionals is high.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!