
17543 Terraform Jobs - Page 19

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job hiring for LXME (Laxmi). Job Title: Junior Backend Developer. Company: LXME. Location: Mumbai (Onsite Role). Experience: 1-3 years. Must have: a. 1-3 years of work experience b. Strong background with Java or GoLang.

Tech Stack: Languages: Golang, Java. Cloud: AWS (S3, EC2, ECS, RDS, Lambda, API Gateway, SQS, SNS). Databases: PostgreSQL, Redis. DevOps & Infra: Docker, ECS, Terraform, Bitbucket Pipelines. Monitoring Tools: New Relic, AWS CloudWatch.

Responsibilities: Design, build, and maintain scalable and high-performing backend services using GoLang or Java. Drive end-to-end architecture, design, and implementation of backend systems. Champion clean code practices, robust system design, and performance optimization. Collaborate closely with cross-functional teams including Product, QA, and DevOps. Set up and manage CI/CD pipelines, infrastructure-as-code, and deployment workflows. Monitor and enhance application performance, system reliability, and latency. Implement comprehensive API and infrastructure monitoring, alerting, and logging. Work with both SQL and NoSQL databases to optimize data storage and access. Influence and shape engineering best practices, standards, and team processes.

Requirements: 1-3 years of hands-on backend development experience using Golang or Java. Deep understanding of RESTful APIs, system design, and microservices architecture. Experience with AWS, GCP, or Azure cloud services and container-based deployments. Experience with CI/CD tools, Git workflows, and infrastructure automation. Willingness to learn from senior engineers, take feedback, and work toward continuous improvement. Experience with or knowledge of database design, query tuning, and caching strategies. A mindset focused on automation, efficiency, and scalability. Proven debugging and performance-tuning skills. Excellent written and verbal communication skills and strong documentation habits.

Nice to Have: Background in fintech, payments, or investment platforms. Experience with advanced concurrency and performance optimization. Familiarity with event-driven architectures and message brokers (Kafka, RabbitMQ). Knowledge of security best practices in backend development.
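For orientation, the posting's SQS/SNS plus Terraform stack maps onto a classic fan-out pattern; a minimal sketch is below, where every name, the region, and the queue settings are illustrative assumptions rather than details from the job:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # Mumbai; illustrative choice
}

# Queue that backend workers poll for asynchronous jobs
resource "aws_sqs_queue" "orders" {
  name                       = "orders-queue"
  visibility_timeout_seconds = 60
}

# Topic that publishes domain events to interested consumers
resource "aws_sns_topic" "order_events" {
  name = "order-events"
}

# Fan events out from the topic into the queue
resource "aws_sns_topic_subscription" "orders_sub" {
  topic_arn = aws_sns_topic.order_events.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.orders.arn
}
```

Note that real delivery also requires an aws_sqs_queue_policy granting the topic sqs:SendMessage on the queue; it is omitted here for brevity.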

Posted 4 days ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Greetings from Cygnet One! Please find the job description as below: Designation: Senior / Lead - Azure DevOps Experience: 6+ Years Work Location: Ahmedabad Work Mode: Work from Office Job Description: Design, implement, and manage scalable, secure, and resilient cloud infrastructure on Microsoft Azure. Develop and maintain infrastructure as code using Terraform for repeatable and automated provisioning. Manage and optimize Kubernetes clusters on Azure Kubernetes Service (AKS) including scaling, monitoring, and troubleshooting. Deploy and manage Helm charts for application packaging and release management. Implement and manage Git-based CI/CD pipelines to automate build, test, and deployment workflows. Utilize ArgoCD for GitOps-based deployment and configuration management. Skill Set: Minimum 7 years of experience in DevOps or related roles in enterprise-scale environments. Deep expertise with Azure Cloud Services, including VNETs, NSGs, Load Balancers, Application Gateways, AKS, Key Vaults, etc. Proficient with Terraform for infrastructure provisioning and automation. Strong experience with AKS, including Helm for chart management and lifecycle. Hands-on experience with ArgoCD, GitOps workflows, and continuous deployment strategies. Proficient in Git, YAML, scripting (Bash, PowerShell, or Python). Solid understanding of networking concepts including DNS, TCP/IP, VPNs, subnets, routing, and firewalls. Strong troubleshooting and problem-solving skills in production environments. Experience working with high-availability and distributed systems at scale. Familiarity with observability tools (e.g., Prometheus, Grafana, Azure Monitor, etc.) Monitor system performance and proactively troubleshoot issues across networking, infrastructure, and application layers. Collaborate with software engineering, QA, and architecture teams to align infrastructure solutions with business needs. Advocate for DevOps best practices and mentor junior engineers in the team. Ensure high availability, disaster recovery, and security compliance of critical infrastructure.
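As a rough illustration of the Terraform-plus-AKS provisioning this role centres on, here is a minimal azurerm sketch; the resource names, region, and node sizing are assumptions, not details from the posting:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "platform" {
  name     = "rg-platform-dev" # illustrative name
  location = "Central India"
}

# Minimal AKS cluster; Helm releases and ArgoCD would be layered on top
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-platform-dev"
  location            = azurerm_resource_group.platform.location
  resource_group_name = azurerm_resource_group.platform.name
  dns_prefix          = "platformdev"

  default_node_pool {
    name       = "system"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```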

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role: Onsite Opportunity at Abu Dhabi. We’re looking for an experienced Cloud Architect to lead the design, implementation, and optimization of scalable, secure, and resilient cloud solutions. You’ll play a critical role in assessing our current systems, shaping cloud strategies, managing migrations, and ensuring cloud environments are cost-effective, compliant, and high-performing.

Key Responsibilities: Assess infrastructure, applications, and business needs to define optimal cloud architecture. Design scalable and reliable cloud-native solutions using AWS, Azure, or GCP. Lead end-to-end cloud migration projects, from planning to execution and optimization. Implement robust security controls (IAM, encryption, MFA, IDS/IPS) and ensure compliance (e.g., GDPR, HIPAA). Develop integration strategies for hybrid cloud environments, including SSO, APIs, and data pipelines. Set up monitoring frameworks, analyze performance data, and optimize resource usage. Design and test disaster recovery and business continuity plans. Collaborate with stakeholders to align cloud initiatives with business goals. Document architecture and provide mentorship to junior team members. Participate in agile ceremonies and contribute to cloud governance and cost management.

Qualifications: 5+ years in cloud architecture or infrastructure roles. Strong expertise in AWS, Azure, or GCP. Experience with IaC (Terraform, CloudFormation), CI/CD, and containerization (Docker, Kubernetes). Cloud certifications preferred (e.g., AWS Solutions Architect, Azure Expert). Strong communication and stakeholder management skills.
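Since the role stresses reproducible IaC, a typical starting point is Terraform remote state with locking, sketched below; the bucket, table, key, and region are illustrative assumptions:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"              # illustrative bucket
    key            = "cloud-architecture/prod/terraform.tfstate"
    region         = "me-central-1"                         # UAE region; illustrative
    dynamodb_table = "terraform-locks"                      # table with a LockID string hash key
    encrypt        = true                                   # encrypt state at rest
  }
}
```

Keeping state remote and locked lets multiple architects and pipelines plan/apply safely against the same environments.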

Posted 4 days ago

Apply

6.0 years

0 Lacs

India

Remote

🧑‍💻 Experience: 2–6 Years 🏢 Company: Remotohire

🔍 About Us: Remotohire is a remote-first tech talent company that helps global teams build scalable, cloud-native, and AI-powered applications. We work across industries and platforms, pushing the edge of modern web technologies.

💼 Role Overview: We’re hiring a Front-End Developer with strong experience in React, TypeScript, monorepo architectures (TurboRepo), and cloud deployment on AWS. You’ll be part of a team building robust, scalable, and beautiful interfaces for modern web applications.

🧠 What You’ll Work On: Building high-performance UIs using React v18+ and TypeScript. Managing scalable front-end codebases in a monorepo (TurboRepo/Nx). Integrating APIs (REST/GraphQL) with a deep awareness of Prisma model changes. Working alongside backend teams to sync shared types and schemas. Deploying SPAs via AWS S3/CloudFront/Amplify. Ensuring accessibility (a11y), responsiveness, and pixel-perfect designs (Figma to code). Writing unit and E2E tests with Jest, React Testing Library, Cypress/Playwright.

✅ You Should Have: 2–6 years of experience in React.js with modern patterns (hooks, context, suspense). Strong command of TypeScript (no any, please). Experience with component libraries like MUI, Chakra, or Shadcn. Solid understanding of state management (React Query, Redux Toolkit). Knowledge of API consumption, Prisma awareness, and OpenAPI. Exposure to CI/CD pipelines and version control in a monorepo. Ability to debug deployment issues, with infrastructure awareness (DNS, S3, CloudFront).

🌟 Bonus Points: Familiarity with Storybook, React Native, or Tailwind CSS. Experience with shared TypeScript types across front-end/backend (codegen, tRPC). Ability to read/edit Terraform/CloudFormation. A good eye for design and user experience.

📬 Ready to Apply? Submit your application at: 🔗 https://www.oxcytech.com/developer-application We look forward to building something exceptional with you!

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

India

On-site

We are hiring on behalf of a leading Indian unicorn looking for a talented DevOps Engineer with 1-5 years of experience. In this role, you will be the backbone of the engineering team, responsible for building and maintaining a scalable, reliable, and secure cloud infrastructure. If you are passionate about automation, infrastructure as code, and building robust CI/CD pipelines, this is your opportunity to make a massive impact.

What You'll Do (Your Responsibilities): Cloud Infrastructure: Design, build, and manage scalable and secure cloud infrastructure on AWS, GCP, or Azure. Automation: Implement and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to automate testing and deployment processes. Containerization: Utilize containerization and orchestration tools (Docker, Kubernetes) to manage microservices architecture effectively. Infrastructure as Code (IaC): Champion and implement IaC practices using tools like Terraform or CloudFormation to ensure infrastructure is versioned and reproducible. Monitoring & Reliability: Monitor application performance and infrastructure health, ensuring high availability, reliability, and rapid incident response.

What We're Looking For (Your Qualifications): Experience: 1-5 years of hands-on experience in a DevOps, SRE, or Cloud Engineering role. Core Skills: Strong experience with at least one major cloud provider (AWS preferred), Docker, Kubernetes, and CI/CD tools. IaC: Proficiency with Infrastructure as Code tools like Terraform or CloudFormation. Scripting: Strong scripting skills in languages like Bash, Python, or Go. Systems Knowledge: Solid understanding of Linux/Unix administration, networking concepts, and security best practices. Proactive Mindset: A proactive approach to identifying and resolving potential issues before they impact production.

Our Unique Application Process: To fast-track your application directly to hiring managers, we use a two-step process: 1. Submit Your Resume: On the portal. 2. AI-Powered Interview: You will be invited to a short, recorded video interview. This is your chance to showcase your skills and personality beyond your resume.
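For the "versioned and reproducible" IaC practice the posting describes, teams often compose community modules rather than hand-rolling every resource. A minimal sketch using the public terraform-aws-modules VPC module follows; the CIDRs, AZs, and names are illustrative assumptions:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin module versions for reproducibility

  name = "app-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["ap-south-1a", "ap-south-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true # lets private subnets reach the internet
}

# Downstream stacks (EKS, RDS, etc.) consume these IDs
output "private_subnet_ids" {
  value = module.vpc.private_subnets
}
```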

Posted 4 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary. Job title: Azure Cloud Security Engineer (Senior Consultant).

About: At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks, and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte’s clients’ most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions.

The Team: Cyber & Strategic Risk. We help organizations create a cyber-minded culture, reimagine risk to uncover strategic opportunities, and become faster, more innovative, and more resilient in the face of ever-changing threats. We provide intelligence and acuity that dynamically reframes risk, transcending a manual, reactive paradigm. The cyber risk services Identity & Access Management (IAM) practice helps organizations in designing, developing, and implementing industry-leading IAM solutions to protect their information and confidential data, as well as help them build their businesses and supporting technologies to be more secure, vigilant, and resilient. The IAM team delivers service to clients through the following key areas: user provisioning, access certification, access management and federation, entitlements management.

Work you’ll do: As a Cloud Security Engineer, you will be at the front lines with our clients supporting them with their Cloud Cyber Risk needs: Executing on cloud security engagements across the lifecycle: assessment, strategy, design, implementation, and operations. Performing technical health checks for cloud platforms/environments prior to broader deployments. Assisting in the selection and tailoring of approaches, methods, and tools to support cloud adoption, including for migration of existing workloads to a cloud vendor. Designing and developing cloud-specific security policies, standards, and procedures, e.g., user account management (SSO, SAML), password/key management, tenant management, firewall management, virtual network access controls, VPN/SSL/IPSec, security incident and event management (SIEM), data protection (DLP, encryption).
Documenting all technical issues, analysis, client communication, and resolution. Supporting proof of concept and production deployments of cloud technologies. Assisting clients with transitions to cloud via tenant setup, log processing setup, policy configuration, agent deployment, and reporting. Operating across both technical and management leadership capacities. Providing internal technical training to Advisory personnel as needed. Performing cloud orchestration and automation (Continuous Integration and Continuous Delivery (CI/CD)) in single and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt, etc. Experience with multiple security technologies like CSPM, CWPP, WAF, CASB, IAM, SIEM, etc.

Required Skills: 4+ years of information technology and/or information security operations experience. Ideally 2+ years of working with different cloud platforms (SaaS, PaaS, and IaaS) and environments (public, private, hybrid). Familiarity with the following will be considered a plus: Solid understanding of enterprise-level directory and system configuration services (Active Directory, SCCM, LDAP, Exchange, SharePoint, M365) and how these integrate with cloud platforms. Solid understanding of cloud security industry standards such as Cloud Security Alliance (CSA), ISO/IEC 27017, and NIST CSF, and how they help in compliance for cloud providers and cloud customers. Hands-on technical experience implementing security solutions for Microsoft Azure. Knowledge of cloud orchestration and automation (CI/CD) in single and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt, etc. Knowledge of cloud access security broker (CASB) and cloud workload protection platform (CWPP) technologies. Solid understanding of the OSI Model, the TCP/IP protocol suite, and network segmentation principles, and how these can be applied on cloud platforms.

Preferred: Previous Consulting or Big 4 experience. Hands-on experience with Azure, plus any CASB or CWPP product or service. Understanding of Infrastructure-as-Code, and ability to create scripts using Terraform, ARM, Ansible, etc. Knowledge of scripting languages (PowerShell, JSON, .NET, Python, JavaScript, etc.)

Qualification: Bachelor’s Degree required, ideally in Computer Science, Cyber Security, Information Security, Engineering, or Information Technology.

How You’ll Grow: At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India.

Deloitte’s culture: Our positive and supportive culture encourages our people to do their best work every day.
We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with Deloitte’s clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success. Check out recruiting tips from Deloitte recruiters . Benefits We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family’s well-being needs. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you . Our people and culture Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals’ career journeys and be inspired by their stories. Professional development You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people . © 2023. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Requisition code: 306468

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you'll do: Manage system(s) uptime across cloud-native (AWS, GCP) and hybrid architectures. Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains. Build automated tooling to deploy service requests to push a change into production. Build runbooks that are comprehensive and detailed to manage, detect, remediate, and restore services. Solve problems and triage complex distributed architecture service maps. Be on call for high-severity application incidents and improve runbooks to reduce MTTR. Lead blameless postmortems for availability incidents and own the call to action to remediate recurrences.

What experience you need: BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required. 5-7 years of experience in software engineering, systems administration, database administration, and networking. 2+ years of experience developing and/or administering software in public cloud. Cloud certification strongly preferred. Proficiency with continuous integration and continuous delivery tooling and practices. System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.). Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases. Experience in languages such as Python, Bash, Java, Go, JavaScript, and/or Node.js. Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.

What could set you apart: You have expertise designing, analyzing, and troubleshooting large-scale distributed systems. You take a systems problem-solving approach, coupled with strong communication skills and a sense of ownership and drive. Kubernetes (CKA, CKAD) or cloud certifications. You are passionate about automation, with a desire to eliminate toil whenever possible. You’ve built software or maintained systems in a highly secure, regulated, or compliant industry. You thrive in, and have experience and passion for, working within a DevOps culture and as part of a team. BS in Computer Science or related field. 2+ years of experience developing and/or administering software in public cloud. 5+ years of programming experience (Python, Bash/Shell Script, Java, Go, etc.). 3+ years of experience monitoring infrastructure and application performance. 5+ years of system administration experience, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.). 5+ years of experience working with continuous integration and continuous delivery tooling and practices.

Kubernetes: Design, deploy, and manage production-ready Kubernetes clusters. Cloud Infrastructure: Build and maintain scalable infrastructure on GCP using tools like Terraform. Performance: Identify and resolve performance bottlenecks in applications and infrastructure. Observability: Implement monitoring and logging to proactively detect and resolve issues. Incident Response: Participate in on-call rotations, troubleshooting and resolving production incidents. Collaboration: Promote reliability best practices and ensure smooth deployments.
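As a sketch of the GCP "IaC patterns" such an SRE role would build, here is a minimal Terraform definition of a GKE cluster with a separately managed node pool; the project ID, names, region, and machine type are assumptions:

```hcl
provider "google" {
  project = "example-project-id" # illustrative project
  region  = "asia-south1"
}

resource "google_container_cluster" "primary" {
  name     = "sre-demo-cluster"
  location = "asia-south1"

  # Manage node pools separately from the cluster definition
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "default" {
  name       = "default-pool"
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 2

  node_config {
    machine_type = "e2-standard-4"
  }
}
```

Separating the node pool from the cluster lets you resize or replace nodes without recreating the control plane, a common reliability-minded pattern.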
Automation: Build CI/CD pipelines, automated tooling, and runbooks. Problem Solving: Triage complex issues, lead blameless postmortems, and drive remediation. Mentorship: Guide and mentor other SREs.

Posted 4 days ago

Apply

5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What You Will Do: Independently develop scalable and reliable automated tests and frameworks for testing software solutions. Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments. Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model. Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need: Bachelor's degree in a STEM major or equivalent experience. 5-7 years of software testing experience. Able to create and review test automation according to specifications. Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS. Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation. Created test strategies and plans. Led complex testing efforts or projects. Participated in Sprint Planning as the Test Lead. Collaborated with Product Owners, SREs, and Technical Architects to define testing strategies and plans. Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes. Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm charts, and Terraform constructs. Cloud certification strongly preferred.

What Could Set You Apart: An ability to demonstrate successful performance of our Success Profile skills, including: Attention to Detail - Define test case candidates for automation that are outside of product specifications, i.e.
Negative Testing; create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards. Automation - Automate defined test cases and test suites per project. Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans. Execution - Develop scalable and reliable automated tests; develop performance testing scripts to assure products adhere to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points. Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; analyze results of functional and non-functional tests and make recommendations for improvements. Performance / Resilience - Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure the products meet SLAs/SLOs. Quality Focus - Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates. Risk Mitigation - Work with Product Owners, QE, and development team leads to track and determine prioritization of defect fixes.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you’ll do? Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage individual project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.

What experience you need? Bachelor's degree or equivalent experience. 5+ years of software engineering experience. 5+ years experience writing, debugging, and troubleshooting code in Java & SQL. 2+ years experience with Cloud technology: GCP, AWS, or Azure. 2+ years experience designing and developing cloud-native solutions. 2+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes. 3+ years experience deploying and releasing software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs.

What could set you apart? Knowledge or experience with Apache Beam for stream and batch data processing. Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to data visualization tools or platforms.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, including installation, performance tuning, and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement scalable solutions while managing multiple customer groups.

What You Will Do: Support large-scale enterprise data solutions with a focus on high availability, low latency, and scalability. Provide documentation and automation capabilities for Disaster Recovery as part of application deployment. Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains. Knowledge of the configuration of monitoring solutions and the creation of dashboards (DPA, DataDog, Big Panda, Prometheus, Grafana, Log Analytics, ChaosSearch).

What Experience You Need: BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required. 2-5 years of experience in database administration, system administration, performance tuning, and automation. 1+ years of experience developing and/or administering software in public cloud. Experience in managing traditional databases like SQL Server/Oracle/Postgres/MySQL and providing 24x7 support. Experience in implementing and managing Infrastructure as Code (e.g., Terraform, Python, Chef) and source code repositories (GitHub). Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases. Experience in designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using Cloud Native GCP, Java, Python, Scala, SQL, etc. Proficiency with continuous integration and continuous delivery tooling and practices. Cloud certification strongly preferred.

What Could Set You Apart: An ability to demonstrate successful performance of our Success Profile skills, including: Automation - Uses knowledge of best practices in coding to build pipelines for build, test, and deployment of processes/components; understands technology trends and uses that knowledge to identify factors that can be used to automate system/process deployments. Data / Database Management - Uses knowledge of database operations and applies engineering skills to improve the resilience of products/services; designs, codes, verifies, tests, documents, and modifies programs/scripts and integrated software services; applies industry best standards and tools to achieve a well-engineered result.
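To illustrate the database-focused IaC this posting mentions, here is a minimal Terraform sketch of a highly available PostgreSQL RDS instance; the identifier, sizing, and retention values are illustrative assumptions:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # supply via TF_VAR_db_password; never hard-code secrets
}

resource "aws_db_instance" "postgres" {
  identifier        = "app-postgres-dev" # illustrative name
  engine            = "postgres"
  engine_version    = "15"
  instance_class    = "db.t3.medium"
  allocated_storage = 50

  username = "app_admin"
  password = var.db_password

  multi_az                = true # standby replica in a second AZ for HA
  backup_retention_period = 7    # daily automated snapshots, kept a week, for DR
  skip_final_snapshot     = true # dev convenience; require a final snapshot in prod
}
```

Codifying HA and backup settings this way makes the DR posture reviewable and reproducible across environments.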
Operational Excellence - Prioritizes and organizes own work; Monitors and measures systems against key metrics to ensure availability of systems; Identifies new ways of working to make processes run smoother and faster Technical Communication/Presentation - Explains technical information and the impacts to stakeholders and articulates the case for action; Demonstrates strong written and verbal communication skills Troubleshooting - Applies a methodical approach to routine issue definition and resolution; Monitors actions to investigate and resolve problems in systems, processes and services; Determines problem fixes/remedies. Assists with the implementation of agreed remedies and preventative measures; Analyzes patterns and trends

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do: Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage individual project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.

What Experience You Need: Bachelor's degree or equivalent experience. 5+ years of software engineering experience. 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS. 5+ years experience with Cloud technology: GCP, AWS, or Azure. 5+ years experience designing and developing cloud-native solutions. 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes. 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs.

What could set you apart: Experience with backend technologies such as Java/J2EE, SpringBoot, SOA, and microservices. Source code control management systems (e.g., Git, GitHub) and build tools like Maven & Gradle. Relational databases (e.g., SQL Server, Oracle). Atlassian tooling (e.g., JIRA, Confluence) and GitHub. Developing with modern JDK (v1.8+). Automated testing: JUnit, SoapUI.

Posted 4 days ago

Apply

9.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description. Your Impact: The Specialist will bring hands-on technological expertise, passion, and innovation to the table, and will be responsible for designing and enabling application support and for handling production farms and various infrastructure platforms for different delivery teams. In the capacity of a subject matter expert, you will be responsible, as a systems architect, for designing and building scalable and efficient infrastructure platforms. At the same time, specialists will also be responsible for establishing best practices, cultivating thought leadership, and developing common practices/solutions on infrastructure.

Qualifications. Your Skills & Experience: 9 to 13 years of experience in DevOps with a Bachelor's in Engineering/Technology or a Master's in Engineering/Computer Applications. Expertise in DevOps & Cloud tools: Cloud (AWS); Version Control (Git, GitLab, GitHub). Hands-on experience in container infrastructure (Docker, Kubernetes, hosted solutions). Ability to define container-based environment topology following principles of designing a well-architected framework. Able to design and implement advanced aspects using Service Mesh technologies like Istio, Linkerd, Kuma, etc. Infrastructure automation (Chef/Puppet/Ansible, Terraform, ARM, CloudFormation). Build tools (Ant, Maven, Make, Gradle). Artifact repositories (Nexus, JFrog Artifactory). CI/CD tools on-premises/cloud (Jenkins, TeamCity). Monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics; hosted tools such as ELK, EFK, Splunk, Prometheus; OWASP, SAST, and DAST). Scripting languages: Python, Ant, Bash, and Shell. Hands-on experience in designing pipelines and pipelines as code. Hands-on experience in end-to-end deployment processes and strategy. Good exposure to tools and technologies used in building a container-based infrastructure. Hands-on experience with GCP/AWS/Azure with a good understanding of compute, networks, IAM, security, and integration services, and production knowledge of implementing strategies for reliability requirements, ensuring business continuity, meeting performance objectives, security requirements and controls, deployment strategies for business requirements, cost optimization, etc. Responsible for managing installation, configuration, automation, performance, monitoring, capacity planning, and availability management of various servers and databases. An expert in automation skills. Knowledge of load balancing and CDN options provided by multiple cloud vendors (e.g., Load Balancer and Application Gateway in Azure; ELB and ALB in AWS). Good knowledge of network algorithms for failover and availability. Capability to write complex code, e.g., automation of recurring/mundane tasks and OS administration (CPU, memory, network performance troubleshooting); demonstrates strong troubleshooting skills. Demonstrates HA/DR design on a cloud platform as per SLAs/RTO/RPO. Good knowledge of migration tools available from cloud vendors and independent providers.

Set Yourself Apart With: The capability of estimating the setup time required for infrastructure and build & release activities. Good working knowledge of the Linux operating system. Skill development, knowledge base creation, and toolset optimization of the Practice. Handling Content Delivery Networks and performing root cause analysis. Understanding of any one DBMS like MySQL or Oracle, or NoSQL like Cassandra, MongoDB, etc. Capacity planning and infrastructure estimations.
Working understanding of scripting in any one of the languages: BASH/Python/Perl/Ruby Certification in any cloud (Architect or Professional) Additional Information Gender-Neutral Policy 18 paid holidays throughout the year. Generous parental leave and new parent transition program Flexible work arrangements Employee Assistance Programs to help you in wellness and well being Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.

Posted 4 days ago

Apply

0.0 - 2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are looking for a talented Backend Developer to join our dynamic team at Max Healthcare in India and help build scalable, high-performance mobile applications. Requirements: 0-2 years of software engineering experience. Strong problem-solving skills and the ability to work in a dynamic, fast-paced environment. Full stack development experience, with a focus on back-end technologies (80/20 split). We mostly write in Node.js but are flexible in our approach. In-depth knowledge of AWS. Exposure to Terraform is a plus. Excellent communication and teamwork skills.

Posted 4 days ago

Apply

7.0 - 9.0 years

0 - 0 Lacs

Pune, Mumbai City

Remote

Job Description: We are seeking a skilled Data Engineer with 7+ years of experience in data processing, ETL pipelines, and cloud-based data solutions. The ideal candidate will have strong expertise in AWS Glue, Redshift, S3, EMR, and Lambda, with hands-on experience using Python and PySpark for large-scale data transformations. The candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. Additionally, the candidate needs strong expertise in Terraform and Git-based CI/CD pipelines to support infrastructure automation and configuration management.

Key Responsibilities: ETL Development & Automation: Design and implement ETL pipelines using AWS Glue and PySpark to transform raw data into consumable formats. Automate data processing workflows using AWS Lambda and Step Functions. Data Integration & Storage: Integrate and ingest data from various sources into Amazon S3 and Redshift. Optimize Redshift for query performance and cost efficiency. Data Processing & Analytics: Use AWS EMR and PySpark for large-scale data processing and complex transformations. Build and manage data lakes on Amazon S3 for analytics use cases. Monitoring & Optimization: Monitor and troubleshoot data pipelines to ensure high availability and performance. Implement best practices for cost optimization and performance tuning in Redshift, Glue, and EMR. Terraform & Git-based Workflows: Design and implement Terraform modules to provision cloud infrastructure across AWS/Azure/GCP. Manage and optimize CI/CD pipelines using Git-based workflows (e.g., GitHub Actions, GitLab CI, Jenkins, Azure DevOps). Collaborate with developers and cloud architects to automate infrastructure provisioning and deployments. Write reusable and scalable Terraform modules following best practices and code quality standards. Maintain version control, branching strategies, and code promotion processes in Git. Collaboration: Work closely with stakeholders to understand requirements and deliver solutions. Document data workflows, designs, and processes for future reference.

Must-Have Skills: Strong proficiency in Python and PySpark for data engineering tasks. Hands-on experience with AWS Glue, Redshift, S3, and EMR. Expertise in building, deploying, and optimizing data pipelines and workflows. Solid understanding of SQL and database optimization techniques. Strong hands-on experience with Terraform, including writing and managing modules, state files, and workspaces. Proficiency in CI/CD pipeline design and maintenance using tools like GitHub Actions, GitLab CI, Jenkins, or Azure DevOps Pipelines. Deep understanding of Git workflows (e.g., GitFlow, trunk-based development). Experience in serverless architecture using AWS Lambda for automation and orchestration. Knowledge of data modeling, partitioning, and schema design for data lakes and warehouses.
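As a concrete example of pairing Terraform with Glue-based ETL as this posting describes, here is a minimal sketch that provisions a Glue job and its execution role; all names, S3 paths, and worker sizing are illustrative assumptions:

```hcl
# Role that Glue assumes when running the job
resource "aws_iam_role" "glue" {
  name = "glue-etl-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "glue.amazonaws.com" }
    }]
  })
}

# PySpark ETL job; the script itself lives in S3
resource "aws_glue_job" "transform" {
  name     = "raw-to-curated" # illustrative name
  role_arn = aws_iam_role.glue.arn

  glue_version      = "4.0"
  worker_type       = "G.1X"
  number_of_workers = 5

  command {
    name            = "glueetl"
    script_location = "s3://example-etl-scripts/raw_to_curated.py"
    python_version  = "3"
  }

  default_arguments = {
    "--job-language" = "python"
    "--TempDir"      = "s3://example-etl-temp/"
  }
}
```

In practice the role would also need S3 and Glue catalog permissions attached; those policies are omitted here for brevity.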

Posted 4 days ago

Apply

0 years

0 Lacs

Saket, Delhi, India

On-site

Roles and Responsibilities: ● Design, develop, and maintain critical software in a fast-paced, quality-conscious environment ● Quickly understand complex systems/code and own key pieces of the system, including the delivered quality ● Diagnose and troubleshoot complex problems in a distributed computing environment ● Work alongside other engineers and cross-functional teams to diagnose/troubleshoot any production performance-related issues ● Work in Python and Shell, and build systems on Docker ● Define and set development, test, release, update, and support processes for DevOps operations ● Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management ● Strive for continuous improvement and build a continuous integration, continuous development, and continuous deployment pipeline (CI/CD pipeline) ● Manage periodic reporting on progress to management and the customer

Skills: ● Familiarity with scripting languages: Python, shell scripting ● Proper understanding of networking and security protocols (HTTPS, SSL, certs) ● Experience in building containers and container orchestration applications (K8s/ECS/Docker) ● Experience working on Linux-based infrastructure, Git, CI/CD tools, Jenkins, Terraform ● Configuration and management of databases such as MySQL, PostgreSQL, Mongo ● Working knowledge of various tools, open-source technologies, and cloud services (AWS preferably)
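Since the role pairs Terraform with CI/CD, here is a minimal sketch of a parameterized configuration, where variables keep environments reproducible and outputs feed later pipeline stages; the AMI ID and defaults are placeholders:

```hcl
variable "environment" {
  type    = string
  default = "staging"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = var.instance_type

  tags = {
    Name      = "app-${var.environment}"
    ManagedBy = "terraform"
  }
}

# A CI/CD stage can read this output to run smoke tests against the host
output "app_public_ip" {
  value = aws_instance.app.public_ip
}
```

A Jenkins pipeline would typically run terraform plan on pull requests and terraform apply on merges, passing -var="environment=prod" per stage.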

Posted 4 days ago

Apply

7.0 - 10.0 years

0 - 0 Lacs

Pune, Mumbai City

Remote

Position: AWS Data Engineer

Job Description: We are seeking a skilled Data Engineer with 7+ years of experience in data processing, ETL pipelines, and cloud-based data solutions. The ideal candidate will have strong expertise in AWS Glue, Redshift, S3, EMR, and Lambda, with hands-on experience using Python and PySpark for large-scale data transformations. The candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. Additionally, the candidate needs strong expertise in Terraform and Git-based CI/CD pipelines to support infrastructure automation and configuration management.

Key Responsibilities: ETL Development & Automation: Design and implement ETL pipelines using AWS Glue and PySpark to transform raw data into consumable formats. Automate data processing workflows using AWS Lambda and Step Functions. Data Integration & Storage: Integrate and ingest data from various sources into Amazon S3 and Redshift. Optimize Redshift for query performance and cost efficiency. Data Processing & Analytics: Use AWS EMR and PySpark for large-scale data processing and complex transformations. Build and manage data lakes on Amazon S3 for analytics use cases. Monitoring & Optimization: Monitor and troubleshoot data pipelines to ensure high availability and performance. Implement best practices for cost optimization and performance tuning in Redshift, Glue, and EMR. Terraform & Git-based Workflows: Design and implement Terraform modules to provision cloud infrastructure across AWS/Azure/GCP. Manage and optimize CI/CD pipelines using Git-based workflows (e.g., GitHub Actions, GitLab CI, Jenkins, Azure DevOps). Collaborate with developers and cloud architects to automate infrastructure provisioning and deployments. Write reusable and scalable Terraform modules following best practices and code quality standards. Maintain version control, branching strategies, and code promotion processes in Git. Collaboration: Work closely with stakeholders to understand requirements and deliver solutions. Document data workflows, designs, and processes for future reference.

Must-Have Skills: Strong proficiency in Python and PySpark for data engineering tasks. Hands-on experience with AWS Glue, Redshift, S3, and EMR. Expertise in building, deploying, and optimizing data pipelines and workflows. Solid understanding of SQL and database optimization techniques. Strong hands-on experience with Terraform, including writing and managing modules, state files, and workspaces. Proficiency in CI/CD pipeline design and maintenance using tools like GitHub Actions, GitLab CI, Jenkins, or Azure DevOps Pipelines. Deep understanding of Git workflows (e.g., GitFlow, trunk-based development). Experience in serverless architecture using AWS Lambda for automation and orchestration. Knowledge of data modeling, partitioning, and schema design for data lakes and warehouses.

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Overview: We seek a talented DevOps Specialist to join our dynamic team. The ideal candidate will have strong expertise in Azure DevOps, Kubernetes, Azure, and scripting languages like PowerShell, YAML, Python, and shell scripting. As a DevOps Specialist, you will play a crucial role in designing, implementing, and maintaining our CI/CD pipelines, infrastructure, and deployment strategies.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines utilizing Azure DevOps. Monitor and enhance application and infrastructure security within Azure environments. Enable automated testing using Azure DevOps and SonarQube for code quality management. Collaborate with development and operations teams to streamline and automate workflows. Troubleshoot and resolve issues in development, test, and production environments. Develop infrastructure as code (IaC) using Terraform for deployment and configuration management in Azure. Continuously evaluate and implement improvements to optimize performance, scalability, and efficiency.

Required Skills and Experience: Proven experience with Azure DevOps for CI/CD pipelines. Proficiency in scripting languages like PowerShell, YAML, Python, and shell scripting. Experience with containerization technologies (Docker, Kubernetes). Ability to troubleshoot and provide solutions around VMs, OS (Linux/Windows), and networking. Solid understanding of DevOps best practices and methodologies. Ability to troubleshoot complex issues and provide effective solutions. Excellent communication and collaboration skills, with the ability to work effectively in a team environment.

Preferred Skills: Certification in Azure (e.g., Azure Administrator Associate, Azure DevOps Engineer Expert). Experience integrating and configuring SonarQube for code quality assessment. Strong proficiency in Terraform for infrastructure provisioning and management in Azure. Knowledge of agile development methodologies.
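A common first Terraform task in Azure-centric pipelines like this one is provisioning blob storage for build artifacts or remote state. A minimal sketch follows, with illustrative names (the storage account name must be globally unique):

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "ci" {
  name     = "rg-devops-ci" # illustrative name
  location = "South India"
}

# Blob storage commonly used for artifacts or Terraform remote state
resource "azurerm_storage_account" "artifacts" {
  name                     = "devopsartifacts01" # must be globally unique, lowercase
  resource_group_name      = azurerm_resource_group.ci.name
  location                 = azurerm_resource_group.ci.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.artifacts.name
  container_access_type = "private"
}
```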

Posted 4 days ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Associate System Platform Engineer, Noida, Full-Time

About: Welcome to Maison Brevo! 🌱 Our office, located in Paris 17ème and organized around four key themes: Ocean, Earth, Sky, and Space, brings together our employees every day! Brevo’s goal and mission as a Customer Relationship Management (CRM) company is to enable millions of organisations, from startups to global enterprises all over the world, to connect with people using technology for their success. We are proud to share that we just obtained B Corp certification, which reflects our commitment to building a high-performing and responsible business!

Job Description: Brevo is the leading and fast-growing Customer Relationship Management (CRM) suite designed to enable millions of organizations to connect with people using technology for their success. Our platform gives businesses a unified view of the entire customer journey, empowering them to grow with intuitive marketing and sales tools, including Marketing Automation, Email, SMS, WhatsApp, Chat, and much more. As a proud B Corp certified company, we are committed not only to performance but also to purpose—meeting high standards of social and environmental impact. Today, more than 500,000 businesses across 180 countries, including Louis Vuitton, Carrefour, eBay, and Michelin, trust Brevo’s reliable technology and 75+ integrations to deliver unparalleled customer experiences, reduce costs, and drive sales. Brevo reached €179M ARR in 2024 (35% growth year on year) and has close to 1,000 employees globally.

We are looking for an Associate System Platform Engineer to join our dynamic team. The ideal candidate will have some basic Linux and network knowledge, and a lot of curiosity to learn how to manage and automate huge infrastructures (1500+ servers, own network backbone) in a distributed system environment.

As an Associate Platform Engineer, You Will: Participate in building and maintaining scalable and efficient system and network platforms using automation tools like Terraform and Ansible. Monitor and optimise performance to ensure high availability and reliability using Datadog. Manage and maintain system- and network-level deployment solutions. Collaborate with internal teams to understand hosting requirements and deliver solutions that meet business needs. Participate in automation for deployment pipelines to improve efficiency and reduce manual intervention. Help diagnose and resolve system and network issues, ensuring minimal disruption to services. Maintain comprehensive documentation of architecture, processes, and procedures.

What Will Contribute To Your Success: Minimum experience between 1-3 years. Basic knowledge of networks and systems (DNS/process/disk/network/memory). Basic knowledge of scripting languages such as Python, Bash, or similar for automation tasks. Experience in system and network management, including installation, configuration, performance tuning, and monitoring. Basic knowledge of Linux. Knowledge of an automation tool is a strong plus.

What We Offer: A unique opportunity to join an international and collaborative startup environment in a hyper-growth context. Hybrid working with 2 days work from home.
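Given the posting's emphasis on Terraform-managed monitoring with Datadog, here is a minimal sketch of a monitor defined as code; the query, threshold, and notification handle are illustrative assumptions:

```hcl
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }
}

variable "datadog_api_key" {
  type      = string
  sensitive = true
}

variable "datadog_app_key" {
  type      = string
  sensitive = true
}

provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

# Alert when average CPU on web hosts stays high for 5 minutes
resource "datadog_monitor" "cpu_high" {
  name    = "High CPU on web hosts"
  type    = "metric alert"
  message = "CPU above 90% for 5m. Notify: @slack-platform-oncall"
  query   = "avg(last_5m):avg:system.cpu.user{role:web} by {host} > 90"

  monitor_thresholds {
    critical = 90
  }
}
```

Managing monitors in Terraform keeps alert definitions reviewable in Git alongside the infrastructure they watch.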
The chance to grow your professional and technical skills, with real room for career progression A modern office in a central location with free fruits & drinks & a lot of fun activities Amazing referral program where employees can choose a gift item of 1.5 Lac including a bike, flight tickets, and many more. 1.4x times your day salary if you're working on any week off or holiday due to critical tasks/issues An umbrella of leaves and holidays Budget to support your workspace at home Medical Insurance of INR 10 Lacs is borne by the company An employee-friendly compensation structure that includes Tax saving optional components where the employee can save extra tax Bi-annual global company offsite; inter-office trips. Virtual Festival and birthday celebrations, Team parties, & team-building outings Meet us! Round 1 - Screening call with the Talent Acquisition team. Round 2 - Interview with our Deputy Platform Manager Round 3 - Interview with the Hiring Manager. Round 4 - Round with the VP of Platform. Final - Cultural Fitment Round with the Talent Acquisition team. Brevo puts diversity and inclusion at the heart of its values. We examine all applications with treatment based on equal skills and applying the principles of non-discrimination. Additional Information Contract Type: Full-Time Location: Noida Possible partial remote Apply Now See Other Brevo Job Listings
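As a flavour of the basic system and network scripting this role grows into, here is a small illustrative Python sketch using only the standard library; the hostname and the 90% threshold are hypothetical, and a production version would ship results to a monitoring tool such as Datadog rather than print them.

```python
import shutil
import socket

def dns_resolves(hostname: str) -> bool:
    """Return True if the hostname resolves, False on a DNS failure."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def disk_used_percent(path: str = "/") -> float:
    """Return the percentage of disk space used at the given mount point."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    host = "example.com"  # hypothetical host to watch
    print(f"DNS OK for {host}: {dns_resolves(host)}")
    used = disk_used_percent("/")
    print(f"Root disk usage: {used:.1f}%")
    if used > 90:  # hypothetical alerting threshold
        print("WARNING: root filesystem is nearly full")
```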

Posted 4 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role We're looking for a Senior Engineering Manager to lead our Data / AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions, such as legal, CX, and product, in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.
What You Will Do
Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products enabling product experience through data)
Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution
Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency
Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems
Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications
Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
Establish and enforce best practices around data governance, access controls, and data quality
Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
Coach engineers and team leads through regular 1:1s, feedback, and performance conversations
What You Will Need
10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities
Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform
Hands-on experience building AI/ML platforms, including MLOps tools, plus experience with LLM hosting, model serving, and secure AI application development
Proven experience improving performance, cost, and observability in large-scale data systems
Expert-level cloud platform knowledge, with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code
Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns
Comfort working in fast-paced, product-led environments, with the ability to balance innovation and regulatory constraints
Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations
Life at slice
Life so good, you'd think we're kidding:
Competitive salaries. Period.
Extensive medical insurance that looks out for our employees and their dependents. We'll love you and take care of you, our promise.
Flexible working hours. Just don't call us at 3 AM, we like our sleep schedule.
Tailored vacation and leave policies so that you enjoy every important moment in your life.
A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
Learning and upskilling opportunities. Seriously, not kidding.
Good food, games, and a cool office to make you feel like home.
An environment so good, you'll forget the term "colleagues can't be your friends".
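To make the real-time streaming portion of this stack concrete, here is a minimal illustrative Python consumer built on the kafka-python library; the topic name, broker address, and the fraud-flagging rule are all hypothetical placeholders, not slice's actual pipeline.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker; a production setup would add TLS/auth.
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,  # commit manually after processing
)

for message in consumer:
    event = message.value
    # Placeholder rule standing in for a real fraud-scoring model.
    if event.get("amount", 0) > 100_000:
        print(f"Flagging high-value event for review: {event}")
    consumer.commit()
```

Disabling auto-commit and committing only after processing gives at-least-once delivery, which is usually the right default for fraud and risk pipelines.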

Posted 4 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dear Candidates, Greetings from TCS! TCS is looking for a GCP Network Engineer.
Experience: 8+ years, with exposure to GCP Networking and IaC (Terraform)
Location: Chennai / Hyderabad / Bangalore / Pune / Gurgaon
Requirements:
GCP 1 - Product & Environment Provisioning: GCP, Jenkins, Groovy, GitOps, SMEE, GKE, container lifecycle, Terraform & Terraform Cloud, Python, scripting, ITIL, incident management, problem management
GCP 1 & 2 - Networking: GCP, IPAM, DNS, Router, Interconnect, VPN, Terraform & Terraform Cloud, Python, scripting, ITIL, incident management, problem management
Good to have skills: Network certification
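As an illustration of the Python scripting such a networking role pairs with Terraform, here is a small sketch that shells out to the gcloud CLI to inventory VPC networks; it assumes gcloud is installed and authenticated, and the project ID is hypothetical.

```python
import json
import subprocess

def list_networks(project: str) -> list:
    """Return VPC networks in a project by shelling out to the gcloud CLI
    (must be installed and authenticated) and parsing its JSON output."""
    cmd = [
        "gcloud", "compute", "networks", "list",
        f"--project={project}", "--format=json",
    ]
    return json.loads(subprocess.check_output(cmd, text=True))

if __name__ == "__main__":
    project_id = "my-gcp-project"  # hypothetical project ID
    for net in list_networks(project_id):
        print(net["name"], "auto-subnets:", net.get("autoCreateSubnetworks"))
```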

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Monitoring & Observability Engineer – Datadog Specialist
Experience: 4+ Years
Location: [Specify Location or Remote]
Job Type: Full-Time
Job Summary: We are looking for a talented Observability Engineer with hands-on experience in Datadog to enhance our infrastructure and application monitoring capabilities. The ideal candidate will have a strong understanding of performance monitoring, alerting, and observability in cloud-native environments.
Key Responsibilities:
Design, implement, and maintain observability solutions using Datadog for applications, infrastructure, and cloud services.
Set up dashboards, monitors, and alerts to proactively detect and resolve system issues.
Collaborate with DevOps, SRE, and application teams to define SLOs, SLIs, and KPIs for performance monitoring.
Integrate Datadog with services such as AWS, Kubernetes, CI/CD pipelines, and logging tools.
Conduct performance tuning and root cause analysis of production incidents.
Automate observability processes using infrastructure-as-code and scripting (e.g., Terraform, Python).
Stay up to date with the latest features and best practices in Datadog and the observability space.
Must-Have Skills:
4+ years of experience in monitoring/observability, with 2+ years of hands-on experience in Datadog
Strong experience with Datadog APM, infrastructure monitoring, custom metrics, and dashboards
Familiarity with cloud platforms like AWS, GCP, or Azure
Experience monitoring Kubernetes, containers, and microservices
Good knowledge of log management, tracing, and alert tuning
Proficiency with scripting (Python, Shell) and IaC tools (Terraform preferred)
Solid understanding of DevOps/SRE practices and incident management
Nice-to-Have Skills:
Datadog certifications (e.g., Datadog Certified Observability Engineer)
Experience integrating Datadog with CI/CD tools, ticketing systems, and chatops
Familiarity with other monitoring tools (e.g., Prometheus, Grafana, New Relic, Splunk)
Knowledge of performance testing tools (e.g., JMeter, k6)
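For context, here is a minimal illustrative sketch of automating a Datadog monitor with the datadog Python client (the legacy datadogpy library); the API/app keys, thresholds, and Slack handle are placeholders. Terraform's Datadog provider is a common alternative for managing the same resources as code.

```python
from datadog import initialize, api  # pip install datadog

# Hypothetical keys; real ones belong in a secrets manager, not source code.
initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

# Metric alert: average CPU above 90% for 5 minutes on prod hosts.
monitor = api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:system.cpu.user{env:prod} by {host} > 90",
    name="High CPU on prod hosts",
    message="CPU above 90% for 5 minutes. @slack-ops-alerts",
    tags=["team:platform", "managed-by:script"],
    options={"thresholds": {"critical": 90, "warning": 80}},
)
print("Created monitor:", monitor["id"])
```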

Posted 4 days ago

Apply

0 years

0 Lacs

India

Remote

We're Hiring: AWS DevOps Engineer Intern Location:Remote Duration:6 months Salary : unpaid We're looking for a motivated DevOps Intern to join our cloud infrastructure team. You'll gain hands-on experience with AWS services , CI/CD pipelines , Docker , Terraform , and more—supporting real-world deployments and automation tasks. What You’ll Work On: Deploying & managing AWS infrastructure (EC2, S3, IAM, etc.) Building CI/CD pipelines (GitHub Actions, Jenkins, CodePipeline) Writing automation scripts & Infrastructure as Code (Terraform/CloudFormation) Containerization with Docker/Kubernetes Monitoring with CloudWatch, Prometheus, etc. Understanding of monitoring/logging tools (e.g., ELK, Datadog, CloudWatch) What We’re Looking For: Familiarity with AWS basics & Linux Understanding of Git and DevOps concepts Eagerness to learn cloud tools & best practices A great opportunity to learn, build, and grow with our experienced DevOps team. Interested? Apply now or reach out at career@priyaqubit.com

Posted 4 days ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Delhi, Delhi

On-site

We're Hiring: IT Recruiter (2 to 5 Years Experience)
Are you a tech-savvy recruiter with a passion for finding the right talent in a fast-paced IT world? We're looking for someone just like you!
What You'll Do:
* Partner with hiring managers to understand job requirements and team dynamics
* Source & screen candidates via LinkedIn, portals, referrals, and internal databases
* Conduct initial technical assessments for role suitability
* Build strong pipelines across key tech domains:
Programming: Java, Python, .NET, JavaScript, Node.js, React, Angular
Cloud: AWS, Azure, GCP
DevOps: Jenkins, Docker, Kubernetes, Terraform, Ansible
Data: SQL, NoSQL, Hadoop, Spark, Power BI, Tableau
ERP/CRM: SAP, Salesforce
Testing: Manual, Automation, Selenium, API
Others: Finacle, Murex, Oracle, Unix, PLSQL
* Coordinate interviews & ensure a smooth candidate experience
* Maintain ATS records accurately
* Share market insights with hiring managers
* Constantly refine sourcing strategies based on trends and data
What We're Looking For:
* Bachelor's degree (technical background a plus)
* 2 to 5 years of IT recruitment experience (corporate/agency)
* Strong knowledge of tech stacks & IT hiring practices
* Excellent communication & stakeholder management
* A sharp eye for both technical and cultural fit
* Proficiency in ATS, job portals, and LinkedIn Recruiter
Apply Now: hr@virtueevarsity.com / 9958100227
Let's connect and build something impactful together!
Job Type: Permanent
Pay: ₹30,000.00 - ₹40,000.00 per month
Benefits: Health insurance, Provident Fund
Application Question(s): Work Location - Bangalore, Bhopal and Delhi
Experience: IT Recruiter: 2 years (Required)
Work Location: In person

Posted 4 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description We are seeking a highly motivated and customer-focused Associate Technology L2 to join our Sustain team. This role involves providing first-line technical support to end users, logging incidents, and ensuring timely resolution or escalation of issues. The candidate should demonstrate patience, empathy, and a passion for delivering exceptional customer service.
Qualifications Your Skills & Experience:
Experience implementing cloud observability on Azure, AWS, or GCP
Experience with scripting languages like Java, Python, Bash, and PowerShell
Experience with REST, SOAP, JSON, and XML is helpful
Experience with Terraform is preferred
Experience with container technologies such as Kubernetes and Docker, and knowledge of public clouds (Azure, AWS, GCP, etc.)
Knowledge of configuring public cloud platforms using code, such as Terraform
Knowledge of IT protocols such as HTTP, ICMP, SNMP, WMI, syslog-ng, SSH, etc.
Understanding of databases and database performance measurements (e.g., MSSQL, MySQL)
Knowledge of the software development life cycle and Agile methodology (e.g., use of tools like Jira)
Experience with enterprise tools like ServiceNow, Jira, etc.
ITSM process experience (e.g., Change Management, Incident Management)
Flexibility to work in shifts
Familiarity with Linux and experience working in a shell environment
Additional Information
Gender-Neutral Policy
18 paid holidays throughout the year
Generous parental leave and new parent transition program
Flexible work arrangements
Employee Assistance Programs to support your wellness and well-being
Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
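To illustrate the flavour of L2 support scripting such a role involves, here is a small hypothetical Python health-check sketch using the requests library; the endpoints are placeholders, and a production version would raise incidents in an ITSM tool like ServiceNow instead of printing.

```python
import requests  # pip install requests

# Hypothetical endpoints; a real Sustain setup would raise ServiceNow
# incidents on failure rather than printing to stdout.
ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/api/status",
]

def check(url: str, timeout: float = 5.0) -> None:
    """GET the URL and report OK, the HTTP status, or the failure type."""
    try:
        resp = requests.get(url, timeout=timeout)
        status = "OK" if resp.status_code == 200 else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"DOWN ({exc.__class__.__name__})"
    print(f"{url}: {status}")

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        check(endpoint)
```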

Posted 4 days ago

Apply

5.0 - 7.0 years

25 - 28 Lacs

Pune, Maharashtra, India

On-site

Job Description We are looking for a Big Data Engineer who will build and manage Big Data pipelines for us, handling the huge structured data sets that we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Core Responsibilities:
Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources.
Ensure data quality, reliability, and availability across the pipeline lifecycle.
Collaborate with product managers, architects, and engineering leads to define technical strategy.
Participate in code reviews, testing, and deployment processes to maintain high standards.
Own smaller components of the data platform or pipelines and take end-to-end responsibility.
Continuously identify and resolve performance bottlenecks in data pipelines.
Take initiative, proactively pick up new technologies, and work as a senior individual contributor on the multiple products and features we have.
Required Qualifications:
5 to 7 years of experience in Big Data or data engineering roles.
JVM-based languages like Java or Scala are preferred; for candidates with solid Big Data experience, Python is also acceptable.
Proven, demonstrated experience with distributed Big Data tools and processing frameworks: Apache Spark or equivalent (processing), Kafka or Flink (streaming), and Airflow or equivalent (orchestration).
Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR.
Ability to write clean, efficient, and maintainable code.
Good understanding of data structures, algorithms, and object-oriented programming.
Tooling & Ecosystem:
Use of version control (e.g., Git) and CI/CD tools.
Experience with data orchestration tools (Airflow, Dagster, etc.).
Understanding of file formats like Parquet, Avro, ORC, and JSON.
Basic exposure to containerization (Docker) or infrastructure-as-code (Terraform is a plus).
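As an illustration of the batch-processing work described here, below is a minimal PySpark sketch of a daily aggregation job; the S3 paths and column names (event_date, customer_id, amount) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregates").getOrCreate()

# Hypothetical lake layout with columns event_date, customer_id, amount.
events = spark.read.parquet("s3a://data-lake/events/")

daily = (
    events
    .filter(F.col("amount") > 0)  # drop refunds/zero rows before grouping
    .groupBy("event_date", "customer_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("event_count"),
    )
)

# Partition by date so downstream queries can prune files efficiently.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://data-lake/aggregates/daily/"
)
spark.stop()
```

Partitioning the output by event_date lets downstream engines prune files by date, one of the simple optimizations that keeps such pipelines fast at scale.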

Posted 4 days ago

Apply