Home
Jobs

7980 Terraform Jobs - Page 27

JobPe aggregates listings for easy access to applications; you apply directly on the original job portal.

2.0 years

0 Lacs

India

Remote

Source: LinkedIn

Company: Volga Partners
Location: Fully remote (occasional office visits 1–2 times/year)
Compensation: ₹17–19 LPA (based on experience)
Job Type: Full-time
Start Date: Urgently hiring; must be available to start within 3–4 weeks
Work Hours: 8 AM–5 PM IST or 9 AM–6 PM IST (occasional collaboration with US Pacific Time teams)

About the Role
Volga Partners is looking for a skilled Software Development Engineer with strong expertise in C#/C++, Python, and debugging across both web and native technology stacks. You'll join a high-performing remote team, solving complex technical problems, optimizing performance, and delivering robust, scalable software solutions.

Key Responsibilities
- Maintain and enhance a large, multi-language codebase (C#/C++, HTML/JavaScript) across multiple branches.
- Debug complex issues spanning native and web stacks and provide efficient, long-term solutions.
- Collaborate with cross-functional teams to resolve merge conflicts and ensure code quality.
- Monitor application performance using telemetry tools and proactively address bottlenecks.
- Automate recurring tasks related to code management, maintenance, and reporting (a short Python sketch follows this listing).

What You Bring
Education: Bachelor's degree in computer science or a related field.
Experience: 2+ years of hands-on professional experience; proficient in C#/C++, Python, and full-stack debugging; experienced with Git-based workflows and working within large codebases.

Technical Skillset
- Languages: C#, C/C++, Python, HTML, CSS, JavaScript, TypeScript, SQL
- Frameworks & Libraries: .NET Web API, ASP.NET (MVC & WebForms), Entity Framework, LINQ, Dapper, React, jQuery, Bootstrap, Ajax, xUnit, Moq, FluentValidation
- Tools & Platforms: Azure DevOps, Git, Docker, Terraform, SQL Server, IIS, Elastic Stack, Postman, Swagger, Visual Studio, VS Code, Figma, AWS
- Architecture: Microservices, Modular Monolith
- Operating Systems: Windows, Linux
- Build Tools: Make, Ninja (preferred)

Why Join Us
💻 Remote-first culture with rare in-person meetings
🌍 Work with global teams and modern tech
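The responsibilities above include automating recurring code-management and reporting tasks. Purely as a hedged illustration, not Volga Partners' actual tooling, here is a minimal Python sketch of one such task: listing remote branches by last-commit date as cleanup candidates.

    import subprocess

    # List remote branches sorted by last commit date (oldest first).
    # Stale branches are common cleanup candidates in large multi-branch repos.
    out = subprocess.run(
        ["git", "for-each-ref", "--sort=committerdate",
         "--format=%(committerdate:short) %(refname:short)", "refs/remotes"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        print(line)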

Posted 2 days ago

Apply

5.0 years

0 Lacs

Anupgarh, Rajasthan, India

On-site

Source: LinkedIn

34129BR | Bangalore - Global Axis

Job Description: Senior Cloud & AI Engineer

Role Overview
We are seeking a highly skilled Senior Cloud & AI Engineer to design, build, and optimize cloud-based AI/ML platforms. You will drive the platformization of our cloud infrastructure, leveraging Azure, OpenAI, and MLOps to deploy scalable AI solutions. This role is critical for advancing our AI capabilities and enabling seamless integration of tools like Copilot to enhance productivity and innovation. Exposure to AI Foundry is preferred.

Key Responsibilities
- Cloud Platform Development: Architect, implement, and manage Azure-based cloud platforms to support AI/ML workloads at scale.
- AI/ML Solutions: Design and deploy generative AI models (OpenAI) and end-to-end ML pipelines, ensuring robust MLOps practices for monitoring, testing, and CI/CD.
- Copilot Integration: Develop and optimize Copilot-based tools to automate code generation, data analysis, and workflow efficiency.
- Automation & Optimization: Script infrastructure as code (IaC) using Python, automate cloud resource management, and enhance platform reliability.
- Cross-functional Collaboration: Partner with data scientists and developers to operationalize AI models and ensure alignment with business goals.
- Best Practices: Establish governance, security, and cost-optimization strategies for cloud and AI resources.

Required Skills & Qualifications
- Cloud Expertise: 5+ years in cloud engineering with Azure (IaaS/PaaS, AKS, Databricks, Synapse) and a focus on cloud platformization.
- AI/ML Proficiency: Hands-on experience with OpenAI, generative AI, and AI/ML Ops (e.g., MLflow, Azure ML; a sketch follows this listing).
- Programming: Advanced Python skills for automation, API development, and scripting.
- Copilot Tools: Proven ability to implement Copilot (GitHub/GitLab Copilot, Azure Copilot) for code/process automation.
- DevOps: Experience with CI/CD (Azure DevOps), containerization (Docker/Kubernetes), and IaC (Terraform/Bicep).
- Analytical Skills: Ability to troubleshoot complex systems and optimize performance.

Preferred Qualifications
- Azure certifications (e.g., Azure Solutions Architect, Azure AI Engineer).
- Familiarity with big data tools (Spark, Delta Lake) and LLM fine-tuning.
- Knowledge of other cloud platforms (AWS/GCP) or AI frameworks (LangChain, Hugging Face).

Education: Bachelor's/Master's in Computer Science, Engineering, or a related field.
Qualifications: B.Tech
Range of Year Experience: Min 7, Max 15
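The MLOps requirement above names MLflow as an example tool. Purely as a hedged sketch of the experiment-tracking pattern that implies (the tracking URI, experiment name, and logged values are placeholders, not this employer's setup):

    import mlflow

    # Point the client at a tracking server (placeholder URI).
    mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
    mlflow.set_experiment("demand-forecast")

    with mlflow.start_run():
        # Log hyperparameters and metrics so runs can be compared later.
        mlflow.log_param("learning_rate", 0.05)
        mlflow.log_metric("val_rmse", 12.3)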

Posted 2 days ago

Apply

0 years

0 Lacs

Ganganagar, Rajasthan, India

On-site

Source: LinkedIn

32780BR | Noida

Job Description: DevOps

Architectural Design
- Design and architect scalable, secure, and resilient cloud infrastructure solutions on AWS.
- Develop architecture blueprints and detailed documentation.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.

DevOps Implementation
- Implement and manage CI/CD pipelines to automate deployment and integration processes.
- Ensure smooth integration of development, operations, and testing functions.
- Implement infrastructure as code (IaC) using tools such as Terraform, CloudFormation, or AWS CDK (a CDK sketch follows this listing).

Cloud Management
- Monitor and optimize the performance, scalability, and cost-efficiency of AWS environments.
- Implement security best practices and ensure compliance with security standards and policies.
- Manage backup, recovery, and disaster recovery processes.

Collaboration and Leadership
- Work closely with development teams to ensure that solutions are designed with scalability, security, and performance in mind.
- Mentor and guide junior engineers, providing technical leadership and guidance.
- Stay updated with the latest industry trends and AWS services, and advocate for best practices within the organization.

Problem Solving
- Troubleshoot and resolve issues related to cloud infrastructure and services.
- Perform root cause analysis for incidents and implement preventive measures.

Preferred Skills
- Experience with serverless architecture and microservices.
- Familiarity with other cloud platforms such as Azure or Google Cloud.
- Knowledge of monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or AWS CloudWatch.
- Understanding of agile methodologies and experience working in agile environments.

Qualifications: BE
Range of Year Experience: Min 4, Max 9
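The IaC bullet above lists AWS CDK alongside Terraform and CloudFormation. As a hedged, minimal sketch of what infrastructure as code looks like in the CDK's Python flavor (the stack and bucket names are invented for illustration):

    from aws_cdk import App, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class LogBucketStack(Stack):
        def __init__(self, scope: Construct, id: str, **kwargs) -> None:
            super().__init__(scope, id, **kwargs)
            # One small example resource: a versioned, encrypted bucket.
            s3.Bucket(self, "LogsBucket",
                      versioned=True,
                      encryption=s3.BucketEncryption.S3_MANAGED)

    app = App()
    LogBucketStack(app, "logs-dev")
    app.synth()

Running cdk deploy against an app like this synthesizes and applies the equivalent CloudFormation template.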

Posted 2 days ago

Apply

0 years

0 Lacs

Anupgarh, Rajasthan, India

On-site

Source: LinkedIn

34130BR | Bangalore - Global Axis

Job Title: Azure Databricks Administrator

Job Summary
We are seeking a skilled and proactive Azure Databricks Administrator to manage, monitor, and support our Databricks environment on Microsoft Azure. The ideal candidate will be responsible for system integrations, access control, user support, and CI/CD pipeline administration, ensuring a secure, efficient, and scalable data platform.

Key Responsibilities
- System Integration & Monitoring: Build, monitor, and support integrations between Databricks and enterprise systems such as LogRhythm, ServiceNow, and AppDynamics. Ensure seamless data flow and alerting mechanisms across integrated platforms.
- Security & Access Management: Administer user and group access to the Databricks environment. Implement and enforce security policies and role-based access controls (RBAC).
- User Support & Enablement: Provide initial system support and act as a point of contact (POC) for Databricks users. Assist users with onboarding, workspace setup, and troubleshooting.
- Vendor Coordination: Engage with Databricks vendor support for issue resolution and platform optimization.
- Platform Monitoring & Maintenance: Monitor Databricks usage, performance, and cost. Ensure the platform is up to date with the latest patches and features.
- Database & CI/CD Administration: Manage Databricks database configurations and performance tuning. Administer and maintain CI/CD pipelines for Databricks notebooks and jobs.

Required Skills & Qualifications
- Proven experience administering Azure Databricks in a production environment.
- Strong understanding of Azure services, data engineering workflows, and DevOps practices.
- Experience with integration tools and platforms like LogRhythm, ServiceNow, and AppDynamics.
- Proficiency in CI/CD tools (e.g., Azure DevOps, GitHub Actions).
- Familiarity with Databricks REST APIs, Terraform, or ARM templates is a plus (a REST sketch follows this listing).
- Excellent problem-solving, communication, and documentation skills.

Preferred Certifications
- Microsoft Certified: Azure Administrator Associate
- Databricks Certification
- Azure Data Engineer Associate

Qualifications: B.Tech
Range of Year Experience: Min 7, Max 14
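The requirements above mention the Databricks REST APIs. A hedged sketch of a typical administrative call, listing workspace clusters and their states via the documented GET /api/2.0/clusters/list endpoint (the host and token are read from the environment; error and field handling is simplified):

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]    # the workspace URL
    token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

    resp = requests.get(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for cluster in resp.json().get("clusters", []):
        print(cluster["cluster_id"], cluster["state"])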

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Source: LinkedIn

This is a remote position.

Job Title: Senior Java Developer
Location: Bangalore (Remote / Hybrid)
Vertical: Cloud Engineering
Experience Level: 5+ Years
Employment Type: Full time

About the Role
We are looking for a highly skilled Senior Java Developer to join our dynamic and growing technology team. In this role, you will play a key part in designing, developing, and deploying scalable, high-performance applications using modern technologies. You will work on challenging projects involving headless architecture, containerization, cloud platforms, and CI/CD automation, while collaborating closely with cross-functional teams.

Key Responsibilities
- Design, develop, and maintain scalable backend services using Java 17, the Spring Framework, and MySQL.
- Build and manage front-end components using React.js within a headless architecture setup.
- Implement and manage cloud-based solutions using AWS, Azure, or GCP.
- Ensure efficient build and deployment pipelines using CI/CD tools, Maven, and Git.
- Containerize and deploy applications using Docker and manage deployment environments effectively.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Drive technical discussions, mentor junior developers, and contribute to architectural decisions.

Requirements
- 5+ years of experience in Java (preferably Java 17) and the Spring ecosystem.
- Strong experience with MySQL or other relational databases.
- Proficiency in React.js and understanding of headless architecture.
- Hands-on experience with CI/CD pipelines, Maven, Git, and containerization (Docker).
- Proven experience with at least one major cloud platform (AWS/Azure/GCP).
- Solid understanding of RESTful APIs and microservices architecture.

Nice to Have
- Elastic Path certification or relevant training.
- Experience integrating with payment gateways, PIM systems, and OMS platforms.
- Familiarity with DevOps tools such as Jenkins, Kubernetes, or Terraform.
- Experience working in Agile methodologies such as Scrum or Kanban.

Soft Skills
- Strong verbal and written communication skills.
- Collaborative mindset with the ability to lead and influence technical discussions.
- Analytical and problem-solving approach with a focus on delivering business value.
- Excellent organizational skills and attention to detail.

Benefits
- Competitive salary and benefits package.
- Opportunities for professional growth and certification.
- Collaborative, inclusive, and fast-paced work environment.
- Flexible working hours and remote work options.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Levo.ai
Levo.ai is an advanced and automated API security platform designed to proactively discover, document, monitor, and mitigate security risks. By leveraging an in-depth automated offensive security testing approach, Levo ensures that vulnerabilities are identified and addressed before they can be exploited. Our platform is fully aligned with essential security practices endorsed by OWASP, as well as compliance frameworks such as PCI, GDPR, and HIPAA.

Traditional security methods often rely on manual processes, which are not only costly and error-prone but also leave organizations exposed to significant security risks. Levo revolutionizes this process by providing comprehensive visibility and testing coverage, reducing both cost and complexity while eliminating false positives. Our seamless integration between development and security teams ensures that security does not become a bottleneck but rather an enabler of innovation.

Founded in 2021 by Buchi Reddy, a former engineering leader at AppDynamics, Cisco, and D.E. Shaw, Levo is backed by a team with extensive enterprise security and engineering expertise. Our mission is to ensure that business growth, deployment deadlines, security, and compliance are not conflicting priorities but rather integrated seamlessly into the software development lifecycle.

Job Overview
Levo.ai is seeking an experienced and highly motivated DevOps Engineer to join our team in Hyderabad. This role is critical in managing cloud infrastructure, optimizing deployment processes, ensuring system reliability, and enhancing operational efficiency. The ideal candidate will have extensive hands-on experience in operating platforms at scale, platform engineering, cloud computing, automation, CI/CD pipelines, container orchestration, and monitoring tools.

Key Responsibilities
- Cloud Infrastructure Management: Design, deploy, and manage highly available and secure cloud environments using Kubernetes in AWS, Azure, or Google Cloud.
- CI/CD Pipeline Development: Build and maintain automated deployment pipelines using GitHub, GitLab CI/CD, or similar tools to streamline software releases.
- Infrastructure as Code (IaC): Automate infrastructure provisioning and management using Terraform, CloudFormation, or Ansible.
- Containerization & Orchestration: Deploy, scale, and manage containerized applications using Docker and Kubernetes.
- Monitoring & Logging: Implement robust monitoring and logging solutions with Prometheus, Grafana, the ELK stack, or Datadog to ensure system performance and security (a sketch follows this listing).
- Security & Compliance: Enforce cloud security best practices, including Identity and Access Management (IAM), Role-Based Access Control (RBAC), and compliance with regulatory frameworks.
- Automation & Scripting: Develop scripts in Python, Bash, or PowerShell to automate operational tasks and improve system efficiency.
- Troubleshooting & Incident Management: Identify, diagnose, and resolve infrastructure issues to ensure high availability and system reliability.
- Collaboration & Optimization: Work closely with software developers, AI/ML teams, and security engineers to optimize workflows and improve overall efficiency.

Requirements & Qualifications
- 2+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
- 2 years of experience deploying and operating distributed systems in production at scale.
- Strong hands-on experience with AWS, Azure, or Google Cloud services.
- Proficiency in CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or CircleCI.
- Deep understanding of Infrastructure as Code (IaC) with Terraform, CloudFormation, or Ansible.
- Expertise in containerization (Docker) and orchestration (Kubernetes).
- Strong experience with monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or Datadog.
- Solid understanding of cloud security principles, including IAM, RBAC, and compliance frameworks.
- Proficiency in scripting languages such as Python, Bash, or PowerShell for automation.
- Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
- Excellent communication and collaboration skills to work effectively across teams.

Preferred Qualifications
- Certifications: AWS Certified DevOps Engineer, Certified Kubernetes Administrator (CKA), or equivalent.
- Experience in AI/ML environments with MLOps best practices is a huge plus.
- Exposure to serverless computing and event-driven architectures.
- Prior experience working in early-stage startups or high-growth technology companies.

What We Offer
- Competitive compensation and benefits package.
- Career growth opportunities in a rapidly expanding AI-driven company.
- A collaborative and innovative work culture that values autonomy and impact.
- The opportunity to work with cutting-edge technologies and solve real-world security challenges.

If you are passionate about DevOps, cloud automation, and building secure, scalable infrastructure, we invite you to join us at Levo.ai and be part of a team that is redefining API security.
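The monitoring bullet above names Prometheus. As a hedged illustration of how a Python service commonly exposes metrics for Prometheus to scrape (the metric and its values are invented, not Levo's):

    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Expose a toy gauge at http://localhost:8000/metrics for scraping.
    queue_depth = Gauge("worker_queue_depth", "Jobs waiting in the queue")

    start_http_server(8000)
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(15)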

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance, providing clients with reliability and a strong competitive edge.

Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business; it means having a dedicated ally focused on your success in today's fast-evolving digital world.

Job Title: AWS Cloud Support Engineer – Mid Level
Location: Hyderabad/Bangalore
Position: Full Time
Required Experience: 4+ Years
Timings: 24/7 Rotational Shifts

Key Responsibilities
- Perform advanced troubleshooting for AWS services and cloud-based applications.
- Lead incident resolution and service request fulfilment, ensuring minimal downtime.
- Improve and maintain SOPs for AWS cloud operations.
- Monitor and manage AWS services such as EC2, EKS, S3, RDS, Elastic Load Balancing, Auto Scaling, and AWS Lambda (a sketch follows this listing).
- Manage and optimize AWS monitoring tools (CloudWatch, Splunk, Datadog).
- Implement AWS networking solutions, including VPCs, security groups, and ACLs.
- Support infrastructure automation and deployment using Terraform, Ansible, or Kubernetes.
- Ensure AWS security best practices and compliance.
- Provide technical support and mentorship to junior team members.

Required Skills & Qualifications
- Strong expertise in AWS cloud services, including compute, storage, and networking.
- Experience managing AWS Identity and Access Management (IAM) roles and policies.
- Familiarity with ITSM tools, preferably ServiceNow, for incident tracking and reporting.
- Hands-on experience with AWS networking, security, and automation.
- Experience with at least three of the following: Kubernetes, Harness, Kafka, Jenkins, GitHub, Docker, Ansible, Terraform.
- Strong foundational skills in cybersecurity, CI/CD methodologies, and AWS cloud operations.
- Excellent problem-solving and communication skills.
- Ability to work effectively in a remote, multi-cultural team environment.
- Fluent in English, both written and spoken.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications
- Certifications such as AWS Certified Solutions Architect, AWS SysOps Administrator, AWS Developer, Kubernetes, or ITIL are a plus.
- Proficiency in automation tools such as AWS CloudFormation, Ansible, Terraform, and Kubernetes, and scripting languages like Python.
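Monitoring EC2 is listed among the day-to-day duties above. A hedged boto3 sketch of one routine support check, surfacing instances with failing status checks (the region is an assumption, and a real script would paginate):

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is an assumption

    # Include stopped instances too, so nothing is silently skipped.
    statuses = ec2.describe_instance_status(IncludeAllInstances=True)
    for s in statuses["InstanceStatuses"]:
        if s["InstanceStatus"]["Status"] != "ok":
            print(s["InstanceId"], s["InstanceStatus"]["Status"])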

Posted 2 days ago

Apply

5.0 - 8.0 years

11 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

AWS DevOps Senior Engineer

Who We Are (www.credera.com)
Credera, trading as TA Digital, is a global consulting firm that combines transformational consulting capabilities, deep industry knowledge, and AI and technology expertise to deliver valuable customer experiences and accelerated growth across a broad range of industries worldwide. Our one-of-a-kind global boutique approach means we provide our clients with tailored solutions unique to their organization that can scale due to our extensive footprint. As a values-led organization, our mission is to make an extraordinary impact on our clients, our people, and our community. We believe it is this approach that has allowed us to work with and transform the most influential brands and organizations in the world, from strategy through to execution. More information is available at www.credera.com. We are part of the OPMG Group of Companies, a division of Omnicom Group Inc.

Location: Hyderabad / Bangalore / Chennai / Gurugram
Work Mode: Hybrid (3 days per week from office)

About the team you will join:
We are an AWS Advanced Consulting Partner and Microsoft Azure Gold Partner. Our team has extensive expertise in architecting go-to-market solutions and helps clients leverage the cloud to the best of its capabilities. Our Cloud Services Strategy and Consulting services help clients understand how cloud automation can achieve their business goals. Our DevOps & Cloud Solutions team helps clients build highly reliable, scalable, and secure infrastructure with DevOps best practices, enhancing cloud capabilities by improving performance, security, dependability, and management.

Job Summary:
As a DevOps Senior Engineer, you will design, deploy, and manage advanced cloud-based solutions on AWS. This role requires a strong background in AWS cloud management, DevSecOps, and infrastructure as code (IaC). The ideal candidate will have at least 5 years of relevant experience and will be proficient in implementing and managing AWS networking components, security practices, and CI/CD pipelines. Excellent communication skills and the ability to build effective working relationships are essential, along with the ability to lead a team of engineers.

Key Responsibilities:
- Provide technical tools, test environments, processes, and development support to the client team.
- Deploy, automate, maintain, and manage AWS cloud-based production systems to ensure their availability, performance, scalability, and security.
- Build and operate client production environments.
- Participate in our evolution toward infrastructure as code.
- Track usage and capacity-plan for hardware and software on an annual basis (a short Python sketch follows this listing).
- Mentor junior team members in the construction of delivery systems using configuration management and horizontally scalable architectures.
- Participate in on-call escalation for client production systems.
- Participate in on-call rotation for DevTools systems.

Required Skillset:
- 4+ years of experience as a DevOps engineer.
- Strong knowledge of fundamental DevOps tools such as Terraform, Jenkins, GoCD, GitHub, Nagios, Puppet, Chef, etc.
- Advanced knowledge and experience with Kubernetes, Amazon managed services, and serverless frameworks.
- Advanced knowledge and experience with GitHub Actions.
- Experience managing Linux-based operating systems.
- Fantastic communication and documentation skills, both written and verbal.
- Expertise in automation scripting; experience with one of the dynamic object-oriented programming languages like Groovy, Ruby, or Python.
- Detailed experience with AWS.

Professional Attributes You Possess:
- Excellent communication skills.
- Making decisions and solving problems involving varied levels of complexity, ambiguity, and risk.
- Providing a win-win environment to inspire, attract, and develop the best talent.
- Boosting teams' and individuals' potential to improve their engagement, performance, and career experience.
- Questioning conventional approaches, exploring alternatives, and responding to challenges with innovative solutions or services, using intuition, experimentation, expertise, and fresh perspectives.
- A problem-solving attitude and passion for solving customer problems.
- Facilitating efficiency, speed, and simplification in all actions.
- Adapting to market changes and competitive pressures, and enabling fast learning.
- Demonstrating passion for our business and a constant hunger for outstanding performance.
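As flagged above, Python is one of the scripting languages this role leans on, and the responsibilities include annual usage and capacity tracking. A hedged sketch of that kind of report, counting running EC2 instances by type with boto3 (credentials and region are assumed from the environment):

    from collections import Counter

    import boto3

    ec2 = boto3.client("ec2")
    counts = Counter()

    # Walk all pages of running instances and tally by instance type.
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                counts[instance["InstanceType"]] += 1

    for instance_type, n in counts.most_common():
        print(f"{instance_type}: {n}")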

Posted 2 days ago

Apply

10.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Source: LinkedIn

About The Job
Position Name: Senior Data & AI/ML Engineer – GCP Specialization Lead
Minimum Experience: 10+ years
Expected Date of Joining: Immediate
Work Location: Remote

Primary Skills
- GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI (a BigQuery sketch follows this listing)
- ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow
- Programming: Python & SQL
- MLOps: CI/CD for ML, model deployment & monitoring
- Infrastructure as Code: Terraform
- Data Engineering: ETL/ELT, real-time & batch pipelines
- AI/ML Tools: TensorFlow, scikit-learn, XGBoost

Secondary Skills
- GCP Certifications: Professional Data Engineer or ML Engineer
- Data Tools: Looker, Dataform, Data Catalog
- AI Governance: Model explainability, privacy, compliance (e.g., GDPR, fairness)
- GCP Partner Experience: Prior involvement in a specialization journey or partner enablement

What Makes Techjays an Inspiring Place to Work
At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises.

Be part of a company that's pushing the boundaries of digital transformation. At Techjays, you'll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.

We are seeking a Senior Data & AI/ML Engineer with deep expertise in GCP who will not only build intelligent and scalable data solutions but also champion our internal capability building and partner-level excellence. This is a high-impact role for a seasoned engineer who thrives in designing GCP-native, AI/ML-enabled data platforms. You'll play a dual role as a hands-on technical lead and a strategic enabler, helping drive our Google Cloud Data & AI/ML specialization track forward through successful implementations, reusable assets, and internal skill development.

Preferred Qualifications
- GCP Professional Certifications: Data Engineer or Machine Learning Engineer.
- Experience contributing to a GCP Partner specialization journey.
- Familiarity with Looker, Data Catalog, Dataform, or other GCP data ecosystem tools.
- Knowledge of data privacy, model explainability, and AI governance is a plus.

Key Responsibilities

Data & AI/ML Architecture
- Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage.
- Lead the development of ML pipelines, from feature engineering to model training and deployment, using Vertex AI, AI Platform, and Kubeflow Pipelines.
- Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry.
- Define and implement data governance, lineage, monitoring, and quality frameworks.

Google Cloud Partner Enablement
- Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions.
- Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP.
- Contribute to building repeatable solution accelerators in Data & AI/ML.
- Work with the leadership team to align with Google Cloud Partner Program metrics.

Team Development
- Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning.
- Organize and lead internal GCP AI/ML enablement sessions.
- Represent the company in Google partner ecosystem events, tech talks, and joint GTM engagements.

What We Offer
- Best-in-class packages.
- Paid holidays and flexible time-off policies.
- Casual dress code and a flexible working environment.
- Opportunities for professional development in an engaging, fast-paced environment.
- Medical insurance covering self and family up to 4 lakhs per person.
- A diverse and multicultural work environment.
- An innovation-driven culture with ample support and resources to succeed.
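BigQuery heads the primary skills list above. A hedged sketch of the basic Python client workflow for it (the project, dataset, and table names are invented; credentials come from application-default auth):

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    query = """
        SELECT status, COUNT(*) AS n
        FROM `my-project.analytics.orders`   -- hypothetical table
        GROUP BY status
        ORDER BY n DESC
    """
    for row in client.query(query).result():
        print(row.status, row.n)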

Posted 2 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

CS DISCO is aimed at redefining the landscape of legal technology. Ultimately, we seek to enable lawyers to achieve more efficient, data-oriented, and fact-based legal outcomes. We started our journey as a company by focusing on the crucial 'discovery' phase of the legal process. One of our offerings, Ediscovery, focuses on enabling legal teams to achieve these outcomes with incredible efficiency.

The massive data proliferation over the last decades has revealed the limitations of many legal technology products, impacting the efficiency of legal practices. At CS DISCO, we strive to provide solutions to the legal domain that are magical. Doing so requires processing large volumes of data at petabyte scale, with high availability, while maximizing performance, minimizing operations cost, and ensuring data security, privacy, and sovereignty.

Our overarching mission is to create a unified technology platform for the practice of law. We envision a suite of products focused on delivering distinctly better legal outcomes with a minimum of human toil and cost. Our technology addresses the challenges of scale in data and enables legal teams to focus on the critical tasks that necessitate human legal judgment. With a trajectory that has already seen substantial disruption in this market, our approach is underscored by a lawyer-inspired interface and a cloud-enabled technology platform, aiming for exemplary performance and cost efficiency. Thoughtful product planning and design are ingrained in our "product first" business ethos and culture, aligning with the broader objective of enhancing the practice of law through technology.

Your Impact
You will be a key contributor, responsible both for hands-on technical work to improve existing systems and for guiding long-term evolutions in this space. You'll be successful by ensuring that the product and its backend systems meet current and future functional and scaling demands. Through your contributions to both implementation and system design, you will play a pivotal role in maintaining and evolving our software systems to meet the growing demands of our business, ensuring high availability and high performance.

What You'll Do
- Engage actively in coding, code reviews, and technical discussions, ensuring high-quality output.
- Lead the design, development, and maintenance of scalable, high-performance, easily modifiable distributed systems.
- Continuously enhance system performance, focusing on meeting customer needs using best practices for designing scalable distributed systems.
- Share knowledge and mentor junior engineers, promoting a culture of technical excellence and continuous learning.
- Collaborate closely with cross-functional teams to translate business requirements into robust technical solutions.

Who You Are
- 10+ years of relevant experience in backend engineering, with a focus on building large-scale, highly responsive, fault-tolerant services in SaaS and cloud-based applications.
- Strong experience building highly reliable, highly responsive services backed by relational and non-relational data stores.
- Demonstrated expertise in designing, implementing, and maintaining (through operational observability) highly available, high-performance distributed systems.
- Proven ability to deliver well-crafted, tested, and maintainable code solutions to complex technical challenges.
- Experience with multiple software stacks; you have opinions and preferences but are not tightly coupled to a specific stack.
- You've delivered cloud-native software solutions (including designing, implementing, and operational excellence).
- Experience implementing RESTful APIs for outward-facing services and using gRPC for efficient internal service-to-service communication (a Flask sketch follows this listing).

Even Better If You Have…
- Experience designing, modifying, and operating multi-tenant systems.
- Familiarity with designing and developing from a security perspective, with security best practices in system design and development.
- Demonstrated proficiency in multiple programming languages, including but not limited to Python and Kotlin/Java.
- Strategic-level interaction with UI developers.

Some of Our Technology Stack
- Cloud Provider: AWS
- Persistence: SQL datastores, Elasticsearch, DynamoDB, PostgreSQL, Redis, and others
- Container Orchestration: ECS, Kubernetes
- Transport: gRPC, GraphQL
- Event Bus: Kafka
- Languages / Frameworks: Kotlin / Netflix DGS, Python / Flask, .NET
- IaC: Terraform

Perks of DISCO
- Open, inclusive, and fun environment
- Benefits, including medical and dental insurance
- Competitive salary plus discretionary bonus
- Opportunity to be a part of a startup that is revolutionizing the legal industry
- Growth opportunities throughout the company

About DISCO
DISCO provides a cloud-native, artificial intelligence-powered legal solution that simplifies ediscovery, legal document review, and case management for enterprises, law firms, legal services providers, and governments. Our scalable, integrated solution enables legal departments to easily collect, process, and review enterprise data that is relevant or potentially relevant to legal matters.

Are you ready to help us fulfill our mission to use technology to strengthen the rule of law? Join us!

We are an equal opportunity employer and value diversity. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
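The stack above pairs Python with Flask, and the requirements call out outward-facing RESTful APIs. A hedged, minimal sketch of that pattern; the "matters" resource is invented for illustration and is not DISCO's API:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # In-memory stand-in for a real datastore.
    MATTERS = {"m-1": {"id": "m-1", "name": "Acme v. Example"}}

    @app.route("/matters/<matter_id>")
    def get_matter(matter_id: str):
        matter = MATTERS.get(matter_id)
        if matter is None:
            return jsonify(error="not found"), 404
        return jsonify(matter)

    if __name__ == "__main__":
        app.run(port=8080)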

Posted 2 days ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

CS DISCO is aimed at redefining the landscape of legal technology. Ultimately, we seek to enable lawyers to achieve more efficient, data-oriented, and fact-based legal outcomes. We started our journey as a company by focusing on the crucial 'discovery' phase of the legal process. One of our offerings, Ediscovery, focuses on enabling legal teams to achieve these outcomes with incredible efficiency.

The massive data proliferation over the last decades has revealed the limitations of many legal technology products, impacting the efficiency of legal practices. At CS DISCO, we strive to provide solutions to the legal domain that are magical. Doing so requires processing large volumes of data at petabyte scale, with high availability, while maximizing performance, minimizing operations cost, and ensuring data security, privacy, and sovereignty.

Our overarching mission is to create a unified technology platform for the practice of law. We envision a suite of products focused on delivering distinctly better legal outcomes with a minimum of human toil and cost. Our technology addresses the challenges of scale in data and enables legal teams to focus on the critical tasks that necessitate human legal judgment. With a trajectory that has already seen substantial disruption in this market, our approach is underscored by a lawyer-inspired interface and a cloud-enabled technology platform, aiming for exemplary performance and cost efficiency. Thoughtful product planning and design are ingrained in our "product first" business ethos and culture, aligning with the broader objective of enhancing the practice of law through technology.

Your Impact
You will be a key contributor, responsible both for hands-on technical work to improve existing systems and for guiding long-term evolutions in this space. You'll be successful by ensuring that the product and its backend systems meet current and future functional and scaling demands. Through your contributions to both implementation and system design, you will play a pivotal role in maintaining and evolving our software systems to meet the growing demands of our business, ensuring high availability and high performance.

What You'll Do
- Drive the definition and evolution of our architecture using Distributed Domain-Driven Design practices.
- Provide project-embedded architecture consultation to promote best practices, design patterns, and informed buy-vs-build decisions.
- Contribute to the prioritization of platform capability improvements across features and data platforms.
- Collaborate with stakeholders to build consensus when necessary, ensuring alignment on architectural decisions.
- Contribute leadership, understanding, and lessons learned to the greater DISCO platform in these areas.

Who You Are
- 15+ years of relevant experience in backend engineering, with a focus on building high-volume distributed technical architectures in SaaS and cloud-based applications.
- Experience with "Big Data" technologies such as Elasticsearch, NoSQL stores, Kafka, columnar databases, and graph datastores (a Kafka sketch follows this listing).
- Experience leveraging common infrastructure services like enterprise message bus platforms, configuration services, toggle management systems, and observability systems such as logging and distributed tracing.
- Experience with domain-driven design concepts and practices such as bounded contexts, event storming, and specification by example.
- An understanding of how to design and develop from a security perspective.
- Experience with multiple software stacks; you have opinions and preferences but are not married to a specific stack.
- The ability to design and communicate external and internal architectural perspectives of well-encapsulated systems using architecture/design patterns and sequence diagrams.

Even Better If You Have…
- An advanced degree in computer science, software engineering, or similar.
- Experience designing, modifying, and operating multi-tenant systems.
- Experience with data query and manipulation languages such as GraphQL.
- Knowledge of API / data model design and implementation, including how to scale out, make highly available, or map to storage systems.
- An understanding of how to identify, select, and extend third-party components that give operational leverage without constraining product and engineering creativity.

Some of Our Technology Stack
- Cloud Provider: AWS
- Persistence: SQL datastores, Elasticsearch, DynamoDB, PostgreSQL, Redis, and others
- Container Orchestration: ECS, Kubernetes
- Transport: gRPC, GraphQL
- Event Bus: Kafka
- Languages / Frameworks: Kotlin / Netflix DGS, Python / Flask, .NET
- IaC: Terraform

Perks of DISCO
- Open, inclusive, and fun environment
- Benefits, including medical and dental insurance
- Competitive salary plus discretionary bonus
- Opportunity to be a part of a startup that is revolutionizing the legal industry
- Growth opportunities throughout the company

About DISCO
DISCO provides a cloud-native, artificial intelligence-powered legal solution that simplifies ediscovery, legal document review, and case management for enterprises, law firms, legal services providers, and governments. Our scalable, integrated solution enables legal departments to easily collect, process, and review enterprise data that is relevant or potentially relevant to legal matters.

Are you ready to help us fulfill our mission to use technology to strengthen the rule of law? Join us!

We are an equal opportunity employer and value diversity. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
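Kafka appears both in the "Big Data" requirements and as the stack's event bus. A hedged sketch of the basic produce-with-delivery-callback pattern using the confluent-kafka client (the broker address, topic, and event shape are invented):

    import json

    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "kafka.internal:9092"})

    def on_delivery(err, msg):
        # Delivery callback: log failures rather than dropping events silently.
        if err is not None:
            print(f"delivery failed: {err}")

    event = {"type": "document.ingested", "doc_id": "abc123"}
    producer.produce("platform-events", json.dumps(event).encode(),
                     callback=on_delivery)
    producer.flush()  # block until outstanding messages are delivered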

Posted 2 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure, and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be expected to:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS.
- Develop and maintain Helm charts for deploying containerized applications.
- Build and manage Docker images and registries for microservices.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitor and troubleshoot Kubernetes workloads and cluster health (a sketch follows this listing).
- Support CI/CD pipelines for containerized applications.
- Collaborate with development and DevOps teams to ensure seamless application delivery.
- Ensure security best practices are followed in container orchestration and cloud environments.
- Optimize performance and cost of cloud infrastructure.

The skills you'll need
You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- A solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., the ELK stack, Fluentd).
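Monitoring and troubleshooting Kubernetes workloads is one of the listed duties. A hedged sketch using the official Kubernetes Python client to flag pods that are not in a healthy phase (assumes a kubeconfig already pointing at the EKS cluster):

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # Print any pod, cluster-wide, that is neither Running nor Succeeded.
    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)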

Posted 2 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Presidio, Where Teamwork and Innovation Shape the Future

At Presidio, we're at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses, and their customers, to achieve more through innovation, automation, and intelligent insights.

The Role
Presidio is looking for an Architect to design and implement complex systems and software architectures across multiple platforms. The ideal candidate will have extensive experience in systems architecture, software engineering, cloud technologies, and team leadership. You will be responsible for translating business requirements into scalable, maintainable technical solutions and guiding development teams through implementation.

Responsibilities Include
- Design, plan, and manage cloud architectures leveraging AWS, Azure, and GCP, ensuring alignment with business objectives and industry best practices.
- Evaluate and recommend appropriate cloud services and emerging technologies to enhance system performance, scalability, and security.
- Lead the development and integration of software solutions using a variety of programming languages (Java, .NET, Python, Golang, etc.).
- Develop and maintain automated solutions for cloud provisioning, governance, and lifecycle management, utilizing Infrastructure as Code (IaC) tools such as Terraform and Ansible.
- Collaborate with cross-functional teams to gather requirements, translate business needs into technical specifications, and deliver robust cloud-native solutions.
- Guide and mentor development teams, enforcing architectural standards, coding best practices, and technical excellence.
- Provide expert consultation to internal and external stakeholders, offering recommendations on cloud migration, modernization, and optimization strategies.
- Ensure compliance with security, regulatory, and cost management policies across cloud environments.
- Stay current with industry trends, emerging technologies, and best practices, proactively introducing innovations to the organization.

Required Skills and Professional Experience
- 10+ years of experience in software architecture, including significant experience with cloud infrastructure and hyperscaler platforms (AWS, Azure, GCP).
- Deep expertise in at least one hyperscaler (AWS, Azure, or GCP), with working knowledge of the others.
- Strong programming skills in multiple languages (Java, C#, Node, JavaScript, .NET, Python, Golang, etc.).
- Experience with services/microservices development and relational databases (Postgres, MySQL, Oracle, etc.).
- Expertise in open-source technologies and NoSQL/RDBMS such as Couchbase, Elasticsearch, RabbitMQ, MongoDB, Cassandra, Redis, etc.
- Excellent verbal and written communication skills.
- Knowledge of project management tools and agile methodologies.
- Certification in AWS or Azure is preferred.

Your Future at Presidio
Joining Presidio means stepping into a culture of trailblazers: thinkers, builders, and collaborators who push the boundaries of what's possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you'll be part of a team that is shaping the future. Ready to innovate? Let's redefine what's next, together.

About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries, with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing. Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio's expert technical team develops custom applications, provides managed services, enables actionable data insights, and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.

Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component of our business success.

Recruitment agencies, please note: Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address, Presidio employees, or any other means. Presidio is not responsible for any fees related to unsolicited resumes/CVs.

Posted 2 days ago

Apply

6.0 years

0 Lacs

Greater Vadodara Area

On-site

Source: LinkedIn

Company Description
Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally.

Job Description
We are looking for a highly capable Senior Full Stack Engineer to be a core contributor in developing our suite of product offerings. If you love working on complex problems and writing clean code, you will love this role. Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze semi-structured data from different sources (20 million+ products from 500+ websites into our catalog of 500 million+ products). We help our customers discover new patterns in their data that can be leveraged to make them more competitive and increase their revenue.

Essential Functions
- Think like our customers: work with product and engineering leaders to define intuitive solutions.
- Design customer-facing UI and back-end services for various business processes.
- Develop high-performance applications by writing testable, reusable, and efficient code.
- Implement effective security protocols, data protection measures, and storage solutions.
- Improve the quality of our solutions: hold yourself and your team members accountable for writing high-quality, well-designed, maintainable software.
- Own your work: take responsibility to shepherd your projects from idea through delivery into production.
- Bring new ideas to the table: some of our best innovations originate within the team.
- Guide and mentor others on the team.

Technologies We Use
- Languages: NodeJS/NestJS/TypeScript, SQL, React/Redux, GraphQL
- Infrastructure: AWS, Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Databases: Postgres, MongoDB, Redis, Elasticsearch, Trino, Iceberg
- Streaming and Queuing: Kafka, NATS, Keda

Qualifications
- 6+ years of professional software engineering/development experience.
- Proficiency in architecting and delivering solutions within a distributed software platform.
- Full-stack engineering experience, including front-end frameworks (React/TypeScript, Redux) and backend technologies such as NodeJS/NestJS/TypeScript and GraphQL.
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs.
- Proven ability to prioritize and organize your work effectively in a highly dynamic environment.
- Proven track record of working in highly distributed, event-driven systems.
- Strong proficiency working with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Trino, etc.).
- Solid understanding of data pipelines and workflow automation: orchestration tools, scheduling, and monitoring.
- Solid understanding of ETL/ELT and OLTP/OLAP concepts.
- Solid understanding of data lakes, data warehouses, and modeling practices (Data Vault, etc.), and experience leveraging data lake solutions (e.g., AWS Glue, DBT, Trino, Iceberg).
- Ability to clean, transform, and aggregate data using SQL or scripting languages (a pandas sketch follows this listing).
- Ability to design and estimate tasks, and coordinate work with other team members during iteration planning.
- Solid understanding of AWS, Linux, and infrastructure concepts.
- Track record of lifting and challenging teammates to higher levels of achievement.
- Experience measuring, driving, and improving the software engineering process.
- Good testing habits and a strong eye for quality.
- Outstanding organizational, communication, and relationship-building skills conducive to driving consensus; able to work well in a cross-functional environment.
- Experience working in an agile team environment.
- Ownership: a sense of personal accountability and responsibility to drive execution from start to finish.
- Drive adoption of Wiser's Product Delivery organization principles across the department.

Bonus Points
- Experience with CQRS.
- Experience with Domain-Driven Design.
- Experience with C4 modeling.
- Experience working within a retail or ecommerce environment.
- Experience with AI coding agents (Windsurf, Cursor, Claude, ChatGPT, etc.) and prompt engineering.

Why Join Wiser Solutions?
- Work on an industry-leading product trusted by top retailers and brands.
- Be at the forefront of pricing intelligence and data-driven decision-making.
- A collaborative, fast-paced environment where your impact is tangible.
- Competitive compensation, benefits, and career growth opportunities.

Additional Information
EEO Statement: Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
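The qualifications above ask for the ability to clean, transform, and aggregate data with SQL or scripting languages. A hedged pandas sketch of that pattern on an invented slice of a product catalog (column names and values are illustrative only, not Wiser's schema):

    import pandas as pd

    # Hypothetical extract of scraped product data.
    df = pd.DataFrame({
        "site": ["a.com", "a.com", "b.com"],
        "category": ["shoes", "shoes", "hats"],
        "price": ["49.99", "59.99", "n/a"],
    })

    # Clean: coerce bad price strings to NaN, then drop them.
    df["price"] = pd.to_numeric(df["price"], errors="coerce")
    df = df.dropna(subset=["price"])

    # Aggregate: average price and count per site and category.
    summary = df.groupby(["site", "category"])["price"].agg(["mean", "count"])
    print(summary)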

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The Role
We are looking for intermediate full-stack software engineers who are passionate about solving business problems through innovation and engineering practices. This role will be responsible for writing code, pairing with other developers as appropriate, and decomposing acceptance criteria to understand team backlog deliverables, complexities, and risk, while working as a strong contributor on an agile team. From a technical standpoint, the Software Engineer has full-stack coding and implementation responsibilities and adheres to best-practice principles, including modern cloud-based software development, agile and scrum, code quality, and tool usage. The Software Engineer works to understand and influence software architecture, while contributing to Citi's and GFT's technical user base.

Responsibilities
- Apply depth of knowledge and expertise to all aspects of the software development lifecycle, and partner continuously with stakeholders on a regular basis.
- Develop and engineer solutions within an Agile software delivery team, working collaboratively to deliver sprint goals, write code, and participate in the broader Citi technical community and team-level Agile and Scrum processes.
- Contribute to the design, documentation, and development of world-class enterprise applications leveraging the latest technologies and software design patterns.
- Leverage technical knowledge of concepts and procedures within your own area, and basic knowledge of other areas, to resolve issues as necessary.
- Follow and contribute to defining technical and team standards.
- Collaborate with technical leadership to achieve established goals, in line with our broader technical strategy.

Required Qualifications
- 5 to 10 years of experience as a Software Engineer/Developer using Java.
- Multiple years of experience with software engineering best practices (unit testing, automation, design patterns, peer review, etc.).
- Clear understanding of data structures and object-oriented principles in Java.
- Multiple years of experience with service-oriented and microservices architectures, including REST and GraphQL implementations.
- Exposure to front-end technologies (Angular, JavaScript, TypeScript).
- Exposure to cloud-native development and container orchestration tools (serverless, Docker, Kubernetes, OpenShift, etc.).
- Multiple years of experience with frameworks like Spring Boot, Quarkus, Micronaut, or Vert.x.
- Exposure to Continuous Integration and Continuous Delivery (CI/CD) pipelines, either on-premise or public cloud (i.e., Tekton, Harness, CircleCI, Cloudbees Jenkins, etc.).
- Multiple years of experience with agile and iterative software delivery (Scrum, Kanban).
- Exposure to database technologies (RDBMS, NoSQL, Oracle, MySQL, Mongo).
- Exposure to event-driven design and architecture (i.e., Kafka, Spark, Flink, RabbitMQ, etc.).
- B.Tech/B.Engg degree or equivalent work experience.

Preferred Qualifications
- Architecture experience in building horizontally scalable, highly available, highly resilient, and low-latency applications.
- Exposure to cloud infrastructure, both on-premise and public cloud (i.e., OpenShift, AWS, etc.).
- Exposure to API management tools.
- Exposure to Infrastructure as Code tools (i.e., Terraform, CloudFormation, etc.).
- Exposure to security, observability, and monitoring (i.e., Grafana, Prometheus, Splunk, ELK, CloudWatch, etc.).
- Experience mentoring junior developers.
- Exposure to database concepts (RDBMS, NoSQL) and web-based technologies (Angular/React) is a plus.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 2 days ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance—providing clients with reliability and a strong competitive edge. Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business—it means having a dedicated ally focused on your success in today's fast-evolving digital world.

Job Title: Lead AWS Cloud Support Engineer
Experience: 8-10 Years
Location: Hyderabad/Bengaluru (Work from Office – SODC)
Work Timings: 24x7 rotational shifts

Leadership & Strategic Responsibilities:
Lead and mentor AWS cloud support teams, ensuring efficient issue resolution and best practices adherence.
Define, implement, and continuously refine cloud operations strategy to improve system reliability and performance.
Act as the primary escalation point for high-impact incidents, ensuring rapid resolution and minimal downtime.
Collaborate with senior management to align AWS infrastructure strategies with business goals.
Develop training plans and knowledge-sharing sessions to upskill the team on AWS best practices, automation, and security.
Establish and maintain strong governance processes to enforce cloud security and compliance policies.
Ensure proper documentation of cloud environments, standard operating procedures (SOPs), and best practices.
Drive innovation by exploring new AWS services and technologies to enhance cloud operations.

Technical & Operational Responsibilities:
Oversee the design, deployment, and optimization of AWS cloud infrastructure, ensuring high availability and scalability.
Implement and enforce AWS security policies, IAM configurations, and network access controls.
Lead automation and DevOps initiatives, leveraging tools such as Terraform, Ansible, Kubernetes, and CI/CD pipelines.
Manage cloud monitoring, logging, and incident response using AWS CloudWatch, Splunk, and Datadog.
Collaborate with development, DevOps, and security teams to ensure AWS best practices are followed.
Ensure cost optimization and budget control for cloud resources, identifying opportunities for cost savings.
Provide guidance on AWS architecture, infrastructure scaling, and performance tuning.

Required Skills & Qualifications:
Strong leadership experience, with a proven track record of managing and mentoring cloud support teams.
Extensive hands-on experience with AWS cloud operations, including infrastructure management, networking, and security.
Expertise in AWS services such as EC2, EKS, S3, RDS, Elastic Load Balancing, Auto Scaling, and AWS Lambda.
Deep understanding of AWS networking principles, including VPC, security groups, ACLs, and IAM policies.
Experience in incident management, including root cause analysis and post-mortem documentation.
Strong knowledge of AWS best practices, governance, and compliance standards.
Proficiency in monitoring tools such as AWS CloudWatch, Splunk, and Datadog.
Hands-on experience in automation and infrastructure-as-code (IaC) using Terraform, Ansible, and Kubernetes.
Strong understanding of CI/CD pipelines, DevOps methodologies, and cloud security principles.
Excellent problem-solving, analytical thinking, and decision-making skills.
Ability to work effectively in a multi-cultural team environment.
Fluent in English, both written and spoken.
Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications:
AWS Professional or Specialty certifications (e.g., AWS Certified Solutions Architect – Professional, AWS DevOps Engineer – Professional).
Experience in managing multi-cloud environments (Azure, GCP) is a plus.
Advanced scripting skills in Python, Shell, or PowerShell for automation.

Posted 2 days ago

Apply

6.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance—providing clients with reliability and a strong competitive edge. Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business—it means having a dedicated ally focused on your success in today's fast-evolving digital world.

Job Title: Senior AWS Cloud Support Engineer
Experience: 6-8 Years
Work Location: Hyderabad/Bengaluru (Work from Office) SODC
Shift Timings: 24x7

Key Responsibilities:
Monitor and respond to AWS cloud-based application incidents and service requests.
Provide Level 1 and Level 2 support, escalating complex issues to appropriate product teams.
Execute Standard Operating Procedures (SOPs) and runbooks for AWS environments to ensure quick resolution of incidents.
Design and optimize cloud monitoring strategies using AWS CloudWatch, Splunk, and Datadog.
Work closely with security and DevOps teams to enhance AWS environments.
Provide advanced support and optimization for AWS services, including EC2, EKS, S3, RDS, Elastic Load Balancing, Auto Scaling, and AWS Lambda.
Implement AWS networking solutions, including VPC setup, security groups, and network ACLs.
Ensure compliance with AWS best practices and security standards.
Assist in cloud architecture reviews and suggest optimizations for operational efficiency.
Drive continuous improvement initiatives for AWS performance, security, and automation.
Mentor and guide junior analysts in AWS best practices and troubleshooting.

Required Skills & Qualifications:
Strong experience with AWS native services and a solid understanding of cloud architectures.
Expertise in core AWS services such as EC2, S3, RDS, Elastic Load Balancing, Auto Scaling, and AWS Lambda.
Ability to configure AWS Identity and Access Management (IAM) for secure access management.
Strong knowledge of AWS networking principles, including VPC setup, security groups, and network ACLs (Access Control Lists).
Proficiency in ITSM tools like ServiceNow for incident tracking and SLA management.
Hands-on experience with AWS monitoring tools such as CloudWatch, Splunk, and Datadog.
Strong foundational skills in CI/CD methodologies, cybersecurity, and AWS cloud operations.
Experience in at least three of the following: Kubernetes, Harness, Kafka, Jenkins, GitHub, Docker, Ansible, Terraform.
Strong problem-solving and decision-making skills.
Ability to work in a multi-cultural team environment.
Fluent in English, both written and spoken.
Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications:
AWS Professional Certifications, such as AWS Certified Solutions Architect – Professional or AWS DevOps Engineer – Professional.
Advanced knowledge of CI/CD pipelines, cloud security, and automation frameworks.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, deliver digital marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realise their financial goals and help them to save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agrifinance, insurance, and many more industry segments. We invest in talented people and new advanced technologies to unlock the power of data and to innovate. A FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 23,300 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com

Job Description
What you'll do:
Architect, build, document, and maintain cloud standards and processes
Lead projects and new application implementations
Create new Terraform architecture and modules to provision AWS resources
Create, manage, and administer Kubernetes running on EKS
Create and modify Jenkins pipelines to support CI and automation
Work with software development teams to write and tune their application Helm charts for EKS
Performance engineering, load testing, hotspot isolation, and remediation
Guide teams on best practices in the cloud
POC new solutions and take them to production in the cloud
Configure APM, SLOs, SLAs, and alerting through Dynatrace
Configure log metrics and analysis through Splunk
Build and manage the CI deployment process for all environments
Support and enable teams to migrate from on-prem environments into AWS

You will report to a Senior Manager. This is a hybrid role based in Hyderabad, with two days a week working from the office.

Required Soft Experience
Experience leading application migrations into the cloud according to best practices, standards, and cloud-native architecture. You understand the challenges and trade-offs to be made when building and deploying systems to production.
Expertise in working with container deployment and orchestration technologies at scale, with strong knowledge of the fundamentals, including service discovery, deployments, monitoring, scheduling, and load balancing.
Experience identifying performance bottlenecks, identifying anomalous system behavior, and determining the root cause of incidents.
5+ years of experience working with APM and log aggregation tools, as well as configuring the integrations and monitoring needed to leverage these tools.
Interest in designing, analyzing, and troubleshooting large-scale distributed systems.

Qualifications
Required Technical Experience
8+ years total experience required
5+ years of expert-level experience with Terraform
5+ years of expert-level experience with AWS services: EC2, ASG, SG, ALB/NLB/WAF, ACL, Routing, Route 53, Express Connect/Transit Gateway, EC2 Image Builder, EKS, ECS, ECR, Lambda
5+ years of experience writing Jenkinsfiles and Jenkins Shared Libraries
5+ years of expert-level experience with EKS creation and administration
5+ years of expert-level experience with Kubernetes application deployment and management
Experience writing and maintaining custom application Helm charts and Helm template libraries

Additional Information
Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award-winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits
Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here

Posted 2 days ago

Apply

6.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance—providing clients with reliability and a strong competitive edge. Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business—it means having a dedicated ally focused on your success in today's fast-evolving digital world.

Job Title: AWS Support Engineer
Job Location: Hyderabad/Bangalore
Experience: 6-14 Years
Primary Skills: AWS, Terraform, CI/CD, Scripting

Required Skills:
Strong expertise in AWS cloud services, including compute, storage, and networking.
Experience managing AWS Identity and Access Management (IAM) roles and policies.
Experience in automation and infrastructure-as-code (IaC) using Terraform/Ansible, and Kubernetes.
Experience in CI/CD pipelines, DevOps methodologies, and cloud security principles.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, which includes installation, performance tuning, and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement scalable solutions while managing multiple customer groups.

What You Will Do
Support large-scale enterprise data solutions with a focus on high availability, low latency, and scalability.
Provide documentation and automation capabilities for Disaster Recovery as part of application deployment.
Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK).
Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
Knowledge of the configuration of monitoring solutions and the creation of dashboards (DPA, Datadog, BigPanda, Prometheus, Grafana, Log Analytics, ChaosSearch)

What Experience You Need
BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required
2-5 years of experience in database administration, system administration, performance tuning, and automation
1+ years of experience developing and/or administering software in public cloud
Experience in managing traditional databases like SQL Server/Oracle/Postgres/MySQL and providing 24x7 support
Experience in implementing and managing Infrastructure as Code (e.g., Terraform, Python, Chef) and source code repositories (GitHub)
Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases
Experience in designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using cloud-native GCP, Java, Python, Scala, SQL, etc.
Proficiency with continuous integration and continuous delivery tooling and practices
Cloud certification strongly preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
Automation – Uses knowledge of best practices in coding to build pipelines for build, test, and deployment of processes/components; understands technology trends and uses knowledge to identify factors that can be used to automate system/process deployments
Data/Database Management – Uses knowledge of database operations and applies engineering skills to improve resilience of products/services; designs, codes, verifies, tests, documents, and modifies programs/scripts and integrated software services; applies industry best standards and tools to achieve a well-engineered result
Operational Excellence – Prioritizes and organizes own work; monitors and measures systems against key metrics to ensure availability of systems; identifies new ways of working to make processes run smoother and faster
Technical Communication/Presentation – Explains technical information and its impacts to stakeholders and articulates the case for action; demonstrates strong written and verbal communication skills
Troubleshooting – Applies a methodical approach to routine issue definition and resolution; monitors actions to investigate and resolve problems in systems, processes, and services; determines problem fixes/remedies; assists with the implementation of agreed remedies and preventative measures; analyzes patterns and trends

Posted 2 days ago

Apply

3.0 - 6.0 years

9 - 19 Lacs

Pimpri-Chinchwad, Pune

Work from Office


We are seeking two motivated and skilled DevOps Engineers with 6-9 years of hands-on experience to help ensure the excellence of our product suite. You'll play a crucial role in managing and optimizing our infrastructure, leveraging AWS, Terraform, and Ansible. The ideal candidates will have a solid background in either Development or DevOps and are committed to maintaining hands-on expertise.

Responsibilities:
Develop, deploy, and manage scalable cloud infrastructure on AWS.
Implement Infrastructure as Code (IaC) using Terraform and manage configurations with Ansible.
Automate CI/CD pipelines to improve deployment efficiency and minimize downtime.
Monitor system health, troubleshoot issues, and optimize performance.

Requirements:
Hands-on experience with AWS, Terraform, and Ansible is mandatory.
Solid understanding of DevOps principles, best practices, and automation techniques.
Strong background in either Development or DevOps, with a proven ability to troubleshoot and solve technical issues.
Strong scripting skills in Python, Bash, or similar languages.
Experience with containerization technologies like Docker and Kubernetes.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Location: Gurgaon, India (On-site/Hybrid, Full-time)

Why Join Us?
We're a fast-growing health-tech company transforming Revenue Cycle Management (RCM) for hospitals, clinics, and physician groups. Our cloud-native platform simplifies complex billing and claims workflows so providers can focus on patient care, not paperwork. As a Senior DevOps Engineer, you'll be the architect behind the highly available, secure, and scalable infrastructure that keeps those mission-critical systems running smoothly.

What You'll Do
Own the Cloud Infrastructure
Design and automate Azure environments with Terraform/ARM, delivering self-service, repeatable deployments
Build resilient network topologies and security controls that meet HIPAA and HITRUST standards
Tune performance and cost, because every saved rupee goes back into innovation

Ship Code Faster & Safer
Create end-to-end CI/CD pipelines in Jenkins or GitLab that cut release time from hours to minutes
Embed automated tests, quality gates, and blue-green/canary strategies to achieve zero-downtime releases
Containerize microservices with Docker and orchestrate them with Kubernetes

Keep the Lights On
Roll out observability stacks (Azure Monitor, Log Analytics, Application Insights) with actionable dashboards and alerts
Author incident-response playbooks and join a low-noise on-call rotation
Conduct regular security scans and vulnerability assessments; security is everyone's job here

Automate Everything
Script in Bash, PowerShell, or Python to eliminate toil and empower developers with self-service tools
Advocate for Infrastructure-as-Code and GitOps best practices across teams

What You Bring
5+ years in DevOps/SRE roles with deep Azure expertise
Hands-on mastery of Terraform or ARM Templates, Docker, Kubernetes, and CI/CD tooling
Strong scripting skills (Python, Bash, PowerShell)
Solid understanding of networking, IAM, and security hardening
Bonus points for: healthcare/RCM experience, Azure certifications (AZ-400, AZ-104), database know-how (SQL Server, MongoDB), and familiarity with microservices and API gateways

Soft Skills We Value
Relentless problem solver who thrives in high-stakes production environments
Clear communicator, able to translate "yak-shaving" tech talk into business value for non-technical stakeholders
Collaborative team player who mentors others and welcomes feedback
Self-starter who can juggle multiple priorities and still hit aggressive deadlines

Perks & Benefits
Comprehensive medical, dental, and vision coverage for you and your family
Annual learning budget for conferences, certifications, and courses; grow on our dime
Performance bonuses tied to team and company milestones
Flexible working hours and generous leave policy
Latest MacBook Pro or high-end Windows laptop, your choice
On-site wellness programs and monthly team-building events

Powered by JazzHR

Posted 2 days ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities
Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
Work closely with QA teams to integrate test automation and ensure data quality.
Collaborate with cross-functional teams, including data scientists and business stakeholders, to align solutions with AI/ML use cases.
Document architectures, pipelines, and workflows for internal stakeholders.

Requirements
Essential Skills:
Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
Strong understanding of data modeling techniques, including CEDM, Data Vault 2.0, and Dimensional Modeling.
Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams alerts, and Azure Data Explorer (ADX).
Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Personal:
Excellent communication and interpersonal skills, with the ability to engage with teams.
Strong problem-solving, decision-making, and conflict-resolution abilities.
Proven ability to work independently and lead cross-functional teams.
Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
The candidate must have strong work ethics and trustworthiness.
Must be highly collaborative and team-oriented, with a commitment to excellence.

Preferred Skills:
Proficiency in SQL and at least one programming language (e.g., Python, Scala).
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Personal:
Demonstrates proactive thinking.
Should have strong interpersonal relations, expert business acumen, and mentoring skills.
Ability to work under stringent deadlines and demanding client conditions.
Ability to work under pressure to achieve multiple daily deadlines for client deliverables with a mature approach.

Other Relevant Information:
Bachelor's in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field.
9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Lead Technical Architect (Strategy & Optimization, Data Lake & Analytics)
Experience: 10+ years
Location: Onsite (Noida)
Reports To: Client Stakeholders / Delivery Head

Responsibilities
Manage project delivery: scope, timelines, budget, resource allocation, and risk mitigation.
Develop and maintain robust data ingestion pipelines (batch, streaming, API).
Provide architectural inputs during incident escalations and act as the final authority for RCA documentation and closure across ADF, Power BI, and Databricks.
Define and enforce data governance, metadata, and quality standards across zones.
Monitor performance, optimize data formats (e.g., Parquet), and tune for cost-efficiency.
Tune query performance for Databricks and Power BI datasets using optimization techniques (e.g., caching, BI Engine, materialized views).
Lead and mentor a team of data engineers, fostering skills in Azure services and DevOps.
Guide schema designs for new datasets and integrations aligned with Diageo's analytics strategy.
Coordinate cross-functional stakeholders (security, DevOps, business) for aligned execution.
Oversee incident and change management with SLA adherence and continuous improvement.
Serve as the governance owner for SLA compliance, IAM policies, encryption standards, and data retention strategies.
Ensure compliance with policies (RBAC, ACLs, encryption) and regulatory audits.
Perform initial data collection for RCA.
Report project status, KPIs, and business value to senior leadership.
Lead monthly and quarterly reviews, presenting insights, improvements, and roadmap alignment to Diageo stakeholders.

Required Skills
Strong architecture-level expertise in the Azure Data Platform (ADLS, ADF, Databricks, Synapse, Power BI).
Deep understanding of data lake zone structuring, data lineage, metadata governance, and compliance (e.g., GDPR, ISO).
Expert in Spark, PySpark, SQL, JSON, and automation tooling (ARM, Bicep; Terraform optional).
Capable of aligning technical designs with business KPIs and change control frameworks.
Excellent stakeholder communication, team mentoring, and leadership capabilities.

Posted 2 days ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


JOB_POSTING-3-71493-1

Job Description

Role Title: AVP, Enterprise Logging & Observability (L11)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview
Splunk is Synchrony's enterprise logging solution. Splunk searches and indexes log files and helps derive insights from the data. The primary goal is to ingest massive datasets from disparate sources and employ advanced analytics to automate operations and improve data analysis. It also offers predictive analytics and unified monitoring for applications, services, and infrastructure. Many applications forward data to the Splunk logging solution. The Splunk team, including Engineering, Development, Operations, Onboarding, and Monitoring, maintains Splunk and provides solutions to teams across Synchrony.

Role Summary/Purpose
The AVP, Enterprise Logging & Observability is a key leadership role responsible for driving the strategic vision, roadmap, and development of the organization's centralized logging and observability platform. The role supports multiple enterprise initiatives including applications, security monitoring, compliance reporting, operational insights, and platform health tracking. It leads platform development using Agile methodology, manages stakeholder priorities, ensures logging standards across applications and infrastructure, and supports security initiatives. This position bridges the gap between technology teams (applications, platforms, cloud, cybersecurity, infrastructure, DevOps, governance, audit, and risk) and business partners, owning and evolving the logging ecosystem to support real-time insights, compliance monitoring, and operational excellence.

Key Responsibilities

Splunk Development & Platform Management
Lead and coordinate development activities, ingestion pipeline enhancements, onboarding frameworks, and alerting solutions.
Collaborate with engineering, operations, and Splunk admins to ensure scalability, performance, and reliability of the platform.
Establish governance controls for source naming, indexing strategies, retention, access controls, and audit readiness.

Splunk ITSI Implementation & Management
Develop and configure ITSI services, entities, and correlation searches.
Implement notable events aggregation policies and automate response actions.
Fine-tune ITSI performance by optimizing data models, summary indexing, and saved searches.
Help identify patterns and anomalies in logs and metrics.
Develop ML models for anomaly detection, capacity planning, and predictive analytics.
Utilize Splunk MLTK to build and train models for IT operations monitoring.

Security & Compliance Enablement
Partner with InfoSec, Risk, and Compliance to align logging practices with regulations (e.g., PCI-DSS, GDPR, RBI).
Enable visibility for encryption events, access anomalies, secrets management, and audit trails.
Support security control mapping and automation through observability.

Stakeholder Engagement
Act as a strategic advisor and point of contact for business units, application, infrastructure, and security stakeholders, and business teams leveraging Splunk.
Conduct stakeholder workshops, backlog grooming, and sprint reviews to ensure alignment.
Maintain clear and timely communications across all levels of the organization.

Process & Governance
Drive logging and observability governance standards, including naming conventions, access controls, and data retention policies.
Lead initiatives for process improvement in log ingestion, normalization, and compliance readiness.
Ensure alignment with enterprise architecture and data classification models.
Lead improvements in logging onboarding lifecycle time, automation pipelines, and self-service ingestion tools.
Mentor junior team members and guide engineering teams on secure, standardized logging practices.

Required Skills/Knowledge
Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology
Minimum of 3+ years of experience leading a development team, or an equivalent role in observability, logging, or security platforms
Splunk Subject Matter Expert (SME)
Strong hands-on understanding of Splunk architecture, pipelines, dashboards, alerting, data ingestion, search optimization, and enterprise-scale operations
Experience supporting security use cases, encryption visibility, secrets management, and compliance logging
Splunk development and platform management, security and compliance enablement, stakeholder engagement, and process and governance
Experience with Splunk Premium Apps, minimally ITSI and Enterprise Security (ES)
Experience with data streaming platforms and tools like Cribl and Splunk Edge Processor
Proven ability to work in Agile environments using tools such as JIRA or JIRA Align
Strong communication, leadership, and stakeholder management skills
Familiarity with security, risk, and compliance standards relevant to BFSI
Proven experience leading product development teams and managing cross-functional initiatives using Agile methods
Strong knowledge and hands-on experience with Splunk Enterprise/Splunk Cloud
Design and implement Splunk ITSI solutions for proactive monitoring and service health tracking
Develop KPIs, Services, Glass Tables, Entities, Deep Dives, and Notable Events to improve service reliability for users across the firm
Develop scripts (Python, JavaScript, etc.) as needed in support of data collection or integration
Develop new applications leveraging Splunk's analytic and machine learning tools to maximize performance, availability, and security, improving business insight and operations
Support senior engineers in analyzing system issues and performing root cause analysis (RCA)

Desired Skills/Knowledge
Deep knowledge of Splunk development, data ingestion, search optimization, alerting, dashboarding, and enterprise-scale operations
Exposure to SIEM integration, security orchestration, or SOAR platforms
Knowledge of cloud-native observability (e.g., AWS/GCP/Azure logging)
Experience in BFSI or regulated industries with high-volume data handling
Familiarity with CI/CD pipelines, DevSecOps integration, and cloud-native logging
Working knowledge of scripting or automation (e.g., Python, Terraform, Ansible) for observability tooling
Splunk certifications (Power User, Admin, Architect, or equivalent) will be an advantage
Awareness of data classification, retention, and masking/anonymization strategies
Awareness of integration between Splunk and ITSM or incident management tools (e.g., ServiceNow, PagerDuty)
Experience with version control tools: Git, Bitbucket

Eligibility Criteria
Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology
Minimum of 3+ years of experience leading a development team, or an equivalent role in observability, logging, or security platforms
Demonstrated success in managing large-scale logging platforms in regulated environments
Excellent communication, leadership, and cross-functional collaboration skills
Experience with scripting languages such as Python, Bash, or PowerShell for automation and integration purposes
Prior experience in large-scale, security-driven logging or observability platform development
Excellent problem-solving skills and the ability to work independently or as part of a team
Strong communication and interpersonal skills to interact effectively with team members and stakeholders
Knowledge of IT Service Management (ITSM) and monitoring tools
Knowledge of other data analytics tools or platforms is a plus

WORK TIMINGS: 01:00 PM to 10:00 PM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, PIP)
Only L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible
L09+ employees can apply

Level/Grade: 11
Job Family Group: Information Technology

Posted 2 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium, illustrated in the first sketch after this list)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium, illustrated in the second sketch after this list)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
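
Several of the questions above lend themselves to a concrete example. The sketch below is a minimal, self-contained Terraform configuration that touches on the key components of a configuration (terraform block, provider, variable, resource, output), remote state storage via an S3 backend, and sensitive data handling. The bucket, lock table, region, and parameter names are placeholders invented for illustration, not values from any posting above.

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote state storage: the state file lives in S3 and a DynamoDB table
  # provides state locking. Both names are hypothetical placeholders.
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "envs/dev/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "example-terraform-locks"
    encrypt        = true
  }
}

# The provider block tells Terraform which plugin to use and how to reach
# the target platform.
provider "aws" {
  region = "ap-south-1"
}

# Sensitive input: marking it sensitive redacts the value from plan/apply
# output. Supply it via the TF_VAR_db_password environment variable or a
# secrets manager, never hard-coded in version control.
variable "db_password" {
  type      = string
  sensitive = true
}

# A resource that consumes the secret; the parameter path is hypothetical.
resource "aws_ssm_parameter" "db_password" {
  name  = "/example/app/db_password"
  type  = "SecureString"
  value = var.db_password
}

output "parameter_arn" {
  value = aws_ssm_parameter.db_password.arn
}
```

The usual workflow is terraform init (downloads the provider and configures the backend), terraform plan (shows the proposed changes without applying them), and terraform apply (executes the plan), which is also the distinction the plan-versus-apply question is probing.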

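A second sketch shows how modules and workspaces typically fit together. It assumes a hypothetical local module at ./modules/network that exposes name_prefix and cidr_block inputs and a vpc_id output; none of these names come from the postings above.

```hcl
# Reusable module call: the source could equally be a registry address
# such as "terraform-aws-modules/vpc/aws" pinned to a version.
module "network" {
  source = "./modules/network"

  # terraform.workspace lets one configuration serve several environments
  # (default, dev, prod) with distinct resource names and no code changes.
  name_prefix = "app-${terraform.workspace}"
  cidr_block  = var.cidr_block
}

variable "cidr_block" {
  type    = string
  default = "10.0.0.0/16"
}

output "vpc_id" {
  value = module.network.vpc_id
}
```

Switching environments is then a matter of terraform workspace new dev (or terraform workspace select dev) followed by the normal plan/apply cycle.
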
Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
