
17543 Terraform Jobs - Page 27

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Purpose:
The Service Delivery Manager (SDM) in the Global Cloud Excellence Team ensures the smooth and efficient delivery of services to clients. The SDM manages the overall service delivery process, maintains strong customer relationships, and ensures that service level agreements (SLAs) are met. You will serve as an SDM leading our 24/7 Cloud Operations Team, overseeing the daily operations of our cloud infrastructure, software applications, and platform to ensure high availability, performance, and security. The role requires strong team management skills, technical expertise in cloud technologies, and a commitment to operational excellence. You will act as the bridge between development and operations by governing the implementation of continuous integration and continuous deployment (CI/CD) pipelines, optimizing cloud infrastructure, and enhancing system performance and security, enabling seamless collaboration between development and operations teams and improving the speed and quality of software delivery and its operations.

Reporting Manager: Head of ZDP India. This is a manager role.

Roles & Responsibilities:
- Team Leadership: Manage and mentor a team of cloud operations engineers and support staff. Foster a culture of collaboration, continuous improvement, and accountability within the team.
- Operational Oversight: Ensure the 24/7 availability of cloud services, platform, and infrastructure. Monitor system performance and implement proactive measures to prevent downtime. Develop and enforce operational policies and procedures to enhance service delivery.
- Incident Management: Lead incident response efforts, ensuring timely resolution of issues and minimizing impact on services. Conduct post-incident reviews to identify root causes and implement corrective actions. Enable ITIL processes.
- Capacity Planning: Analyze current and future capacity needs to ensure optimal resource allocation. Collaborate with operations teams to plan and execute cloud infrastructure upgrades and expansions.
- Performance Metrics: Define and track key performance indicators (KPIs) for cloud operations. Prepare regular reports for senior management on operational performance and service levels.
- SLA Management: Ensure compliance with agreed SLAs; determine demand for IT services; act as the first escalation point for customers regarding service operations; ensure compliance with and continuous improvement of agreed processes; create monthly service level reports covering the status of supported services and agreed KPIs; conduct regular service review meetings with the customer.
- Collaboration: Work closely with development, operations, security, and product teams to align operations with business objectives. Participate in cross-functional projects to improve overall service delivery and customer satisfaction.
- Budget Management: Assist in developing and managing the operations budget. Identify cost-saving opportunities while maintaining service quality.

Qualifications & Work Experience:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 8-12 years in cloud operations, IT operations, or a related field. Proven experience managing 24/7 operations teams. Willing to provide on-call support as and when needed.
- Technical Skills: Strong knowledge of cloud platforms (e.g., Azure, AWS, and Google Cloud). Familiarity with infrastructure as code (IaC) tools and practices (e.g., Terraform, Bicep, ARM). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Datadog). Experience with ticketing tools (e.g., ServiceNow, JIRA, ADO).
- Soft Skills: Excellent team management skills. A strong focus on customers and results. Strong problem-solving abilities and attention to detail. Effective communication skills, both verbal and written.

ZEISS in India:
ZEISS in India is headquartered in Bengaluru and present in the fields of Industrial Quality Solutions, Research Microscopy Solutions, Medical Technology, Vision Care, and Sports & Cine Optics. ZEISS India has 3 production facilities, an R&D center, Global IT services, and about 40 Sales & Service offices in almost all Tier I and Tier II cities in India. With 2,200+ employees and continued investment over 25 years in India, ZEISS' success story in India continues at a rapid pace. Further information: ZEISS India (https://www.zeiss.co.in/corporate/home.html)

Posted 6 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
- Languages: Python, SQL, Bash
- ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
- GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
- Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
- Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
- Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
- MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
- Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
- Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
- Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
- Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use.
- Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
- Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
- Manage the technical roadmap, sprint cadence, and a team of 3-5 AI engineers; coach on best practices.
- Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
- Evaluate emerging GenAI and agentic tools; run proofs of concept and guide build-vs-buy decisions.

Qualifications:
- 10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
- Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
- Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
- Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
- Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
- Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
- Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
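To illustrate the core of a RAG pipeline the listing describes (retrieve relevant context from a vector store, then augment the LLM prompt), here is a minimal self-contained sketch. The toy three-dimensional "embeddings" and document texts are purely illustrative; a production system would use a real embedding model and a vector database such as FAISS or Pinecone:

```python
import math

# Toy "embeddings": in a real RAG pipeline these would come from an
# embedding model and live in a vector store (FAISS, Pinecone, etc.).
DOCS = {
    "doc1": ([1.0, 0.0, 0.5], "Terraform provisions cloud infrastructure."),
    "doc2": ([0.0, 1.0, 0.5], "LangChain orchestrates LLM calls."),
    "doc3": ([0.9, 0.1, 0.4], "IaC tools manage infrastructure declaratively."),
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(DOCS.values(), key=lambda dv: cosine(query_vec, dv[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query, query_vec):
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The final prompt string would then be sent to the LLM of choice; the retrieval step is what grounds the model's answer in the proprietary knowledge base.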

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote

Company Description:
North Hires is a premier consulting firm specializing in Custom Software Development, Recruitment, Sourcing, and Executive Search services. Our team of experienced professionals delivers recruitment solutions tailored to each client's unique needs. We also provide services such as Custom Software Development, Recruitment Process Outsourcing, Virtual Employees/Agents, and Digital Marketing Solutions to empower businesses to thrive.

Role Description:
This is a full-time remote role for an AWS Cloud Manager at North Hires. The AWS Cloud Manager will be responsible for managing the AWS cloud infrastructure, providing technical support, troubleshooting system issues, and leading and managing a team. This role is located in Hyderabad with the option for some work-from-home flexibility.

Qualifications:
Information Technology and technical support skills; troubleshooting expertise; team leadership and team management abilities; experience with AWS and cloud technologies; strong problem-solving skills; excellent communication and interpersonal skills; a Bachelor's degree in Computer Science or a related field; and extensive experience with the AWS toolkit and with Kubernetes, Docker, and Terraform.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position should have hands-on experience in Site Reliability Engineering and DevOps, along with expertise in Kubernetes, Docker, Terraform, and CI/CD. As a Level M professional, you will work US EST hours, with Pune as the preferred location.

Your responsibilities will include designing, developing, and deploying software systems and infrastructure to enhance reliability, scalability, and performance. You will be expected to identify manual processes that can be automated to improve operational efficiency. Implementing monitoring and alerting systems to proactively identify and address issues will be a key part of your role. Collaborating with customers on architecture reviews and developing new features to enhance the reliability and scalability of the platform will also be part of your duties.

Working closely with various application teams to understand platform issues and design solutions for monitoring and issue resolution will be essential. You will be responsible for designing recovery and resiliency strategies for different applications, and for identifying opportunities for technological improvements and the need for new tools to support capacity planning, disaster recovery, and resiliency. Additionally, you will architect and implement packages/modules that can serve as blueprints for implementation by different application teams.
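The "monitoring and alerting systems to proactively identify issues" duty usually boils down to rules like the one sketched below: fire only after several consecutive threshold breaches, so one-off spikes don't page anyone. This is an illustrative pattern, not any specific monitoring product's API:

```python
from collections import deque

class ThresholdAlert:
    """Fire an alert only after `window` consecutive breaches,
    suppressing noisy one-off spikes (a common SRE alerting pattern)."""

    def __init__(self, threshold, window=3):
        self.threshold = threshold
        self.window = window
        self.recent = deque(maxlen=window)

    def observe(self, value):
        """Record one metric sample; return True if the alert should fire."""
        self.recent.append(value)
        return (len(self.recent) == self.window
                and all(v > self.threshold for v in self.recent))

# Simulated CPU-percentage samples: the lone dip to 85 resets the streak,
# so the alert fires only on the final sample.
alert = ThresholdAlert(threshold=90.0, window=3)
states = [alert.observe(v) for v in [95, 96, 85, 97, 98, 99]]
```

Tools like Prometheus express the same idea declaratively (e.g. a `for:` duration on an alerting rule); the sliding-window logic is what that duration implements.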

Posted 6 days ago

Apply

2.0 years

5 - 18 Lacs

Udaipur, Rajasthan

On-site

Job Title: DevOps Engineer (AWS/Azure)
Location: Udaipur, Rajasthan
Employment Type: Full-time

Job Summary:
We are seeking a skilled and proactive DevOps Engineer with hands-on experience in AWS and/or Azure to join our team. In this role, you will design, implement, and maintain scalable, secure, and highly available cloud infrastructure and CI/CD pipelines, enabling rapid and reliable software delivery.

Key Responsibilities:
- Develop, maintain, and improve CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
- Deploy and manage infrastructure using Infrastructure as Code (IaC) tools such as Terraform, AWS CloudFormation, or Azure Bicep/ARM templates.
- Automate system configuration and management using tools like Ansible, Chef, or Puppet.
- Monitor, troubleshoot, and optimize infrastructure performance using tools like CloudWatch, Azure Monitor, Prometheus, and Grafana.
- Implement robust security, compliance, and backup strategies across cloud infrastructure.
- Collaborate with development teams to ensure smooth and efficient software delivery workflows.
- Manage containerized applications using Docker and orchestrate them with Kubernetes (EKS/AKS).
- Ensure high availability and disaster recovery in production environments.
- Stay up to date with the latest DevOps trends, tools, and best practices.

Required Skills & Qualifications:
- 2+ years of experience as a DevOps Engineer or in a similar role
- Strong understanding of Linux/Unix systems and scripting
- Experience with containerization and container orchestration
- Expertise in one or more IaC tools: Terraform, CloudFormation, or ARM/Bicep
- Knowledge of CI/CD pipelines and automation frameworks
- Familiarity with Git, GitOps, and version control workflows

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,800,000.00 per year
Schedule: Morning shift
Location: Udaipur City, Rajasthan (Preferred)
Work Location: In person
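A CI/CD pipeline like those named above (Jenkins, GitHub Actions, GitLab CI) is essentially an ordered list of stages that short-circuits on the first failure, so a broken build never reaches the deploy stage. A minimal sketch of that control flow, with hypothetical stage names standing in for real lint/test/build/deploy commands:

```python
def run_pipeline(stages):
    """Run CI/CD stages in order; stop at the first failure, mirroring
    how Jenkins or GitHub Actions aborts the rest of a failed workflow."""
    results = {}
    for name, stage in stages:
        ok = stage()
        results[name] = "passed" if ok else "failed"
        if not ok:
            break  # later stages (e.g. deploy) never run on a failed build
    return results

# Hypothetical stages for illustration -- real ones would shell out to
# linters, test runners, `docker build`, `terraform apply`, etc.
stages = [
    ("lint", lambda: True),
    ("test", lambda: True),
    ("build", lambda: False),   # simulate a failing image build
    ("deploy", lambda: True),
]
result = run_pipeline(stages)
```

Note that `deploy` does not appear in the results at all: the short-circuit is what makes CI/CD a safety gate rather than just a script runner.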

Posted 6 days ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Summary:
We are seeking a highly experienced Lead DevOps Engineer to drive the strategy, design, and implementation of our DevOps infrastructure across cloud and on-premises environments. This role requires strong leadership and hands-on expertise in AWS, Azure DevOps, and Google Cloud Platform (GCP), along with deep experience in automation, CI/CD, container orchestration, and system scalability. As a technical leader, you will mentor DevOps engineers, collaborate with cross-functional teams, and establish best practices to ensure reliable, secure, and scalable infrastructure that supports our product lifecycle.

Key Responsibilities:
- Oversee the design, implementation, and maintenance of scalable, secure, and cost-effective infrastructure in cloud and on-premises environments.
- Implement and manage infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Manage and optimize CI/CD pipelines to accelerate development cycles and ensure seamless deployments.
- Implement robust monitoring solutions to proactively identify and resolve issues.
- Lead incident response efforts to minimize downtime and impact on clients.
- Develop and implement automation strategies to streamline deployment, monitoring, and maintenance processes.
- Mentor and guide junior and mid-level DevOps engineers, fostering a culture of learning and accountability.
- Collaborate with software developers, quality assurance engineers, and IT professionals to guarantee smooth deployment, automation, and management of software infrastructure.
- Ensure high standards for security, compliance, and data protection across the infrastructure.
- Stay up to date with industry trends and emerging technologies, assessing their potential impact and recommending adoption where appropriate.
- Maintain comprehensive documentation of systems, processes, and procedures to support knowledge sharing and team efficiency.

Required Skills and Qualifications:
- 6+ years of hands-on experience in DevOps, infrastructure, or related roles
- Strong knowledge of cloud platforms including Azure, AWS, and GCP
- Proven experience in containerization using Docker and Kubernetes
- Advanced knowledge of Linux systems and networking
- Strong experience with CI/CD tools like Jenkins, GitHub Actions, Bitbucket Pipelines, and TeamCity
- Solid experience designing, implementing, and maintaining CI/CD pipelines for automated build, test, and deployment processes
- Deep understanding of automation, scripting, and Infrastructure as Code (IaC) with Terraform and Ansible
- Strong problem-solving and troubleshooting skills, with the ability to identify root causes and implement effective solutions
- Excellent leadership, team-building, and communication skills
- Bachelor's degree in Computer Science, IT, Engineering, or equivalent practical experience

Preferred Skills and Qualifications:
- Relevant certifications (e.g., AWS Certified DevOps Engineer – Professional, GCP DevOps Engineer, or Azure Solutions Architect)
- Experience working in fast-paced product environments
- Knowledge of security best practices and compliance standards

Key Competencies:
- Leadership and mentoring capabilities in technical teams
- Strong strategic thinking and decision-making skills
- Ability to manage multiple priorities in a deadline-driven environment
- Passion for innovation, automation, and continuous improvement
- Clear, proactive communication and collaboration across teams

Why Join Us:
At Admaren, we are transforming the maritime domain with state-of-the-art technology. As a Lead DevOps Engineer, you will be at the helm of infrastructure innovation, driving mission-critical systems that support global operations. You'll have the autonomy to implement cutting-edge practices, influence engineering culture, and grow with a team committed to excellence. Join us to lead from the front and shape the future of maritime software systems.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Technology: Cloud Infrastructure Engineer with Azure, Kubernetes, Terraform
Experience: 5+ Years
Location: 100% Remote
Duration: 6 months
Cost: 80K per month
Working Time: 4:30 PM to 12:30 AM IST or 7:30 PM to 3:30 AM IST

PRIMARY SKILLS
• 5+ years of experience in cloud engineering, infrastructure architecture, or platform engineering roles.
• Experience with Kubernetes operations and architecture in production environments.
• Strong knowledge of cloud IaaS and PaaS services and how to design reliable solutions leveraging them (e.g., VMs, load balancers, managed databases, identity platforms, messaging queues).
• Advanced proficiency in Terraform and Git-based infrastructure workflows.
• Experience building and maintaining CI/CD pipelines.
• Solid scripting abilities in Python, Bash, or PowerShell.
• A strong understanding of infrastructure security, governance, and identity best practices.
• Ability to work collaboratively across engineering teams.

SECONDARY SKILLS
• Familiarity with GitOps tooling.
• Experience with policy-as-code and container security best practices.
• Experience with Microsoft Power Platform (Dynamics 365).
• Google Cloud knowledge.

Posted 6 days ago

Apply

0.0 - 5.0 years

10 - 16 Lacs

Chennai, Tamil Nadu

On-site

Position Overview:
The Cloud Platform Engineer will be responsible for developing and maintaining Terraform modules and patterns for AWS and Azure. These modules and patterns will be used for platform landing zones, application landing zones, and application infrastructure deployments. The role involves managing the lifecycle of these patterns, including releases, bug fixes, feature integrations, and updates to test cases.

Key Responsibilities:
- Develop and release Terraform modules, landing zones, and patterns for AWS and Azure.
- Provide lifecycle support for patterns, including bug fixing and maintenance.
- Integrate new features into existing patterns to enhance functionality.
- Release updated and new patterns to ensure they meet current requirements.
- Update and maintain test cases for patterns to ensure reliability and performance.

Qualifications:
- 5+ years of AWS/Azure cloud migration experience.
- Proficiency in cloud compute (EC2, EKS, Azure VMs, AKS) and storage (S3, EBS, EFS, Azure Blob, Azure Managed Disks, Azure Files).
- Strong knowledge of AWS and Azure cloud services.
- Expert in Terraform.
- AWS/Azure certification preferred.

Mandatory Skills: Cloud AWS DevOps (migration experience: minimum 5 years)
Relevant Experience: 5-8 Years
Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,036,004.19 - ₹1,677,326.17 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday, Morning shift
Supplemental Pay: Performance bonus, Yearly bonus
Experience: DevOps: 10 years (Required); AWS: 5 years (Required); Azure: 5 years (Required); Terraform: 5 years (Required); System migration: 5 years (Required)
Location: Chennai, Tamil Nadu (Preferred)

Posted 6 days ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Quant Engineer
Location: Remote

Job Description:
Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus. A good understanding of maths and finance is needed, as the role interacts with quant devs, analysts, and traders; familiarity with concepts such as PnL, greeks, volatility, partial derivatives, and the normal distribution is expected. Financial and/or trading exposure is nice to have, particularly in energy commodities.

Responsibilities:
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring, and backtesting are in place.
- Translate trader or quant analyst needs into software product requirements.
- Prototype and implement data pipelines.
- Coordinate closely with analysts and quants during development of models, acting as technical support and coach.
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
- Transform proofs of concept into larger deployable products within Shell and outside.
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities.
- Ensure that documentation and explanations of analysis or modelling results are fit for purpose for both technical and non-technical audiences.
- Mentor and coach other teammates who are upskilling in quant engineering.

Professional Qualifications & Skills:
- Graduation/postgraduation/PhD with 8+ years' work experience as a software developer or data scientist.
- Degree in STEM: computer science, engineering, mathematics, or a relevant field of applied mathematics.
- Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment.

Required Skills:
- Expert in core Python and the Python scientific stack (incl. pandas, NumPy, SciPy, stats), plus a second strongly typed language (e.g., C#, C++, Rust, or Java).
- Expert in application design, security, release, testing, and packaging.
- Mastery of SQL/NoSQL databases and data pipeline orchestration tools.
- Mastery of concurrent/distributed programming and performance optimisation methods.
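The greeks and normal-distribution familiarity asked for above can be made concrete with a small example: the Black-Scholes delta of a European call, which is the partial derivative of the option price with respect to spot and equals N(d1). This sketch uses only the standard library (the normal CDF via `math.erf`); it is an illustration of the concept, not any firm's pricing code:

```python
import math

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_delta(S, K, r, sigma, T):
    """Black-Scholes delta of a European call: N(d1), the sensitivity
    of the option price to a small move in the spot price S."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

# An at-the-money call (S == K) has delta slightly above 0.5:
# here d1 = 0.1, so delta = N(0.1) ~ 0.54.
delta = call_delta(S=100.0, K=100.0, r=0.0, sigma=0.2, T=1.0)
```

In a production quant stack the same quantity would typically come from SciPy (`scipy.stats.norm.cdf`) or a pricing library, with the hand-rolled CDF reserved for dependency-free contexts.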

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, and utilizing Python and SQL for data collection, enrichment, and cleansing. Additionally, you will create data APIs in Python Flask containers, leverage AI for analytics, and build data visualizations and dashboards using Tableau. Your expertise in infrastructure as code (Terraform) and automated deployment processes will be vital for optimizing solutions for cost and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications.

You will also be expected to stay updated on the latest technical advancements, particularly in GenAI, and to recommend changes based on the evolving landscape of data engineering and AI. Your ability to embrace change, share knowledge with team members, and continuously learn will be essential for success in this role.

To qualify for this position, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with various relational and NoSQL databases are required. Strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques is crucial, along with excellent problem-solving skills and attention to detail. Experience with cloud-based data storage and processing platforms like AWS, GCP, or Azure is preferred.

Bonus skills include GenAI prompt engineering, proficiency in machine learning technologies like TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker. Familiarity with Pandas, spaCy, and other NLP libraries, agile development methodologies, and optimizing data pipelines for cost and performance is also desirable.

Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders; you should be able to translate complex ideas into simple examples to ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools.

The role is based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid schedule: in the office on Tuesdays, Wednesdays, and Thursdays, and working from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience mentoring junior team members and collaborating with peers, we encourage you to apply for this exciting opportunity.
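The "data collection, enrichment, and cleansing" work described above typically means normalising semi-structured records before they enter the warehouse. A minimal stdlib-only sketch (field names and rules are invented for illustration; real pipelines would use pandas or Spark with schemas driven by the data model):

```python
import json

def cleanse(records):
    """Normalise a batch of semi-structured records: trim strings,
    coerce numeric fields, and drop rows missing required keys."""
    cleaned = []
    for rec in records:
        if "id" not in rec or "amount" not in rec:
            continue  # drop incomplete rows rather than load bad data
        cleaned.append({
            "id": str(rec["id"]).strip(),
            "amount": float(rec["amount"]),          # coerce "3.5" -> 3.5
            "note": (rec.get("note") or "").strip().lower(),
        })
    return cleaned

# One messy-but-recoverable row and one incomplete row (no "id"):
raw = json.loads('[{"id": " A1 ", "amount": "3.5", "note": " OK "},'
                 ' {"amount": 2}]')
rows = cleanse(raw)
```

The same trim/coerce/drop logic generalises directly to a pandas transformation or a Flask endpoint that validates payloads before persisting them.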

Posted 6 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Role:
We are looking for a passionate and skilled Full Stack Developer with strong experience in React.js, Node.js, and AWS Lambda to build a custom enterprise platform that interfaces with a suite of SDLC tools. The platform will streamline tool administration, automate provisioning and deprovisioning of access, manage licenses, and offer centralized dashboards for governance and monitoring.

Required Skills & Qualifications:
- 4-6 years of hands-on experience as a Full Stack Developer
- Proficient in React.js and component-based front-end architecture
- Strong backend experience with Node.js and RESTful API development
- Solid experience with AWS Lambda, API Gateway, DynamoDB, S3, etc.
- Prior experience integrating and automating workflows for SDLC tools such as JIRA, Jenkins, GitLab, Bitbucket, GitHub, and SonarQube
- Understanding of OAuth2, SSO, and API key-based authentication
- Familiarity with CI/CD pipelines, microservices, and event-driven architectures
- Strong knowledge of Git and modern development practices
- Good problem-solving skills and the ability to work independently

Nice To Have:
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation)
- Experience with AWS EventBridge, Step Functions, or other serverless orchestration tools
- Knowledge of enterprise-grade authentication (LDAP, SAML, Okta)
- Familiarity with monitoring/logging tools like CloudWatch, ELK, or Datadog
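An API Gateway-backed Lambda of the kind this role builds receives a proxy event and returns a status code plus a JSON body. A minimal sketch of that contract (shown in Python for brevity, though the listing's stack is Node.js; the event/response shape is the same in either runtime, and the `user` path parameter and message are illustrative):

```python
import json

def lambda_handler(event, context):
    """Minimal API Gateway proxy-style handler: read a path parameter
    and return an HTTP status code with a JSON body."""
    user = (event.get("pathParameters") or {}).get("user", "anonymous")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"access granted for {user}"}),
    }

# Simulate the event API Gateway would deliver for GET /access/alice:
resp = lambda_handler({"pathParameters": {"user": "alice"}}, None)
```

A real access-provisioning endpoint would call out to the SDLC tools' APIs (JIRA, GitLab, etc.) before responding, but the handler skeleton stays this small.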

Posted 6 days ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra

On-site

COMPANY OVERVIEW
Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY
As a DevOps Engineer at Domo in Pune, India, you will play a crucial role in designing, implementing, and maintaining scalable and reliable infrastructure to support our data-driven platform. You will collaborate closely with engineering, product, and operations teams to streamline deployment pipelines, improve system reliability, and optimize cloud environments. If you thrive in a fast-paced environment and have a passion for automation, optimization, and software development, we want to hear from you!

KEY RESPONSIBILITIES
- Design, build, and maintain scalable infrastructure using cloud platforms (AWS, GCP, or Azure)
- Develop and manage CI/CD pipelines to enable rapid and reliable deployments
- Automate provisioning, configuration, and management of infrastructure using tools like Terraform, Ansible, Salt, or similar
- Develop and maintain tooling to automate, facilitate, and monitor operational tasks
- Monitor system health and performance, troubleshoot issues, and implement proactive solutions
- Collaborate with software engineers to improve service scalability, availability, and security
- Lead incident response and post-mortem analysis to ensure service reliability
- Drive DevOps best practices and continuous improvement initiatives across teams

JOB REQUIREMENTS
- 3+ years of experience in DevOps, Site Reliability Engineering, or infrastructure engineering roles
- 1+ years working in a SaaS environment
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Expertise in cloud platforms such as AWS, GCP, or Azure; certifications preferred
- Strong experience with infrastructure as code (Terraform, CloudFormation, etc.)
- Proficiency in automation and configuration management tools such as Ansible and Salt
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes)
- Solid understanding of CI/CD tools (Jenkins, GitHub Actions, etc.) and processes
- Strong scripting skills in Python, Bash, or similar languages
- Experience developing applications or tools using Java, Python, or similar programming languages
- Familiarity with Linux system administration and troubleshooting
- Experience with version control systems, particularly GitHub
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack, Datadog)
- Knowledge of networking, security best practices, and cost optimization on cloud platforms
- Excellent communication and collaboration skills

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS
- Medical insurance provided
- Maternity and paternity leave policies
- Baby bucks: a cash allowance to spend on anything for every newborn or adopted child
- "Haute Mama": a cash allowance for a maternity wardrobe (women employees only)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo Meal Pass
- Health and Wellness Benefit
- One-time Technology Benefit: a cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leave up to 3 days
- Bereavement leave up to 5 days

Domo is an equal opportunity employer. #LI-PD1 #LI-Hybrid

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

IT-IS, Pune
Posted On: 31 Jul 2025 | End Date: 31 Dec 2025 | Required Experience: 8-12 Years
Role: Senior Automation Engineer | Employment Type: Full Time Employee
Company: New Vision Softcom & Consultancy Pvt. Ltd (NewVision)
Department/Practice: IT-IS | Region: APAC | Country: India | Base Office Location: Pune
Working Model: Hybrid | Weekly Off: Pune Office Standard | State: Maharashtra
Skills: Automation, PowerShell Scripting, Terraform, MS Azure & M365, Office 365 (Exchange Online), Teams
Highest Education: Graduation/Equivalent Course
Certification: AZ-104: Microsoft Azure Administrator; Microsoft Certified: Azure Fundamentals (Exam AZ-900)
Working Language: English

Job Description

Job Summary: We are seeking a highly experienced and forward-thinking Senior Microsoft Automation Engineer to lead the design, development, and implementation of automation solutions across Microsoft 365 and Azure environments. This role requires deep technical expertise in PowerShell, KQL, Terraform, Azure Functions, and Microsoft-native automation tools such as Power Automate, Azure Automation, and Logic Apps. The ideal candidate will also possess basic knowledge of agentic AI systems, enabling them to explore intelligent automation strategies that enhance operational efficiency and decision-making.

Key Responsibilities:

Automation Strategy & Development
- Architect and implement scalable automation solutions across Microsoft 365 (Entra ID, Exchange Online, SharePoint Online, Teams) and Azure services.
- Develop and maintain advanced PowerShell scripts for administrative tasks, provisioning, and lifecycle automation.
- Design and deploy Infrastructure as Code (IaC) using Terraform for consistent Azure resource management.
- Build and manage workflows using Power Automate, Azure Logic Apps, Azure Automation Runbooks, and Azure Functions for event-driven automation.
- Explore and prototype intelligent automation using agentic AI principles, such as autonomous task execution and decision-making agents.

Monitoring, Reporting & Optimization
- Use Kusto Query Language (KQL) to create queries and dashboards in Azure Monitor, Log Analytics, and Microsoft Sentinel.
- Implement automated monitoring and alerting systems to ensure service health, compliance, and performance.
- Analyze cloud usage and implement automation strategies for cost savings, resource optimization, and governance enforcement.

Employee Lifecycle Automation
- Automate onboarding and offboarding processes including account creation, license assignment, mailbox setup, and access provisioning.
- Integrate with Entra ID for identity lifecycle management and ensure secure transitions.
- Maintain audit trails and compliance documentation for all lifecycle automation processes.

Cloud Migrations & Assessments
- Lead or support cloud migration projects from on-premises to Microsoft 365 and Azure.
- Conduct cloud readiness assessments, identify automation opportunities, and develop migration strategies.
- Collaborate with stakeholders to ensure seamless transitions and high adoption rates.

Collaboration & Governance
- Partner with IT, HR, security, and business teams to gather requirements and deliver automation solutions.
- Establish governance frameworks for automation workflows, including version control, documentation, and change management.
- Ensure compliance with data protection, access control, and audit requirements.

Required Skills & Qualifications:

Technical Expertise
- Deep experience with Microsoft 365 services: Entra ID, Exchange Online, SharePoint Online, Microsoft Teams.
- Advanced proficiency in PowerShell scripting.
- Hands-on experience with Power Automate, Azure Automation, Logic Apps, and Azure Functions.
- Strong command of KQL for telemetry and log analysis.
- Proficiency in Terraform for infrastructure provisioning.
- Familiarity with Microsoft Graph API, REST APIs, and service integrations.
- Basic understanding of agentic AI concepts, such as autonomous agents, task planning, and intelligent orchestration.

Project Experience
- Proven experience in automating employee lifecycle, service provisioning, and compliance workflows.
- Hands-on involvement in cloud migrations, brownfield modernization, and greenfield deployments.
- Experience in cost management, governance, and security best practices in Azure.

Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder engagement skills.
- Ability to lead cross-functional teams and mentor junior engineers.
- Commitment to continuous learning and staying updated with Microsoft and AI technologies.

Preferred Qualifications:
- Microsoft Certified: Azure Administrator Associate or equivalent
- Microsoft Certified: Power Platform Developer Associate
- Experience with hybrid environments and third-party automation tools
- Familiarity with DevOps practices and CI/CD pipelines
- Exposure to AI/ML tools and platforms (e.g., Azure AI, OpenAI APIs)
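A fragment of the employee-lifecycle automation this role describes can be sketched in Python. Everything here is a hypothetical illustration: the UPN format, the `ROLE_LICENSES` map, and the SKU names are assumptions, not a Microsoft API (real automation would call Microsoft Graph or Entra ID cmdlets).

```python
# Hypothetical onboarding-automation logic: derive a user principal name
# and a default license set from an HR record. The role-to-license map
# and SKU names are illustrative, not real Microsoft identifiers.
import re
import unicodedata

ROLE_LICENSES = {  # hypothetical role -> Microsoft 365 SKU list
    "engineer": ["SPE_E5"],
    "sales": ["SPE_E3", "POWER_BI_PRO"],
}

def make_upn(first: str, last: str, domain: str = "contoso.com") -> str:
    """Build a normalized user principal name, e.g. 'jane.doe@contoso.com'."""
    def clean(s: str) -> str:
        # Fold accents to ASCII, then keep letters only.
        s = unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
        return re.sub(r"[^a-z]", "", s.lower())
    return f"{clean(first)}.{clean(last)}@{domain}"

def licenses_for(role: str) -> list[str]:
    """Default license SKUs for a role, with a base SKU as fallback."""
    return ROLE_LICENSES.get(role, ["SPE_E3"])

print(make_upn("Renée", "O'Brien"))  # renee.obrien@contoso.com
print(licenses_for("engineer"))      # ['SPE_E5']
```

Keeping name normalization and license mapping as pure functions like this makes the lifecycle workflow easy to unit-test before it ever touches a directory.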

Posted 6 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Principal Engineer at Walmart's Enterprise Business Services, you will play a pivotal role in shaping the engineering direction, driving architectural decisions, and ensuring the delivery of scalable, secure, and high-performing solutions across the platform. Your responsibilities will include leading the design and development of full stack applications, architecting complex cloud-native systems on Google Cloud Platform (GCP), defining best practices, and guiding engineering excellence. You will have the opportunity to work on crafting frontend experiences, building robust backend APIs, designing cloud infrastructure, and influencing the technical vision of the organization. Collaboration with product, design, and data teams to translate business requirements into scalable tech solutions will be a key aspect of your role. Additionally, you will champion CI/CD pipelines, Infrastructure as Code (IaC), and drive code quality through rigorous design reviews and automated testing. To be successful in this role, you are expected to bring 10+ years of experience in full stack development, with at least 2+ years in a technical leadership or principal engineering role. Proficiency in JavaScript/TypeScript, Python, or Go, along with expertise in modern frontend frameworks like React, is essential. Strong experience in cloud-native systems on GCP, microservices architecture, Docker, Kubernetes, and event-driven systems is required. Your role will also involve managing production-grade cloud systems, working with SQL and NoSQL databases, and staying ahead of industry trends by evaluating new tools and frameworks. Exceptional communication, leadership, and collaboration skills are crucial, along with a GCP Professional Certification and experience with serverless platforms and observability tools. Joining Walmart Global Tech means being part of a team that makes a significant impact on millions of people's lives through innovative technology solutions. 
You will have the opportunity to work in a flexible, hybrid environment that promotes collaboration and personal development. In addition to a competitive compensation package, Walmart offers various benefits and a culture that values diversity, inclusion, and belonging for all associates. As an Equal Opportunity Employer, Walmart fosters a workplace where unique styles, experiences, and identities are respected and valued, creating a welcoming environment for all.

Posted 6 days ago

Apply

0.0 - 4.0 years

0 Lacs

Delhi

On-site

As a DevOps Intern at LiaPlus AI, you will play a crucial role in building, automating, and securing our AI-driven infrastructure. You will work closely with our engineering team to optimize cloud operations, enhance security and compliance, and streamline deployments using DevOps and MLOps best practices.

Your primary responsibilities will include:
- Infrastructure management: deploying and managing cloud resources on Azure as the primary platform.
- CI/CD: setting up robust pipelines for seamless deployments.
- Security & compliance: ensuring systems align with ISO, GDPR, and SOC 2 requirements.
- Observability: setting up monitoring dashboards and logging mechanisms.
- Databases: managing and optimizing PostgreSQL, MongoDB, and Redis for performance.
- Automation & scripting: writing automation scripts using Terraform, Ansible, and Bash.
- Network & API gateway management: managing API gateways such as Kong and Istio.
- Disaster recovery & HA: implementing failover strategies to ensure system reliability.
- AI model deployment & MLOps: deploying and monitoring AI models using Kubernetes and Docker.

Requirements:
- Currently pursuing or recently completed a Bachelor's/Master's in Computer Science, IT, or a related field.
- Hands-on experience with Azure, CI/CD tools, and scripting languages (Python, Bash).
- Understanding of security best practices and cloud compliance (ISO, GDPR, SOC 2).
- Knowledge of database optimization techniques (PostgreSQL, MongoDB, Redis).
- Familiarity with containerization, orchestration, and AI model deployment (Docker, Kubernetes).
- Passion for automation, DevOps, and cloud infrastructure.

Benefits:
- Competitive compensation package.
- Opportunity to work with cutting-edge technologies in AI-driven infrastructure.
- Hands-on experience in a fast-paced and collaborative environment.
- Potential for growth and learning opportunities in the field of DevOps and MLOps.
Join us at LiaPlus AI to be part of a dynamic team that is reshaping the future of AI infrastructure through innovative DevOps practices.
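The deployment-monitoring side of this kind of role can be illustrated with a minimal, testable sketch of a health-check loop with exponential backoff. The `probe` callable and the delay schedule are illustrative assumptions; a real pipeline would poll an actual endpoint and sleep between attempts.

```python
# Minimal health-check loop with exponential backoff, as used after a
# deployment in a CI/CD pipeline. The probe is injected so the logic is
# testable; delays are computed rather than slept in this sketch.
from typing import Callable

def wait_until_healthy(probe: Callable[[], bool],
                       max_attempts: int = 5,
                       base_delay: float = 1.0) -> tuple[bool, list[float]]:
    """Call `probe` until it returns True; return (healthy, planned delays)."""
    delays: list[float] = []
    for attempt in range(max_attempts):
        if probe():
            return True, delays
        delays.append(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return False, delays

# Simulated probe: fails twice, then the service reports healthy.
responses = iter([False, False, True])
ok, delays = wait_until_healthy(lambda: next(responses))
print(ok, delays)  # True [1.0, 2.0]
```

Injecting the probe keeps the retry policy separate from the transport (HTTP, kubectl, etc.), which is what makes it reusable across deployment targets.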

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?

To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing philosophies, tools, and processes to enable the rapid delivery of evolving products.

In This Role You Will
- Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances in AWS.
- Secure and monitor hosted production SaaS environments provided by third-party partners.
- Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations.
- Lead the design and review of developer work on DevOps tools and practices.
- Ensure high availability and reliability of infrastructure components through monitoring and performance tuning.
- Implement and maintain security measures to protect infrastructure from threats.
- Collaborate with cross-functional teams to design and deploy scalable solutions.
- Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash.
- Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies.
- Provide support for infrastructure-related issues and conduct root cause analysis.
- Develop and maintain documentation for infrastructure configurations and procedures.
- Administer databases, handle data backups, monitor databases, and manage data rotation.
- Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.

Experience & Qualifications
- Bachelor's or Master's degree in Information Science, Computer Science, Business, or equivalent work experience.
- 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR.
- Experience in setting up new VPCs and integrating them with existing networks is highly desirable.
- Experience in maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies.
- Experience with Active Directory and LDAP setup, maintenance, and policies.
- Workday certification is preferred but not required. Exposure to Workday Integrations and Configuration is preferred.
- Strong knowledge of networking concepts and technologies.
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written communication skills.
- Understanding of Agile project methodologies, including Scrum and Kanban, is required.
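The VPC subnet planning this role involves can be sketched with Python's standard `ipaddress` module. The /16 CIDR and availability-zone names below are illustrative, not taken from any real environment.

```python
# Carve a /16 VPC CIDR into equal /20 subnets, one per availability
# zone. Purely a planning sketch; the CIDR and AZ names are examples.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

# subnets(new_prefix=20) yields /20 blocks in address order; zip pairs
# one block with each AZ and ignores the leftover blocks.
plan = {az: str(net) for az, net in zip(azs, vpc.subnets(new_prefix=20))}
for az, cidr in plan.items():
    print(az, cidr)
# us-east-1a 10.0.0.0/20
# us-east-1b 10.0.16.0/20
# us-east-1c 10.0.32.0/20
```

Computing the layout in code (or in Terraform's `cidrsubnet`) rather than by hand avoids the overlapping-range mistakes that surface when new VPCs are later peered with existing networks.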
Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 6 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

The goal of Digital Workplace (DW) is to make a significant and positive impact on the way employees work, collaborate, and share information at Airbus, paving the way for the digital transformation of the company. This involves deploying and supporting new tools and technologies to enhance cross-functional collaboration, productivity, company integration, transparency, and information exchange. A key factor in achieving success for Digital Workplace is a well-defined strategy, effective transformation plan, and operational controls. At Devices & Services PSL level (DWD), the focus is on providing modern endpoint services and access to Airbus business applications for employees regardless of their location. To lead the development, standardization, and lifecycle management of global Linux/Unix products & services, we are seeking a skilled and strategic Technical Lead for Linux & Unix Administration. This role plays a critical part in aligning technical capabilities with business requirements, driving efficiency, and ensuring service excellence for our user base. The Technical Lead will serve as the interface between infrastructure teams, operations, and end-user stakeholders, owning the technical product vision, roadmap, and continuous improvement of Unix/Linux-based services. 
Qualification & Experience:
- Bachelor's/Master's degree in Computer Science, Computer Engineering, Information Technology, or a relevant field
- 7-10 years of experience in Linux and Unix

Primary Responsibilities:
- Own the technical product lifecycle for Linux/Unix administration services
- Enhance security & monitoring of existing and future Linux/Unix systems
- Support decommissioning of outdated systems and implement modern solutions
- Strengthen the integration of solutions across applications, systems, and platforms
- Define and maintain the technical product vision, strategy, and roadmap
- Collaborate with SMEs, security, and regional teams to ensure services meet standards
- Manage vendor relationships, licensing, and budget for Linux/Unix tools
- Champion automation, self-service, and modernization within the Linux/Unix technical product space

IT Service Management & Strategic Responsibilities:
- Good knowledge of ITIL and experience with ITSM frameworks
- Strong understanding of Linux/Unix systems from a product or operational standpoint
- Experience in working in a large enterprise or global environment
- Demonstrated ability to manage complex roadmaps and lead cross-functional teams
- Familiarity with ITIL, Agile/Scrum, and product lifecycle management tools
- Strong communication and stakeholder engagement skills
- Capable of managing Partners, Suppliers, and Subcontractors

Good to Have:
- Hands-on experience with automation tools
- Exposure to security and compliance frameworks
- Experience with cloud-based Linux workloads
- Previous work in a hybrid infrastructure environment
- Comprehensive knowledge to manage macOS environments effectively

Are you ready to take on this exciting challenge and be part of our innovative team at Airbus?

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Udaipur, Rajasthan

On-site

We are looking for a skilled DevOps Engineer to join our technology team and play a key role in automating and enhancing development operations. Your responsibilities will include designing and implementing CI/CD pipelines to facilitate quick development and deployment, automating infrastructure provisioning using tools such as Terraform, CloudFormation, or Ansible, monitoring system performance, managing cloud infrastructure on AWS/Azure/GCP, and ensuring system reliability and scalability by implementing containerization and orchestration using Docker and Kubernetes.

You will collaborate with various teams including developers, QA, and security to streamline software delivery, maintain configuration management and version control using Git, and ensure system security through monitoring, patch management, and vulnerability scans. Additionally, you will be responsible for assisting with system backups, disaster recovery plans, and rollback strategies.

The ideal candidate should have a strong background in CI/CD tools like Jenkins, GitLab CI, or CircleCI, proficiency in cloud services (AWS, Azure, or GCP), experience in Linux system administration and scripting (Bash, Python), and hands-on experience with Docker and Kubernetes in production environments. Familiarity with monitoring/logging tools such as Prometheus, Grafana, ELK, and CloudWatch, as well as good knowledge of networking, DNS, load balancers, and firewalls, is essential.

Preferred qualifications for this role include a Bachelor's degree in Computer Science, IT, or a related field; DevOps certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate); experience in MLOps, serverless architectures, or microservices; and knowledge of security practices in cloud and DevOps environments.

If you are passionate about DevOps and have the required skills and experience, we would like to hear from you.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Ghaziabad, Uttar Pradesh

On-site

As a Pipeline Engineer, you will be responsible for designing and implementing scalable CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions. Your role will involve setting up infrastructure automation using IaC tools like Terraform and Ansible to provision and manage resources efficiently. You will also focus on integrating automated builds, testing, and static code analysis to maintain code quality and streamline the development process. Managing different environments (dev/staging/production) and ensuring the security of secrets will be part of your daily tasks.

Utilizing containerization technologies like Docker and Kubernetes for deployments will be essential. Additionally, implementing monitoring and logging tools such as Prometheus, Grafana, and ELK for tracking performance and debugging issues will be crucial for the success of the projects.

Security will be a top priority as you enforce best practices and compliance checks in the pipeline to safeguard the infrastructure and data. Collaborating with teams, documenting processes, and providing training to ensure smooth operations and effective communication will be key to your success. Continuous optimization of pipeline performance and automating incident management processes to ensure reliability and quick resolution of issues will be part of your responsibilities. Your dedication to enhancing pipeline efficiency and reliability will play a significant role in the overall success of the projects.
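One concrete piece of the secrets handling this role describes, keeping credentials out of pipeline logs, might look like the minimal redaction sketch below. The patterns are illustrative examples, not an exhaustive secret-detection scheme.

```python
# Mask values that look like secrets before a pipeline log line is
# stored or echoed. Patterns here are deliberately simple examples.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|token|secret)=\S+"),  # key=value credentials
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key id shape
]

def redact(line: str) -> str:
    """Replace anything matching a known secret pattern with ***."""
    for pat in SECRET_PATTERNS:
        line = pat.sub("***", line)
    return line

print(redact("deploy --token=abc123 --region=us-east-1"))
# deploy --*** --region=us-east-1
```

In practice this kind of filter sits in the log shipper or CI runner, alongside the platform's own masking of registered secret variables, as defense in depth rather than a replacement for a proper secrets manager.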

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Vadodara, Gujarat

On-site

We are looking for a highly skilled Cloud Network and Security Engineer (L4) with expertise in cloud infrastructure, network security, and hands-on experience with Palo Alto firewalls. In this role, you will be responsible for designing, implementing, and maintaining secure cloud and hybrid network environments to align with business objectives and ensure compliance with security standards.

As a Cloud Network and Security Engineer, your key responsibilities will include designing, implementing, and managing cloud network security architectures in AWS, Azure, or GCP. You will configure, maintain, and troubleshoot Palo Alto firewalls and security appliances, develop and enforce security policies, conduct security assessments and risk analysis, manage VPNs and routing, and integrate firewall solutions with monitoring tools. Collaboration with DevOps and Cloud teams to ensure secure deployment pipelines and implementing Zero Trust Network Access strategies are also part of your role. Additionally, you will create and maintain network documentation and participate in incident response and post-mortem reporting.

To qualify for this role, you should have at least 6 years of experience in network and security engineering roles, strong hands-on experience with Palo Alto Networks, and familiarity with cloud-native networking and security services. A deep understanding of TCP/IP, routing, switching, DNS, DHCP, and cloud security frameworks is required. Knowledge of automation/scripting (Python, Terraform, Ansible) and professional certifications like PCNSE, CCNP Security, or AWS/Azure Security are preferred. The ideal candidate will possess strong analytical and problem-solving skills, excellent communication and documentation abilities, and the ability to work independently and collaboratively in a global team environment. Preferred certifications include PCNSE, AWS Certified Security - Specialty, Azure Security Engineer Associate, CISSP, or CCSP.
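The firewall-policy work above can be illustrated with a simplified first-match model that flags "shadowed" rules, rules that can never fire because an earlier rule already covers their source network. This is a generic sketch, not Palo Alto's actual rulebase semantics (which also match zones, applications, and services).

```python
# Detect rules shadowed by an earlier rule in a first-match policy.
# Simplified model: a rule is just (source CIDR, action).
import ipaddress

Rule = tuple[str, str]  # (source CIDR, action)

def shadowed(rules: list[Rule]) -> list[int]:
    """Return indexes of rules whose source is fully covered earlier."""
    out = []
    for i, (src, _) in enumerate(rules):
        net = ipaddress.ip_network(src)
        for prev_src, _ in rules[:i]:
            if net.subnet_of(ipaddress.ip_network(prev_src)):
                out.append(i)
                break
    return out

rules = [("10.0.0.0/8", "allow"),
         ("10.1.0.0/16", "deny"),      # shadowed: inside 10.0.0.0/8
         ("192.168.1.0/24", "allow")]
print(shadowed(rules))  # [1]
```

Checks like this are worth running in CI on exported policies: a shadowed deny rule is a common way an intended block silently never takes effect.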
Wipro is looking for individuals who are inspired by reinvention and are willing to evolve constantly. Join a business that values purpose and empowers you to design your own reinvention. Realize your ambitions at Wipro, where applications from people with disabilities are explicitly welcome.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chandigarh

On-site

As a Telecom Developer specializing in NodeJS, Asterisk, and SIP, you will be responsible for deploying telecom applications on private and public cloud platforms like Red Hat OpenStack, OpenShift, AWS, Azure, and GCP. Your role will involve the installation, acceptance, and performance management of 5G and O-RAN applications. Additionally, you will work on code pipeline and DevOps tools such as Jenkins, Git, GitHub, Bitbucket, Terraform, Azure DevOps, Kubernetes, and AWS DevOps. Your main responsibilities will include developing and implementing telecom solutions using Asterisk and SIP protocol to ensure high performance and scalability. You will utilize NodeJS for backend development, building robust services and APIs that integrate with telecom systems for seamless communication. Collaborating closely with architects and developers, you will translate requirements into technical solutions, participate in design and code reviews, and focus on optimizing and scaling applications to handle growing traffic and user demands. Integration with CRM systems like Salesforce, Zoho, and Leads Square will also be a key aspect of your role to ensure seamless data flow and synchronization. To excel in this role, you should have proven experience in developing telecom applications, a strong proficiency in NodeJS for backend development, and a deep understanding of telecom protocols, especially SIP, and Asterisk PBX. Familiarity with CRM integration, problem-solving skills, and the ability to work effectively in a collaborative environment are essential. A degree in Computer Science, Telecommunications, or a related field is preferred, along with at least 5 years of relevant work experience. In addition to technical skills, you should possess behavioral competencies such as attention to detail, customer engagement, proactive self-management, innovation, creativity, and adaptability. 
This full-time, permanent position requires in-person work and seeks candidates who can demonstrate hands-on experience with Asterisk, NodeJS, and SIP. A Bachelor's degree is preferred, and additional skills in VoIP, WebRTC, and cloud-based telephony solutions would be advantageous.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Software Engineer (Cloud Development) at our company, you will have the opportunity to be a key part of the Cloud development group in Bangalore. We are seeking passionate individuals who are enthusiastic problem solvers and experienced Cloud engineers to help us build and maintain the Synamedia GO product and Infinite suite of solutions. Your role will involve designing, developing, and deploying solutions, drawing on your deep-rooted programming and system experience, for the next generation of products in the domain of video streaming.

Your key responsibilities will include conducting technology evaluations, developing proofs of concept, designing Cloud distributed microservices features, writing code, conducting code reviews, continuous integration, continuous deployment, and automated testing. You will work as part of a development team responsible for building and managing microservices for the platform. Additionally, you will play a critical role in the design and development of services, overseeing the work of junior team members, collaborating in a multi-site team environment, and ensuring the success of your team by delivering high-quality results in a timely manner.

To be successful in this role, you should have a strong technical background with experience in cloud design, development, deployment, and high-scale systems. You should be proficient in loosely coupled design, microservices development, message queues, and containerized application deployment. Hands-on experience with technologies such as NodeJS, Java, GoLang, and cloud technologies like AWS, EKS, and OpenStack is required. You should also have experience in DevOps, CI/CD pipelines, monitoring tools, and database technologies. We are looking for highly motivated individuals who are self-starters, independent, have excellent analytical and logical skills, and possess strong communication abilities.
You should have a Test-Driven Development (TDD) mindset, be open to supporting incidents on Production deployments, and be willing to work outside of regular business hours when necessary. At our company, we value diversity, inclusivity, and equal opportunity. We offer flexible working arrangements, skill enhancement and growth opportunities, health and wellbeing programs, and the chance to work collaboratively with a global team. We are committed to fostering a people-friendly environment, where all our colleagues can thrive and succeed. If you are someone who is eager to learn, ask challenging questions, and contribute to the transformation of the future of video, we welcome you to join our team. We offer a culture of belonging, where innovation is encouraged, and we work together to achieve success. If you are interested in this role or have any questions, please reach out to our recruitment team for assistance.

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote

Lead the team building Zeller's cutting-edge mobile applications

Join Zeller's growing engineering team of 150 as a React Native focussed Tech Lead combining hands-on technical contribution with people leadership. You'll manage a remote team of three engineers while taking ownership of the systems behind Zeller's Mobile App and Point of Sale software. Leading your team from the front and staying deeply technical as a player-coach, this role offers the opportunity for high-impact technical leadership with formal people management responsibilities.

About The Team

Zeller's Mobile team is responsible for developing and scaling the customer-facing mobile applications that empower Australian businesses (soon international). Your new team works with product managers, designers, backend engineers, and others to build and enhance the Zeller Mobile App and Zeller Point of Sale products, encompassing a wide range of functionality including mobile banking and payment solutions.

What You'll Do: Technical Leadership & System Ownership
- Drive architectural solutions collaboratively with your team and engineering leadership to build and scale Zeller's mobile products
- Write production code regularly as a core contributor while maintaining oversight of your team's technical deliverables
- Own operational excellence for your team's systems including monitoring, incident response, and reliability improvements
- Champion modern development practices including active use of LLMs within your team and the adoption of technical practices at Zeller
- Lead a team of 3 software engineers, identifying skill gaps, managing personal development plans, and fostering a positive culture of learning, ownership, and psychological safety
- Partner closely with product managers as joint owners of your product success, contributing technical and delivery insight to shape what gets built

What We're Looking For
- You have owned solution design for complex features within mobile apps before and have worked with applications used at scale, or which were mission critical to their customer
- You have worked in product-driven companies and have a track record of driving pace and delivery
- You are experienced in mobile system design and demonstrate well-reasoned, rational decision making with the ability to navigate between detail and broader context
- You have exceptional written and verbal communication and can move from a conversation with non-technical stakeholders to reframing a project into detailed design requirements for your engineering team
- You have a strong self-learning drive and are actively using and adapting your workflows to incorporate generative AI tools
- Prior experience with React Native is highly valued, and experience with payment or mobile banking solutions is a plus.

Technical Background

You should be comfortable writing production React Native code, but experience with our full tech stack is not an expectation. Some tools we use:
- React Native for cross-platform mobile development, bridging to native code to deliver features like Apple/Google wallet integration
- A modern GraphQL-based API which drives an optimised frontend datastore capability
- Extensive application monitoring tools such as Sentry, Datadog, Segment
- A backend built on a serverless, event-driven system architecture running on AWS
- A rigorous approach to continuous integration and system correctness that includes infrastructure as code (CDK, Terraform), and high coverage of unit, integration, and system tests built with GitHub Actions

Posted 6 days ago

Apply

12.0 - 16.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Principal Site Reliability Engineer, you will be responsible for leading all infrastructure aspects of a new cloud-native, microservice-based security platform. This platform is fully multi-tenant, operates on Kubernetes, and utilizes the latest cloud-native CNCF technologies such as Istio, Envoy, NATS, Fluent, Jaeger, and Prometheus. Your role will involve technically leading an SRE team to ensure high-quality SLA for a global solution running in multiple regions. Your responsibilities will include building tools and frameworks to enhance developer efficiency on the platform and abstracting infrastructure complexities. Automation and utilities will be developed to streamline service operation and monitoring. The platform handles large amounts of machine-generated data daily and is designed to manage terabytes of data from numerous customers. You will actively participate in platform design discussions with development teams, providing infrastructure insights and managing technology and business tradeoffs. Collaboration with global engineering teams will be crucial as you contribute to shaping the future of Cybersecurity. At GlobalLogic, we prioritize a culture of caring, where people come first. You will experience an inclusive environment promoting acceptance, belonging, and meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development are essential at GlobalLogic. You will have access to numerous opportunities to expand your skills, advance your career, and grow personally and professionally. Our commitment to your growth includes programs, training curricula, and hands-on experiences. GlobalLogic is recognized for engineering impactful solutions worldwide. Joining our team means working on projects that make a difference, stimulating your curiosity and problem-solving skills. You will engage in cutting-edge solutions that shape the world today. 
We value balance and flexibility, offering various career paths, roles, and work arrangements to help you achieve a harmonious work-life balance. At GlobalLogic, integrity is key: we uphold a high-trust environment focused on ethics and reliability, and you can trust us to provide a safe, honest, and ethical workplace dedicated to both employees and clients.

GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to top global companies. With a history of digital innovation since 2000, we collaborate with clients to create innovative digital products and experiences, driving business transformation and industry redefinition through intelligent solutions.
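The listing above asks the SRE lead to ensure high-quality SLAs across regions. As a hedged illustration of the kind of utility such a team might keep on hand (the function names and the 99.9% target below are assumptions for the example, not details from the posting), an SLA error-budget calculation can be sketched as:

```python
# Illustrative sketch: converting an SLA target into a downtime budget
# and tracking how much of that budget has been spent in a window.
# Function names and thresholds are hypothetical, not from the listing.

def allowed_downtime_seconds(sla_percent: float, window_days: int = 30) -> float:
    """Return the downtime budget, in seconds, for an SLA over a window."""
    if not 0 < sla_percent <= 100:
        raise ValueError("SLA must be in (0, 100]")
    window_seconds = window_days * 24 * 60 * 60
    return window_seconds * (1 - sla_percent / 100)

def budget_remaining(sla_percent: float, observed_downtime_s: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLA breached)."""
    budget = allowed_downtime_seconds(sla_percent, window_days)
    return (budget - observed_downtime_s) / budget

# A 99.9% SLA over a 30-day window allows roughly 43 minutes of downtime.
print(allowed_downtime_seconds(99.9))
```

In practice the observed downtime would come from monitoring data (e.g. a Prometheus `up` series), but the budget arithmetic itself is independent of the data source.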

Posted 6 days ago

Apply

2.0 - 8.0 years

0 Lacs

Karnataka

On-site

You are a Technical Project Manager at IAI Solution Pvt Ltd, a company specializing in applied AI solutions. You will lead software projects, translate business goals into technical roadmaps, and coordinate delivery across frontend and backend teams. Your responsibilities include overseeing deployments on cloud platforms such as Azure, AWS, or GCP, managing CI/CD pipelines, and owning project timelines and resource planning.

To excel in this role, you must have 8+ years of software engineering experience, including 2+ years as a Technical Project Manager or Technical Lead, with proficiency in JavaScript, Java, Python, and Spring Boot. Experience in cloud and solution architecture, managing technical teams, and Agile project management using tools like Jira is essential. Startup experience is preferred, along with strong communication skills and familiarity with DevOps practices.

Your technical stack will include technologies such as React.js, Next.js, Python, FastAPI, Django, Spring Boot, Azure, AWS, Docker, Kubernetes, Terraform, PostgreSQL, MongoDB, Redis, Kafka, RabbitMQ, Prometheus, Grafana, and the ELK Stack. Good-to-have skills include exposure to AI/ML projects, microservices, performance tuning, and certifications such as PMP, CSM, CSPO, SAP Activate, PRINCE2, AgilePM, and ITIL.

As part of the team, you will enjoy competitive compensation, performance incentives, and the opportunity to work on high-impact software and AI initiatives in a product-driven, fast-paced environment, along with a flexible work culture, learning support, and health benefits.

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies