
18017 Terraform Jobs - Page 45

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job ID: Pyt-ETP-Pun-1075
Location: Pune

Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market, and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients, leveraging Microsoft, Java, and open source with a focus on Mobility, Cloud, Data Engineering, and Intelligent Automation. Emtec's singular mission is to create "Clients for Life" – long-term relationships that deliver rapid, meaningful, and lasting business value.

At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you would have the opportunity to drive business value for clients while you innovate, continue to grow, and have fun while doing it. You would work with team members who are vibrant, smart, and passionate, and who bring that passion to all they do – whether it's learning, giving back to our communities, or going the extra mile for our clients.

Position Description
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about code quality, is passionate about providing the best solution to meet client needs, and can anticipate clients' future needs based on an understanding of the market – someone who has worked on Hadoop projects, including processing and data representation using various AWS services.

Must-Have Skills
- 3-4 years of overall experience
- Strong programming experience with Python
- Experience with unit testing, debugging, and performance tuning
- Experience with Docker, Kubernetes, and cloud platforms (AWS preferred)
- Experience with CI/CD pipelines and DevOps best practices
- Familiarity with workflow management tools like Airflow; experience with DBT is a plus
- Good to have: experience with infrastructure-as-code technologies such as Terraform and Ansible
- Good to have: experience in Snowflake modelling – roles, schemas, databases

Professional Skills
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure in all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Flexible and open to bringing creative solutions to address issues

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote

Job description
#hiring #SeniorBackendDeveloper
Min Experience: 10+ years
Location: Remote

We are seeking a highly experienced Technical Lead with over 10 years of experience, including at least 2 years in a leadership role, to guide and mentor a dynamic engineering team. This role is critical to designing, developing, and optimizing high-performance, scalable, and reliable backend systems. The ideal candidate will have deep expertise in Python (Flask), AWS (Lambda, Redshift, Glue, S3), microservices, and database optimization (SQL, RDBMS). We operate in a high-performance environment, comparable to leading product companies, where uptime, defect reduction, and data clarity are paramount. As a Technical Lead, you will ensure engineering excellence, maintain high-quality standards, and drive innovation in software architecture and development.

Key Responsibilities:
- Own backend architecture and lead the development of scalable, efficient web applications and microservices.
- Ensure production-grade AWS deployment and maintenance with high availability, cost optimization, and security best practices.
- Design and optimize databases (RDBMS, SQL) for performance, scalability, and reliability.
- Lead API and microservices development, ensuring seamless integration, scalability, and maintainability.
- Implement high-performance solutions, emphasizing low latency, uptime, and data accuracy.
- Mentor and guide developers, fostering a culture of collaboration, disciplined coding, and technical excellence.
- Conduct technical reviews, enforce best coding practices, and ensure adherence to security and compliance standards.
- Drive automation and CI/CD pipelines to enhance deployment efficiency and reduce operational overhead.
- Communicate technical concepts effectively to technical and non-technical stakeholders.
- Provide accurate work estimations and align development efforts with broader business objectives.

Key Skills:
- Programming: strong expertise in Python (Flask) and Celery.
- AWS: core experience with Lambda, Redshift, Glue, S3, and production-level deployment strategies.
- Microservices & API development: deep understanding of architecture, service discovery, API gateway design, observability, and distributed-systems best practices.
- Database optimization: expertise in SQL, PostgreSQL, Amazon Aurora RDS, and performance tuning.
- CI/CD & infrastructure: experience with GitHub Actions, GitLab CI/CD, Docker, Kubernetes, Terraform, and CloudFormation.
- Monitoring & logging: familiarity with AWS CloudWatch, the ELK Stack, and Prometheus.
- Security & compliance: knowledge of backend security best practices and performance optimization.
- Collaboration & communication: ability to articulate complex technical concepts to international stakeholders and work seamlessly in Agile/Scrum environments.

📩 Apply now or refer someone great. Please share your updated resume to hr.team@kpitechservices.com
#PythonJob #jobs #BackendDeveloper

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What Gramener offers you
Gramener offers an inviting workplace, talented colleagues from diverse backgrounds, a clear career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Cloud Lead – Analytics & Data Products
We're looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Roles and Responsibilities
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Skills and Qualifications
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information, and training, we'll help you advance your skills and career. Here, you'll be supported in progressing – whatever your ambitions.

Software Engineer – MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role, ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.

Key Responsibilities
- Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment
- Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring
- Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage)
- Assist with containerizing ML models using Docker, and deploying them using Kubernetes or cloud-native orchestrators
- Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation
- Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins
- Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations)
- Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation
- Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks
- Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices

Required Qualifications
- Bachelor's/Master's in Computer Science, Engineering, or a related discipline
- 7 years in DevOps, with 2+ years in MLOps
- Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git
- Proficient in Python and familiar with Bash scripting
- Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
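The posting names pytest and Great Expectations for data validation. As a loose illustration of the idea (not the posting's actual stack or API), a column-level "expectation" can be sketched in plain Python:

```python
# Toy analogue of a column "expectation" check, loosely inspired by
# Great Expectations-style validation; the function name, the sensor
# rows, and the temperature range are all illustrative.
def expect_column_values_between(rows, column, low, high):
    """Return a result dict listing rows whose `column` falls outside [low, high]."""
    failures = [r for r in rows if not (low <= r[column] <= high)]
    return {
        "success": not failures,
        "unexpected_count": len(failures),
        "unexpected_values": [r[column] for r in failures],
    }

readings = [
    {"sensor": "a", "temp_c": 21.5},
    {"sensor": "b", "temp_c": 19.0},
    {"sensor": "c", "temp_c": 87.0},  # outside the expected range
]
result = expect_column_values_between(readings, "temp_c", -10.0, 60.0)
```

In a real pipeline such checks would run as a pytest suite or a Great Expectations checkpoint, failing the build when `success` is false.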

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities:
- Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
- Build scalable and automated MLOps pipelines for model training, validation, and deployment using AWS SageMaker and related services.
- Implement and manage Infrastructure as Code (IaC) using Terraform for AWS provisioning and maintenance.
- Collaborate with ML, Data Science, and DevOps teams to ensure reliable and efficient model deployment workflows.
- Optimize data storage and retrieval strategies for both structured and unstructured large-scale datasets.
- Integrate and transform data from multiple sources into data lakes and data warehouses.
- Monitor, troubleshoot, and improve performance of cloud-native data systems in a fast-paced production setup.
- Ensure compliance with data governance, privacy, and security standards across all data operations.
- Document data engineering workflows and architectural decisions for transparency and maintainability.

Requirements
- 5+ years of experience as a Data Engineer or in a similar role
- Proven experience building data pipelines and streaming applications using Apache Kafka and Apache Flink.
- Strong ETL development skills, with a deep understanding of data modeling and data architecture in large-scale environments.
- Hands-on experience with AWS services, including SageMaker, S3, Glue, Lambda, and CloudFormation or Terraform.
- Proficiency in Python and SQL; knowledge of Java is a plus, especially for streaming use cases.
- Strong grasp of MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines.
- Deep knowledge of IaC tools, particularly Terraform, for automating cloud infrastructure.
- Excellent analytical and problem-solving abilities, especially with regard to data processing and deployment issues.
- Agile mindset with experience working in fast-paced, iterative development environments.
- Strong communication and team collaboration skills.
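Kafka and Flink themselves are external systems, but the per-record extract-transform pattern behind the pipelines this posting describes can be sketched with plain Python generators; this is an illustrative stand-in, not the posting's stack, and the event fields are invented:

```python
# Generator-based sketch of a streaming transform: parse raw events,
# drop malformed records, and enrich the rest. A small-scale stand-in
# for the per-record logic a Kafka/Flink job would run at scale.
import json

def parse(raw_events):
    for raw in raw_events:
        try:
            yield json.loads(raw)
        except json.JSONDecodeError:
            continue  # would go to a dead-letter topic in a real pipeline

def enrich(events, source):
    for event in events:
        yield {**event, "source": source}

raw = ['{"user": 1, "amount": 9.5}', 'not-json', '{"user": 2, "amount": 3.0}']
out = list(enrich(parse(raw), source="orders"))
```

Because generators are lazy, records flow through one at a time, which mirrors how a streaming operator processes an unbounded input rather than a batch.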

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Sia is a next-generation, global management consulting group. Founded in 1999, we were born digital. Today our strategy and management capabilities are augmented by data science, enhanced by creativity, and driven by responsibility. We're optimists for change, and we help clients initiate, navigate, and benefit from transformation. We believe optimism is a force multiplier, helping clients mitigate downside and maximize opportunity. With expertise across a broad range of sectors and services, our consultants serve clients worldwide. Our expertise delivers results. Our optimism transforms outcomes.

Heka.ai is the independent brand of Sia Partners dedicated to AI solutions. We host many AI-powered SaaS solutions that can be combined with consulting services or used independently, to provide our customers with solutions at scale.

Job Description
We are looking for a skilled Senior Software Engineer to play a key role in our front-end development using ReactJS. This role involves enhancing user interface components and implementing well-conceived designs in our AI-powered SaaS solutions. You will collaborate with backend teams and designers to ensure seamless application performance and a high-quality user experience.

Key Responsibilities
- Front-End Development: Develop and optimize sophisticated user interfaces using ReactJS. Ensure technical feasibility of UI/UX designs.
- Performance Optimization: Enhance client-side application performance by implementing state management solutions and optimizing component rendering.
- Cross-Browser Compatibility: Ensure that applications perform consistently across different browsers and platforms.
- Collaboration: Work closely with backend developers and web designers to meet technical and consumer needs.
- Code Integrity: Maintain and improve code quality by writing unit tests, automating, and performing code reviews.
- Infrastructure as Code (IaC): Use Terraform and Helm to manage cloud infrastructure, ensuring scalable and efficient deployment environments.
- Cloud Deployment & CI Management: Work with GCP/AWS/Azure to deploy and manage applications in the cloud. Oversee continuous integration processes, including test writing and artifact building.

Qualifications
- Education: Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field.
- Experience: 3-6 years of experience in frontend development, with significant expertise in ReactJS.
- Skills: Expertise in ReactJS, NextJS, and Node.js. Experience with REST and GraphQL APIs. Proficient in JavaScript, TypeScript, and HTML/CSS. Familiar with Git, CI/CD, and Figma. Strong knowledge of micro-frontends, accessibility standards, and APM tools. Familiar with newer ECMAScript specifications. Knowledge of isomorphic React is a plus.
- Infrastructure as Code (IaC) skills with Terraform and Helm for efficient cloud infrastructure management.
- Hands-on experience deploying and managing applications on GCP, AWS, or Azure.
- Ability to understand business requirements and translate them into technical requirements.

Additional Information
What We Offer:
- Opportunity to lead cutting-edge AI projects in a global consulting environment.
- Leadership development programs and training sessions at our global centers.
- A dynamic and collaborative team environment with diverse projects.

Sia is an equal opportunity employer. All aspects of employment, including hiring, promotion, remuneration, and discipline, are based solely on performance, competence, conduct, or business needs.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Join us as a Cloud & DevOps Engineer at Dedalus, one of the global leaders in healthcare technology, working from our Noida office in India to shape the future of digital health infrastructure.

What you'll achieve
As a Cloud & DevOps Engineer, you will play a key role in building and maintaining a scalable, secure, and resilient platform to support continuous integration, delivery, and operations of modern healthcare applications. Your work will directly contribute to enabling development teams to deliver better, faster, and safer solutions for patients and providers around the world.

You will:
- Design and maintain tooling for deployment, monitoring, and operations of containerized applications across hybrid cloud and on-premises infrastructure
- Implement and manage Kubernetes-based workloads, ensuring high availability, scalability, and security
- Develop new platform features using Go or Java, and maintain existing toolchains
- Automate infrastructure provisioning using IaC tools such as Terraform, Helm, or Ansible
- Collaborate with cross-functional teams to enhance platform usability and troubleshoot issues
- Participate in incident response and on-call rotation to ensure uptime and system resilience
- Create and maintain architecture and process documentation for shared team knowledge

Take the next step towards your dream career
At DH Healthcare, your work will empower clinicians and health professionals to deliver better care through reliable and modern technology. Join us and help shape the healthcare landscape by enabling the infrastructure that powers mission-critical healthcare systems.

Here's what you'll need to succeed:

Essential Requirements
- 5+ years of experience in DevOps, Cloud Engineering, or Platform Development roles
- Strong background in software engineering and/or system integrations
- Proficiency in Go, Java, or similar languages
- Hands-on experience with containerization and orchestration (Docker, Kubernetes)
- Experience with CI/CD pipelines and DevOps methodologies
- Practical knowledge of IaC tools like Terraform, Helm, and Ansible
- Exposure to Linux, Windows, and cloud-native environments
- Strong written and verbal communication skills in English
- Bachelor's degree in Computer Science, Information Systems, or equivalent

Desirable Requirements
- Experience supporting large-scale or enterprise healthcare applications
- Familiarity with Agile/Scrum practices and DevSecOps tools
- Exposure to hybrid infrastructure and cloud operations
- Enthusiasm for automation, security, and performance optimization
- Passion for continuous improvement and collaboration

We are DH Healthcare – come join us
At DH Healthcare, we are committed to transforming care delivery through smart, scalable, and resilient platforms. We value innovation, collaboration, and a deep sense of purpose in everything we do. You will join a global team dedicated to improving patient outcomes and supporting health professionals with technology that truly matters. With a team of 7,600+ professionals across 40+ countries, we believe that every role – including yours – helps deliver better healthcare to millions across the globe. If you're ready to be part of something meaningful, apply now.

Application closing date: 18th of August 2025

Read more about our Diversity & Inclusion Commitment

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job description:
We are seeking a highly experienced and strategic Senior Manager, Cloud Engineering to lead our Noida SRE and Cloud Engineering teams and drive the evolution of our infrastructure, CI/CD pipelines, and cloud operations. This role is ideal for a hands-on leader who thrives in a fast-paced environment, is passionate about automation, scalability, and reliability, and can collaborate and communicate effectively.

Key Responsibilities:

Leadership & Strategy
- Lead and mentor DevOps teams, fostering a culture of collaboration, innovation, and continuous improvement.
- Define and implement DevOps strategies aligned with business goals and engineering best practices.
- Collaborate with software engineering, QA, and product teams to ensure seamless integration and deployment.

Infrastructure & Automation
- Oversee the design, implementation, and maintenance of scalable cloud infrastructure (AWS).
- Drive automation of infrastructure provisioning, configuration management, and deployment processes.
- Ensure high availability, performance, and security of production systems.

CI/CD & Monitoring
- Architect and maintain robust CI/CD pipelines to support rapid development and deployment cycles.
- Implement monitoring, logging, and alerting systems to ensure system health and performance.
- Manage incident response and root-cause analysis for production issues.

Governance & Compliance
- Ensure compliance with security policies, data protection regulations, and industry standards.
- Develop and enforce operational best practices, including disaster recovery and business continuity planning.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in DevOps, Site Reliability Engineering, or Infrastructure Engineering, with a solid understanding of best practices.
- 5+ years in a leadership or managerial role.
- Expertise in AWS and infrastructure-as-code tools (Terraform, CloudFormation).
- Strong experience with CI/CD tools (Jenkins, GitHub CI, Tekton) and container orchestration (Docker, Kubernetes).
- Proficiency in scripting languages (Python, Bash, Go, PowerShell).
- Excellent communication, problem-solving, and project management skills.
- Problem-solving mindset with a focus on continuous improvement.

Preferred Qualifications:
- Certifications in cloud technologies (AWS Certified DevOps Engineer, etc.).
- Experience with security and compliance frameworks (SOC 2, ISO 27001).
- Experience with Agile methodologies and familiarity with DevSecOps practices.
- Experience managing .NET environments and Kubernetes clusters.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities:
- Cloud Network Design: Design, implement, and manage network architectures for cloud environments, ensuring high availability, performance, and security across cloud platforms (GCP network architecture mandatory).
- Network Configuration & Management: Configure and manage cloud networking services such as Virtual Private Cloud (VPC), subnets, IP addressing, routing, VPNs, and DNS.
- Connectivity and Integration: Develop and maintain connectivity solutions between on-premise networks and cloud environments, including hybrid cloud configurations and Direct Connect/ExpressRoute solutions.
- Security & Compliance: Implement and enforce network security policies, including firewall rules, access control lists (ACLs), and VPNs, ensuring compliance with industry standards and best practices.
- Network Monitoring & Troubleshooting: Continuously monitor cloud network performance, identify issues, and troubleshoot network-related problems to minimize downtime and ensure smooth operation.
- Performance Optimization: Analyze network performance and recommend optimizations to reduce latency, improve bandwidth utilization, and enhance overall network efficiency in the cloud.
- Collaboration & Documentation: Collaborate with cloud architects, DevOps teams, and other stakeholders to ensure network architecture aligns with business goals. Document network designs, configurations, and operational procedures.
- Automation & Scripting: Leverage automation tools and scripting languages (e.g., Python, Bash, or Terraform) to automate network configuration, provisioning, and monitoring tasks.
- Support & Maintenance: Provide ongoing support for cloud network infrastructure, including regular updates, patches, and configuration adjustments as needed.
- Disaster Recovery & Continuity: Ensure that cloud network solutions are resilient and can recover quickly from network failures or disasters, including implementing disaster recovery (DR) strategies for network infrastructure.

Candidates must have recent, hands-on GCP cloud network architecture experience (at least 5 years).
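The VPC subnetting work described above can be illustrated with Python's standard `ipaddress` module; the 10.0.0.0/16 range and /20 subnet size below are illustrative choices, not values from the posting:

```python
# Carve a VPC-style CIDR block into equal subnets, e.g. one per
# availability zone or region. Standard library only.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))  # 2**(20-16) = 16 subnets of /20

first = subnets[0]                    # 10.0.0.0/20
hosts_per_subnet = first.num_addresses  # 2**12 = 4096 addresses per /20
```

The same module answers membership questions during troubleshooting, e.g. `ipaddress.ip_address("10.0.15.255") in first` tells you which subnet an address falls in.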

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

DevOps Engineer – GCP Automation (4 yrs)
Senior DevOps Engineer – GCP Automation (3-Month Contract, Immediate Start)
Contract: Time & Material · Remote/India flexible

Mission: Build a fully automated GCP provisioning service that launches complete, secure customer environments in ≤ 10 minutes.

Core Duties
- Terraform IaC: modules for GKE, Cloud Storage, Cloud DNS/SSL, and IAM; manage state and Git.
- Automation: Cloud Build / Cloud Functions workflows; Bash/Python scripts; Helm-based Node.js deployments; auto-SSL.
- Security: least-privilege IAM, RBAC, audit logging, monitoring.
- Integration: webhook endpoint, status tracking, error handling; docs and runbooks.

Must-Have
- Google Cloud certification (ACE minimum; DevOps/Architect preferred) – include certification ID.
- 3+ years GCP, expert Terraform, production GKE, Bash and Python, CI/CD (Cloud Build), strong IAM/security.
- Self-starter able to deliver solo on tight deadlines.

Nice-to-Have
- REST APIs, multi-tenant design, Node.js, Docker, Helm.

Deliverables
- Month 1: Terraform modules, working prototype, basic security, webhook.
- Month 2: Production-ready system (<10 min), full docs, knowledge transfer.
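The posting asks for scripted error handling around provisioning workflows. One common pattern there is retry with exponential backoff; a minimal sketch in plain Python, where the step names and delays are illustrative rather than anything from the posting:

```python
# Illustrative retry-with-exponential-backoff wrapper for a flaky
# provisioning step (e.g. a cloud API call that intermittently fails).
import time

def with_retries(step, attempts=4, base_delay=0.01):
    """Run `step()` up to `attempts` times, doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return step()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_step():
    """Simulated provisioning call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient provisioning error")
    return "environment-ready"

result = with_retries(flaky_step)
```

In production one would usually add jitter to the delay and retry only on error types known to be transient.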

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Frontend Engineer (Bangalore)
📍 Location: Bangalore, in-person only
💰 Salary: 20L – 40L annual + 📈 equity (founding-engineer tracks available)
⌛ Experience: 3+ years in frontend (backend experience is a plus)
💻 Skills: TypeScript, React, TanStack, UI libraries (Tailwind, Shadcn), testing (integration and performance optimization), LLM frameworks (AI SDKs), GCP/AWS/Azure, WebSocket, RPCs

About the role
As a Frontend Engineer at Runable, you'll play a key role in shaping the user-facing layer of our general automation platform. You'll work closely with our backend and infra teams to build fast, intuitive, and resilient interfaces that abstract away system complexity and deliver seamless AI-powered automation to our users.

What You'll Do

LLM & Agent Services:
• Build intuitive interfaces for interacting with multi-agent workflows using LangChain, LangGraph, the OpenAI SDK, etc.
• Design frontend components that support real-time AI orchestration, multi-step flows, and streaming responses

Frontend Development & UI Engineering:
• Develop rich, performant web apps using React, TypeScript, Tailwind, and component libraries like Shadcn
• Integrate and support document viewers and editors for Excel, PDF, Markdown, and more
• Build cross-platform experiences with React Native + Expo for mobile use cases

Cloud & DevOps:
• Deploy and manage infrastructure on GCP, AWS, or Azure using Terraform
• Author CI/CD pipelines for seamless delivery and rollback

Experimental Innovation (15–20% time):
• Explore cutting-edge LLM fine-tuning, memory architectures, and new agent frameworks

What We're Looking For
• 3+ years of frontend engineering experience
• Proficiency in React and TypeScript, plus some backend and infra knowledge
• Experience integrating and building rich document views – including Excel, PDF, and Markdown, with editing support
• Exposure to mobile app development using React Native + Expo
• Hands-on experience with LLM frameworks (LangChain, OpenAI SDK, etc.) and multi-agent systems
• Familiarity with popular UI libraries like Tailwind and Shadcn, and state/data tools like TanStack Query
• Strong UI/UX sensibility with a deep understanding of user behavior, flow design, and intuitive interactions
• Expertise in networking, load balancers, and high-performance remote connections
• Familiarity with Terraform/OpenTofu, CI/CD, and cloud platforms (GCP/AWS/Azure)
• Excitement to work in person from our Bangalore office in a fast-paced, collaborative environment

Job Task (a filter for candidates excited to try new things; anyone who loves frontend will enjoy this challenge): https://runable.notion.site/frontend-engineer-task?source=copy_link

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Purpose of the Role
We're looking for a Platform Engineer to lead the design and development of internal self-service workflows and automation for our internal developer platform. This role will:
- Build reusable workflows using Go, empowering developers to provision infrastructure, deploy applications, manage secrets, and operate at scale without needing to become Kubernetes or cloud experts
- Drive platform standardization and codification of best practices across cloud infrastructure, Kubernetes, and CI/CD
- Create developer-friendly APIs and experiences while maintaining a high bar for reliability, observability, and performance
- Design, develop, and maintain Go-based platform tooling and self-service automation that simplifies infrastructure provisioning, application deployment, and service management
- Write clean, testable code and workflows that integrate with our internal systems such as GitLab, ArgoCD, Port, AWS, and Kubernetes
- Partner with product engineering, SREs, and cloud teams to identify high-leverage platform improvements and enable adoption across brands

Mandatory Skills
- 4-6 years of experience in a professional cloud computing role with Kubernetes, Docker, and infrastructure-as-code
- A BA/BS in Computer Science or equivalent work experience
- Experience in Cloud/DevOps/SRE/Platform Engineering roles
- Proficiency in Golang for backend automation and system tooling
- Experience operating in Kubernetes environments and building automation for multi-tenant workloads
- Deep experience with AWS (or an equivalent cloud provider), infrastructure as code (e.g., Terraform), and CI/CD systems like GitLab CI
- Strong understanding of containers, microservice architectures, and modern DevOps practices
- Familiarity with GitOps practices using tools like ArgoCD, Helm, and Kustomize
- Strong debugging and troubleshooting skills across distributed systems

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Project Manager – IT Infrastructure & DevOps
Location: Chennai, India
Experience: 8+ Years
Employment Type: Full-time

Job Summary:
We are seeking an experienced and proactive Project Manager to lead IT Infrastructure and DevOps projects. The ideal candidate will be responsible for planning, executing, and closing projects efficiently while managing cross-functional teams. Excellent communication and leadership skills are essential for this role.

Key Responsibilities:
- Manage end-to-end IT infrastructure and DevOps projects, ensuring timely delivery within scope and budget.
- Coordinate with internal teams, vendors, and stakeholders to define project goals, deliverables, and timelines.
- Oversee infrastructure upgrades, cloud migrations, server provisioning, network operations, and system integrations.
- Lead CI/CD pipeline implementation, automation, monitoring, and maintenance initiatives.
- Identify and mitigate project risks and dependencies.
- Ensure compliance with IT security policies and industry standards.
- Maintain comprehensive project documentation and reporting.
- Communicate project status, escalations, and updates effectively to stakeholders and senior leadership.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 8 years of experience in IT Infrastructure and DevOps projects, with at least 3 years in a project management role.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP), networking, and systems administration.
- Proven experience managing CI/CD and automation tools (e.g., Jenkins, GitLab, Terraform, Ansible).
- Proficient in project management tools (e.g., JIRA, MS Project, Asana).
- PMP / PRINCE2 / Agile certification is a plus.
- Exceptional communication, leadership, and stakeholder management skills.
- Ability to work under pressure and adapt to changing priorities.

Location Preference: Candidates based in or willing to relocate to Chennai preferred.

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Role Overview: We are seeking a highly skilled Backend Developer with 5+ years of experience in backend development. The ideal candidate will have expertise in Java, Python, and Amazon Web Services (AWS). The candidate should have a strong understanding of system architecture, microservices, cloud computing, and best practices for scalable and maintainable applications. They will play a crucial role in designing and implementing complex backend systems while mentoring junior and mid-level developers.

Key Responsibilities:
Architect, develop, and maintain high-performance, scalable backend systems.
Design and implement microservices and distributed systems.
Optimize application performance, security, and scalability.
Develop and maintain robust APIs, including RESTful APIs.
Lead the integration of third-party services and cloud-based solutions.
Drive best practices for software engineering, testing, and DevOps automation.
Conduct code reviews, provide mentorship, and lead the backend development team.
Collaborate with cross-functional teams to define and implement new features.
Stay up to date with the latest industry trends, technologies, and best practices.

Requirements:
Strong expertise in Java and Python for backend development.
Proficiency in database management using MySQL and MongoDB.
Experience with Redis for caching and performance optimization.
Strong understanding of microservices architecture and system design.
Experience with containerization (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
Deep understanding of API development, authentication mechanisms (OAuth, JWT), and security best practices.
Expertise in CI/CD pipelines, DevOps automation, and infrastructure as code (Terraform, Ansible, etc.).
Strong problem-solving skills and the ability to optimize existing codebases.
Experience with agile methodologies, Git, and project management tools like Jira.

Desired Attributes:
Proven leadership and mentoring experience.
Ability to analyze and improve system architecture.
Strong communication and collaboration skills.
Passion for innovation and staying ahead of backend technology trends.
Ability to work in a fast-paced environment with tight deadlines.
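The role above asks for a deep understanding of authentication mechanisms such as JWT. As a minimal, stdlib-only sketch of the idea (not a production implementation; real services should use a maintained library such as PyJWT), here is how an HS256 token is signed and verified. The secret and claims are made up for illustration:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "role": "admin"}, b"demo-secret")
claims = verify_jwt(token, b"demo-secret")
```

Verification with the wrong secret fails, which is the whole point: the signature binds the claims to the key.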

Posted 1 week ago

Apply

55.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description
Must Have: Kubernetes, Terraform, Docker, Jenkins, pipeline automation, good experience in AWS
Good to Have: Experience in CI/CD pipeline creation
Location: Hyderabad
NP: Immediate joiners preferred
Education: B.Sc-IT/B.Tech/M.Tech/MCA (full time)

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We’re looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities
Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements
7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
Strong hands-on expertise with the AWS ecosystem and tools listed above.
Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
Familiarity with data engineering and GenAI workflows is a plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
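Terraform configurations are usually written in HCL, but Terraform also accepts an equivalent JSON syntax (`.tf.json`), which makes it possible to generate infrastructure definitions from a script. As a hedged illustration of the provisioning work described above, this Python sketch emits a minimal `.tf.json` document for one S3 bucket; the bucket name, region, and tags are hypothetical:

```python
import json

def s3_bucket_config(name: str, env: str) -> dict:
    """Build a minimal Terraform JSON (.tf.json) document for one S3 bucket."""
    return {
        "terraform": {"required_providers": {"aws": {"source": "hashicorp/aws"}}},
        "provider": {"aws": {"region": "us-east-1"}},  # assumed region
        "resource": {
            "aws_s3_bucket": {
                "data_lake": {
                    "bucket": f"{name}-{env}",
                    "tags": {"Environment": env, "ManagedBy": "terraform"},
                }
            }
        },
    }

config = s3_bucket_config("analytics-raw", "dev")
rendered = json.dumps(config, indent=2)  # write this to main.tf.json, then `terraform plan`
```

In practice a reusable Terraform module would replace ad-hoc generation like this; the sketch only shows the shape of the configuration Terraform consumes.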

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Purpose
The role of a Service Delivery Manager (SDM) in the Global Cloud Excellence Team is to ensure the smooth and efficient delivery of services to clients or customers. The SDM is responsible for managing the overall service delivery process, maintaining strong customer relationships, and ensuring that service level agreements (SLAs) are met. As an SDM you will lead our 24/7 Cloud Operations Team. The ideal candidate will be responsible for overseeing the daily operations of our cloud infrastructure, software applications, and platform, ensuring high availability, performance, and security. This role requires strong team management skills, technical expertise in cloud technologies, and a commitment to operational excellence. You will act as the bridge between development and operations by governing the implementation of continuous integration and continuous deployment (CI/CD) pipelines, optimizing cloud infrastructure, and enhancing system performance and security, facilitating seamless collaboration between development and operations teams to improve the speed and quality of software delivery and operations.

Reporting Manager: Head of ZDP India
This is a manager role.

Roles & Responsibilities
Team Leadership: Manage and mentor a team of cloud operations engineers and support staff. Foster a culture of collaboration, continuous improvement, and accountability within the team.
Operational Oversight: Ensure the 24/7 availability of cloud services, platform, and infrastructure. Monitor system performance and implement proactive measures to prevent downtime. Develop and enforce operational policies and procedures to enhance service delivery.
Incident Management: Lead incident response efforts, ensuring timely resolution of issues and minimizing impact on services. Conduct post-incident reviews to identify root causes and implement corrective actions. Enable ITIL processes.
Capacity Planning: Analyze current and future capacity needs to ensure optimal resource allocation. Collaborate with operations teams to plan and execute cloud infrastructure upgrades and expansions.
Performance Metrics: Define and track key performance indicators (KPIs) for cloud operations. Prepare regular reports for senior management on operational performance and service levels.
SLA Management: Ensure compliance with the agreed SLAs and determine demands for IT services. Act as the first escalation instance for the customer regarding service operation. Ensure compliance with, and continuous improvement of, the agreed processes. Create the monthly service level reports with the status of the supported services and agreed KPIs. Conduct regular service review meetings with the customer.
Collaboration: Work closely with development, operations, security, and product teams to align operations with business objectives. Participate in cross-functional projects to improve overall service delivery and customer satisfaction.
Budget Management: Assist in the development and management of the operations budget. Identify cost-saving opportunities while maintaining service quality.

Qualifications & Work Experience
Education: Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience: 8-12 years of experience in cloud operations, IT operations, or a related field. Proven experience in managing 24/7 operations teams. Willing to provide on-call support as and when needed.
Technical Skills: Strong knowledge of cloud platforms (e.g., Azure, AWS, and Google Cloud). Familiarity with infrastructure as code (IaC) tools and practices (e.g., Terraform, Bicep, ARM). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Datadog). Experience with ticketing tools (e.g., ServiceNow, JIRA, ADO).
Soft Skills: Excellent team management skills. A strong focus on customers and results. Strong problem-solving abilities and attention to detail. Effective communication skills, both verbal and written.

ZEISS in India
ZEISS in India is headquartered in Bengaluru and present in the fields of Industrial Quality Solutions, Research Microscopy Solutions, Medical Technology, Vision Care and Sports & Cine Optics. ZEISS India has 3 production facilities, an R&D center, Global IT services and about 40 Sales & Service offices in almost all Tier I and Tier II cities in India. With 2200+ employees and continued investments over 25 years in India, ZEISS’ success story in India is continuing at a rapid pace. Further information at ZEISS India (https://www.zeiss.co.in/corporate/home.html)
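The monthly service level reports mentioned above typically boil down to comparing measured availability against an SLA target. A minimal sketch of that calculation (the reporting window, incident durations, and 99.9% target are illustrative assumptions):

```python
def availability_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Availability as a percentage of the reporting window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def sla_report(month_minutes: int, incident_minutes: list[int], target: float) -> dict:
    """Summarise whether the month's measured availability met the SLA target."""
    downtime = sum(incident_minutes)
    pct = availability_pct(month_minutes, downtime)
    return {"availability": round(pct, 3), "target": target, "met": pct >= target}

# A 30-day month with two outages totalling 54 minutes, against a 99.9% SLA.
report = sla_report(30 * 24 * 60, [42, 12], 99.9)
```

A 99.9% monthly target allows roughly 43 minutes of downtime in a 30-day month, so the 54 minutes in this example breach the SLA.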

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks, and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
Languages: Python, SQL, Bash
ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use.
Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
Manage the technical roadmap and sprint cadence for a team of 3-5 AI engineers; coach them on best practices.
Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
Evaluate emerging GenAI and agentic tools; run proofs of concept and guide build-vs-buy decisions.

Qualifications:
10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
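At the core of the RAG pipelines described above is a retrieval step: embed the query, score it against a document index, and pass the top hits to the LLM as context. This toy sketch uses term-frequency vectors and cosine similarity in place of learned embeddings and a vector database (FAISS, Pinecone, etc.), purely to illustrate the mechanics:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a term-frequency vector. Real RAG stacks use model embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Terraform provisions cloud infrastructure declaratively",
    "LLMs generate text conditioned on a prompt",
    "Kubernetes schedules containers across a cluster",
]
top = retrieve("which tool provisions infrastructure", docs)
```

A production pipeline would then template `top` into the LLM prompt; the retrieval contract (query in, ranked passages out) is the same regardless of the embedding model or index behind it.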

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Company Description
North Hires is a premier consulting firm specializing in Custom Software Development, Recruitment, Sourcing, and Executive Search services. Our team of experienced professionals delivers exceptional recruitment solutions tailored to each client's unique needs. We provide various services like Custom Software Development, Recruitment Process Outsourcing, Virtual Employees/Agents, and Digital Marketing Solutions to empower businesses to thrive.

Role Description
This is a full-time remote role for an AWS Cloud Manager at North Hires. The AWS Cloud Manager will be responsible for managing the AWS cloud infrastructure, providing technical support, troubleshooting system issues, and leading and managing a team. This role is located in Hyderabad with the option for some work-from-home flexibility.

Qualifications
Information Technology and Technical Support skills
Troubleshooting expertise
Team Leadership and Team Management abilities
Experience with AWS and cloud technologies
Strong problem-solving skills
Excellent communication and interpersonal skills
Bachelor's degree in Computer Science or a related field
Extensive experience with the AWS toolkit, Kubernetes, Docker, and Terraform

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

The ideal candidate for this position should have hands-on experience in Site Reliability and DevOps, along with expertise in Kubernetes, Docker, Terraform, and CI/CD. As a Level M professional, you will be working US EST hours, with Pune as the preferred location.

Your responsibilities will include designing, developing, and deploying software systems and infrastructure to enhance reliability, scalability, and performance. You will be expected to identify manual processes that can be automated to improve operational efficiency. Implementing monitoring and alerting systems to proactively identify and address issues will be a key part of your role. Collaborating with customers on architecture reviews and developing new features to enhance the reliability and scalability of the platform will also be part of your duties.

Working closely with various application teams to understand platform issues and design solutions for monitoring and issue resolution will be essential. You will be responsible for designing recovery and resiliency strategies for different applications. Identifying opportunities for technological improvements and the need for new tools to support capacity planning, disaster recovery, and resiliency will also be part of your role. Additionally, you will architect and implement packages/modules that can serve as blueprints for implementation by different application teams.
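Monitoring and alerting of the kind described above often reduces to tracking an error rate over a sliding window of requests and firing when it crosses a threshold. A minimal Python sketch (the window size and threshold are illustrative assumptions):

```python
from collections import deque

class ErrorRateMonitor:
    """Alert when the error rate over a sliding window of requests crosses a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # 1 per failed request, 0 per success
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one request outcome; return True when an alert should fire."""
        self.samples.append(0 if ok else 1)
        rate = sum(self.samples) / len(self.samples)
        return rate > self.threshold

monitor = ErrorRateMonitor(window=10, threshold=0.2)
# 7 successes followed by 3 failures: the alert fires once failures
# push the windowed error rate above 20%.
alerts = [monitor.record(ok) for ok in [True] * 7 + [False] * 3]
```

Production systems (Prometheus alert rules, for instance) express the same idea declaratively over time-bucketed counters rather than per-request in application code.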

Posted 1 week ago

Apply

2.0 years

5 - 18 Lacs

Udaipur, Rajasthan

On-site

Job Title: DevOps Engineer (AWS/Azure)
Location: Udaipur, Rajasthan
Employment Type: Full-time

Job Summary: We are seeking a skilled and proactive DevOps Engineer with hands-on experience in AWS and/or Azure to join our team. In this role, you will design, implement, and maintain scalable, secure, and highly available cloud infrastructure and CI/CD pipelines, enabling rapid and reliable software delivery.

Key Responsibilities:
Develop, maintain, and improve CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
Deploy and manage infrastructure using Infrastructure as Code (IaC) tools such as Terraform, AWS CloudFormation, or Azure Bicep/ARM templates.
Automate system configuration and management using tools like Ansible, Chef, or Puppet.
Monitor, troubleshoot, and optimize infrastructure performance using tools like CloudWatch, Azure Monitor, Prometheus, and Grafana.
Implement robust security, compliance, and backup strategies across cloud infrastructure.
Collaborate with development teams to ensure smooth and efficient software delivery workflows.
Manage containerized applications using Docker and orchestrate them with Kubernetes (EKS/AKS).
Ensure high availability and disaster recovery in production environments.
Stay up to date with the latest DevOps trends, tools, and best practices.

Required Skills & Qualifications:
2+ years of experience as a DevOps Engineer or in a similar role
Strong understanding of Linux/Unix systems and scripting
Experience with containerization and container orchestration
Expertise in one or more IaC tools: Terraform, CloudFormation, or ARM/Bicep
Knowledge of CI/CD pipelines and automation frameworks
Familiarity with Git, GitOps, and version control workflows

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,800,000.00 per year
Schedule: Morning shift
Location: Udaipur City, Rajasthan (Preferred)
Work Location: In person
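A common building block in the CI/CD pipelines described above is a post-deploy health gate: poll the newly deployed service with exponential backoff and only promote the release once it reports healthy. A minimal sketch (the probe and retry parameters are hypothetical):

```python
import time

def wait_until_healthy(check, retries: int = 5, base_delay: float = 1.0) -> bool:
    """Poll a health check with exponential backoff; True once it passes."""
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return False

# Simulated service that becomes healthy on the third probe.
state = {"probes": 0}
def probe() -> bool:
    state["probes"] += 1
    return state["probes"] >= 3

healthy = wait_until_healthy(probe, retries=5, base_delay=0.001)
```

In a real pipeline `check` would hit the service's health endpoint (e.g. over HTTP), and a `False` result after all retries would trigger an automatic rollback.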

Posted 1 week ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Summary
We are seeking a highly experienced Lead DevOps Engineer to drive the strategy, design, and implementation of our DevOps infrastructure across cloud and on-premises environments. This role requires strong leadership and hands-on expertise in AWS, Azure DevOps, and Google Cloud Platform (GCP), along with deep experience in automation, CI/CD, container orchestration, and system scalability. As a technical leader, you will mentor DevOps engineers, collaborate with cross-functional teams, and establish best practices to ensure reliable, secure, and scalable infrastructure that supports our product lifecycle.

Key Responsibilities:
Oversee the design, implementation, and maintenance of scalable and secure infrastructure on cloud and on-premises environments cost-effectively.
Implement and manage infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Manage and optimize CI/CD pipelines to accelerate development cycles and ensure seamless deployments.
Implement robust monitoring solutions to proactively identify and resolve issues.
Lead incident response efforts to minimize downtime and impact on clients.
Develop and implement automation strategies to streamline deployment, monitoring, and maintenance processes.
Mentor and guide junior/mid-level DevOps engineers, fostering a culture of learning and accountability.
Collaborate with software developers, quality assurance engineers, and IT professionals to guarantee smooth deployment, automation, and management of software infrastructure.
Ensure high standards for security, compliance, and data protection across the infrastructure.
Stay up to date with industry trends and emerging technologies, assessing their potential impact and recommending adoption where appropriate.
Maintain comprehensive documentation of systems, processes, and procedures to support knowledge sharing and team efficiency.

Required Skills and Qualifications
6+ years of hands-on experience in DevOps, infrastructure, or related roles
Strong knowledge of cloud platforms including Azure, AWS, and GCP
Proven experience in containerization using Docker and Kubernetes
Advanced knowledge of Linux systems and networking
Strong experience with CI/CD tools like Jenkins, GitHub Actions, Bitbucket Pipelines, TeamCity
Solid experience in designing, implementing, and maintaining CI/CD pipelines for automated build, test, and deployment processes
Deep understanding of automation, scripting, and Infrastructure as Code (IaC) with Terraform and Ansible
Strong problem-solving and troubleshooting skills, with the ability to identify root causes and implement effective solutions
Excellent leadership, team building, and communication skills
Bachelor’s degree in Computer Science, IT, Engineering, or equivalent practical experience

Preferred Skills and Qualifications
Relevant certifications (e.g., AWS Certified DevOps Engineer – Professional, GCP DevOps Engineer, or Azure Solutions Architect)
Experience working in fast-paced product environments
Knowledge of security best practices and compliance standards

Key Competencies
Leadership and mentoring capabilities in technical teams
Strong strategic thinking and decision-making skills
Ability to manage multiple priorities in a deadline-driven environment
Passion for innovation, automation, and continuous improvement
Clear, proactive communication and collaboration across teams

Why Join Us
At Admaren, we are transforming the maritime domain with state-of-the-art technology. As a Lead DevOps Engineer, you will be at the helm of infrastructure innovation, driving mission-critical systems that support global operations. You’ll have the autonomy to implement cutting-edge practices, influence engineering culture, and grow with a team committed to excellence. Join us to lead from the front and shape the future of maritime software systems.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Technology: Cloud Infrastructure Engineer with Azure, Kubernetes, Terraform
Experience: 5+ Years
Location: 100% Remote
Duration: 6 months
Cost: 80K per month
Working Time: 4:30 PM to 12:30 AM IST or 7:30 PM to 3:30 AM IST

PRIMARY SKILLS
• 5+ years of experience in cloud engineering, infrastructure architecture, or platform engineering roles.
• Experience with Kubernetes operations and architecture in production environments.
• Strong knowledge of cloud IaaS and PaaS services, and how to design reliable solutions leveraging them (e.g., VMs, load balancers, managed databases, identity platforms, messaging queues, etc.).
• Advanced proficiency in Terraform and Git-based infrastructure workflows.
• Experience building and maintaining CI/CD pipelines.
• Solid scripting abilities in Python, Bash, or PowerShell.
• A strong understanding of infrastructure security, governance, and identity best practices.
• Ability to work collaboratively across engineering.

SECONDARY SKILLS (IF ANY)
• Familiarity with GitOps tooling.
• Experience with policy-as-code and container security best practices.
• Experience with Microsoft Power Platform (Dynamics 365).
• Google Cloud knowledge.

Posted 1 week ago

Apply

0.0 - 5.0 years

10 - 16 Lacs

Chennai, Tamil Nadu

On-site

Position Overview: The Cloud Platform Engineer will be responsible for developing and maintaining Terraform modules and patterns for AWS and Azure. These modules and patterns will be used for platform landing zones, application landing zones, and application infrastructure deployments. The role involves managing the lifecycle of these patterns, including releases, bug fixes, feature integrations, and updates to test cases.

Key Responsibilities:
Develop and release Terraform modules, landing zones, and patterns for AWS and Azure.
Provide lifecycle support for patterns, including bug fixing and maintenance.
Integrate new features into existing patterns to enhance functionality.
Release updated and new patterns to ensure they meet current requirements.
Update and maintain test cases for patterns to ensure reliability and performance.

Qualifications:
· 5+ years of AWS/Azure cloud migration experience.
· Proficiency in cloud compute (EC2, EKS, Azure VM, AKS) and storage (S3, EBS, EFS, Azure Blob, Azure Managed Disks, Azure Files).
· Strong knowledge of AWS and Azure cloud services.
· Expert in Terraform.
· AWS/Azure certification preferred.

Mandatory Skills: Cloud AWS DevOps (migration experience min 5 years)
Relevant Experience: 5-8 Years

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,036,004.19 - ₹1,677,326.17 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday, Morning shift
Supplemental Pay: Performance bonus, Yearly bonus
Experience: DevOps: 10 years (Required); AWS: 5 years (Required); Azure: 5 years (Required); Terraform: 5 years (Required); System migration: 5 years (Required)
Location: Chennai, Tamil Nadu (Preferred)

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Quant Engineer
Location: Remote

Job Description:
Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus. Good understanding of maths and finance, as the role interacts with quant devs, analysts, and traders. Familiarity with e.g. PnL, greeks, volatility, partial derivatives, the normal distribution, etc. Financial and/or trading exposure is nice to have, particularly energy commodities.

Responsibilities:
Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring, and back-testing are in place.
Translate trader or quant analyst needs into software product requirements.
Prototype and implement data pipelines.
Coordinate closely with analysts and quants during development of models, acting as a technical support and coach.
Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
Transform proofs of concept into a larger deployable product in Shell and outside.
Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities.
Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both technical and non-technical audiences.
Mentor and coach other teammates who are upskilling in Quant Engineering.

Professional Qualifications & Skills
Educational Qualification: Graduation/postgraduation/PhD with 8+ years' work experience as a software developer/data scientist. Degree in STEM, computer science, engineering, mathematics, or a relevant field of applied mathematics. Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment.

Required Skills
Expert in core Python with the Python scientific stack/ecosystem (incl. pandas, numpy, scipy, stats), and a second strongly typed language (e.g. C#, C++, Rust, or Java).
Expert in application design, security, release, testing, and packaging.
Mastery of SQL/no-SQL databases and data pipeline orchestration tools.
Mastery of concurrent/distributed programming and performance optimisation methods.
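For the finance concepts this role touches (greeks, volatility, the normal distribution), the Black–Scholes formula for a European call is a compact worked example: price and delta both follow from the standard normal CDF, which the Python stdlib exposes via math.erf. A sketch with illustrative parameters (no dividends assumed):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot: float, strike: float, rate: float, vol: float, t: float):
    """Black–Scholes price and delta for a European call, no dividends."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    price = spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)
    delta = norm_cdf(d1)  # sensitivity of price to the spot, one of the greeks
    return price, delta

# At-the-money call: spot = strike = 100, 5% rate, 20% vol, 1 year to expiry.
price, delta = bs_call(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0)
```

In production a quant library (or scipy.stats.norm) would supply the distribution functions, but the dependency-free version makes the formula itself visible.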

Posted 1 week ago

Apply