
581 GitHub Actions Jobs - Page 21

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - Python Full Stack Developer. We are looking for a talented and motivated Python developer to join our development team. The ideal candidate will be responsible for building high-quality applications, writing clean and efficient code, and collaborating with cross-functional teams to develop innovative software solutions.

Responsibilities:
- Proficiency with server-side languages such as Python and GoLang, and with web frameworks.
- Experience with database technologies such as MongoDB, Redis, and PostgreSQL.
- Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
- Proficiency with the Django framework.
- Familiarity with JavaScript frameworks such as AngularJS and React.
- Proficiency with container technologies like Docker.
- Experience containerizing Django applications with databases (MongoDB and PostgreSQL).
- Exposure to a CI/CD framework is an added advantage.
- Familiarity with running and orchestrating Docker images with Kubernetes.
- Understanding of technical debt.

Qualifications we seek in you!

Minimum Qualifications:
- Bachelor's degree

Preferred Qualifications/Skills:
- Experience with Docker, Kubernetes, or cloud services (AWS, GCP, Azure).
- Familiarity with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI.
- Exposure to data processing libraries (e.g., Pandas, NumPy).
- Understanding of asynchronous programming (asyncio, Celery).
- Experience with unit testing frameworks (pytest, unittest).
- Interest or experience in machine learning or data engineering.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
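The asynchronous-programming skill this posting asks for can be pictured with a minimal asyncio sketch. The names below are illustrative, and `fetch_record` merely simulates a non-blocking I/O call (a database query or HTTP request) rather than hitting a real service:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Stand-in for a non-blocking I/O call (database query, HTTP request, ...).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def fetch_all(record_ids):
    # gather() runs the coroutines concurrently, so the total latency is
    # roughly one sleep rather than one sleep per record.
    return await asyncio.gather(*(fetch_record(i) for i in record_ids))

if __name__ == "__main__":
    print(asyncio.run(fetch_all([1, 2, 3])))
```

Celery covers the other half of the requirement (distributed background tasks); asyncio, as here, covers in-process concurrency for I/O-bound work.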

Posted 1 month ago

Apply

2.0 - 4.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Job Skills:
- Strong proficiency in React.js and TypeScript.
- Experience integrating frontend applications with JavaScript SDKs.
- Knowledge of React Query to enhance SDK-based data fetching and state synchronization.
- Understanding of state management using Zustand or local component state.
- Proficiency in building real-time data dashboards and network security visualizations.
- Familiarity with performance optimization techniques (React Profiler, lazy loading, memoization).
- Knowledge of frontend security best practices.
- Ability to work with Git, GitHub Actions, and CI/CD.

Role: We are looking for a UI Engineer to build and optimize the frontend for our API-driven cybersecurity platform. You will work with a JavaScript SDK to consume APIs, ensuring seamless integration, real-time security dashboards, and user-friendly interfaces. We prioritize simplicity, speed, and efficiency, leveraging React.js, out-of-the-box UI solutions like Material-UI, and modern visualization tools to keep development agile and competitive. Since our backend services are exposed via an SDK, you will focus entirely on UI development, integrating the SDK efficiently and ensuring optimal frontend performance.

Responsibilities:
- Develop and maintain React.js applications consuming APIs via a JavaScript SDK.
- Build data-heavy security dashboards with interactive visualizations (Recharts or ECharts).
- Use React Query to enhance SDK-based data fetching where needed.
- Implement state management using Zustand or local component state when necessary.
- Optimize UI for performance, accessibility, and responsiveness.
- Develop WebSockets or SDK event listeners for real-time updates.
- Write unit and integration tests using Jest, Cypress, and React Testing Library.
- Collaborate with backend engineers to improve SDK usability and frontend integration.
- Design, develop, and test APIs for the UI using necessary technologies, including but not limited to GraphQL, Node.js, and/or Java Spring.
- Participate in code reviews, security audits, and UI performance testing.

Preferred Qualifications:
- Experience in cybersecurity, network security, or data visualization.
- Prior work on real-time data dashboards or security monitoring tools.
- Familiarity with UI/UX design tools (Figma or Zeplin).
- Knowledge of Storybook for UI documentation and component testing.
- Understanding of WebSockets or real-time event-driven UI updates.

Posted 1 month ago

Apply

6.0 - 9.0 years

18 - 20 Lacs

Pune

Work from Office

Notice Period: Immediate Joiner Only
Duration: 6 Months (Possible Extension)
Shift Timing: 11:30 AM to 9:30 PM IST

About the Role: We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

Key Responsibilities:
- Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
- Configure and maintain observability tools (logs, metrics, traces, alerts).
- Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
- Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
- Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
- Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
- Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
- Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

Required Skills & Experience:
- 5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
- Strong coding/scripting skills and the ability to review production-grade backend code.
- Hands-on experience with Kubernetes in production, preferably on GKE.
- Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
- Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, secret management).
- Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
- Strong communication and documentation skills, capable of driving improvements through PRs and design reviews.

Tech Stack & Tools:
- Cloud & Orchestration: GKE, Kubernetes
- IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
- Monitoring & Alerting: Datadog, PagerDuty
- Databases & Networking: CloudSQL, Cloudflare
- Security & Access Control: Secret Management, IAM

Driving Results:
- A good individual contributor and a good team player.
- Flexible attitude towards work, as per the needs.
- Proactively identifies and communicates issues and risks.

Other Personal Characteristics:
- Dynamic, engaging, self-reliant developer.
- Ability to deal with ambiguity.
- Maintains a collaborative and analytical approach.
- Self-confident and humble.
- Open to continuous learning.
- An intelligent, rigorous thinker who can operate successfully amongst bright people.
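For context on the HPA tuning this role mentions: the Kubernetes Horizontal Pod Autoscaler's core scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A sketch of that rule (ignoring the HPA's tolerance band, readiness handling, and stabilization windows):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_value: float,
                         target_value: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * (current_value / target_value))

# Example: 3 pods averaging 200m CPU against a 100m target scale to 6.
```

Tuning the target value (and min/max replica bounds) against this formula is what keeps workloads from flapping or under-provisioning.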

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do: In this vital role you will apply domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans, for the software development teams. This role involves working closely with business collaborators, data engineers, and AI/ML engineers to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence, and to identify test cases and scenarios that ensure the quality and performance of IT systems. You will collaborate with the Product Owner and developers to maintain an efficient and consistent process, ensuring quality deliverables from the team.

Roles & Responsibilities:
- Collaborate with System Architects and Product Owners to manage business analysis activities, ensuring alignment with engineering and product goals.
- Monitor, troubleshoot, and resolve issues related to case intake and case processing across multiple systems.
- Work with Product Owners and customers to define scope and value for new developments.
- Stay focused on software development to ensure it meets requirements, providing proactive feedback to collaborators.
- Design, implement, and maintain automated CI/CD pipelines for seamless software integration and deployment.
- Collaborate with developers to enhance application reliability and scalability.
- Troubleshoot deployment and infrastructure issues, ensuring high availability.
- Collaborate with business subject matter experts, testing teams, and Product Management to prioritize release scopes and groom the product backlog.
- Maintain and ensure the quality of documented user stories/requirements in tools like Jira.

Basic Qualifications:
- Master's degree and 1 to 3 years of Life Science/Biotechnology/Pharmacology/Information Systems experience, OR
- Bachelor's degree and 3 to 5 years of Life Science/Biotechnology/Pharmacology/Information Systems experience, OR
- Diploma and 7 to 9 years of Life Science/Biotechnology/Pharmacology/Information Systems experience

Preferred Qualifications:

Functional Skills (Must-Have):
- Experience in MuleSoft, Java, J2EE, and database programming.
- Demonstrated expertise in monitoring, troubleshooting, and resolving data and system issues.
- Proficiency in CI/CD tools (Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps).
- Hands-on experience with the ITIL framework and methodologies like Scrum.
- Knowledge of the SDLC process, including requirements, design, testing, data analysis, and change control.

Functional Skills (Good to Have):
- Experience in managing GxP systems and implementing GxP projects.
- Knowledge of Artificial Intelligence (AI), Robotic Process Automation (RPA), Machine Learning (ML), Natural Language Processing (NLP), and Natural Language Generation (NLG) automation technologies, along with building business requirements for them.
- Knowledge of cloud technologies such as AWS.
- Excellent communication skills and the ability to work with Product Managers and business collaborators to define scope and value for new developments.
- Experience with DevOps, Continuous Integration and Continuous Delivery methodology, and CRM systems.

Soft Skills:
- Excellent analytical and troubleshooting skills
- Able to work under minimal supervision
- Strong verbal and written communication skills
- High degree of initiative and self-motivation
- Team-oriented, with a focus on achieving team goals
- Ability to manage multiple priorities successfully
- Ability to deal with ambiguity and think on their feet

Shift Information: This position may require you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work evening or night shifts, as required based on business requirements.

Posted 1 month ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

South Goa, Pune

Hybrid

We are looking for a DevOps leader with deep experience in the Python build/CI/CD ecosystem for an exciting, cutting-edge stealth startup in Silicon Valley.

Responsibilities:
- Design and implement complex CI/CD pipelines in Python, leveraging cutting-edge Python packaging, dependency management, and CI/CD practices.
- Optimize the speed and reliability of builds.
- Define test automation tools, architecture, and integration with CI/CD platforms, and drive test automation implementation in Python.
- Implement configuration management to set standards and best practices.
- Manage and optimize cloud infrastructure resources: GCP, AWS, or Azure.
- Collaborate with development teams to understand application requirements and optimize deployment processes.
- Work closely with operations teams to ensure smooth transition of applications into production.
- Develop and maintain documentation for system configurations, processes, and procedures.

Eligibility:
- 5-12 years of experience in DevOps, with a minimum of 2-5 years of experience in the Python build ecosystem: Python packaging, distribution, concurrent builds, dependencies, environments, test framework integrations, linting; pip, Poetry, uv, flit.
- CI/CD: pylint, coverage.py, cProfile, Python scripting, Docker, k8s, IaC (Terraform, Ansible, Puppet, Helm). Platforms: TeamCity (preferred), Jenkins, GitHub Actions, CircleCI, or TravisCI.
- Test automation: pytest, unittest, integration tests, Playwright (preferred).
- Cloud platforms: AWS, Azure, or GCP, and platform-specific CI/CD services and tools.
- Familiarity with logging and monitoring tools (e.g. Prometheus, Grafana).
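One common technique behind "optimize the speed and reliability of builds" is content-addressed caching: hash the build inputs and skip a step whenever the digest is unchanged. A stdlib-only sketch of that idea (function and parameter names are illustrative, not from any particular build tool):

```python
import hashlib

def inputs_digest(named_blobs) -> str:
    """Stable digest over (name, bytes) build inputs. Sorting normalizes
    iteration order so the cache key is deterministic across runs."""
    h = hashlib.sha256()
    for name, data in sorted(named_blobs):
        h.update(name.encode("utf-8"))
        h.update(b"\0")  # separator so adjacent fields cannot collide
        h.update(data)
    return h.hexdigest()

def needs_rebuild(named_blobs, cached_digest: str) -> bool:
    # Rebuild only when the inputs' digest differs from the cached one.
    return inputs_digest(named_blobs) != cached_digest
```

Real pipelines apply the same principle at larger granularity: Docker layer caching, pip/poetry lockfile-keyed caches, and CI cache actions all key stored artifacts on a hash of their inputs.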

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

About the Role:
Grade Level (for internal use): 11

S&P Global Mobility

The Role: Senior Data Engineer (AWS Cloud, Python)

We are seeking a Senior Data Engineer with deep expertise in AWS cloud development to join our fast-paced data engineering organization. This role is critical to both the development of new data products and the modernization of existing platforms. The ideal candidate is a seasoned data engineer with hands-on experience designing, building, and optimizing large-scale data pipelines and architectures in both on-premises (e.g., Oracle) and cloud environments (especially AWS). This individual will also serve as a cloud development expert, mentoring and guiding other data engineers as they enhance their cloud skillsets.

Responsibilities:

Data Engineering & Architecture
- Design, build, and maintain scalable data pipelines and data products.
- Develop and optimize ELT/ETL processes using a variety of data tools and technologies.
- Support and evolve data models that drive operational and analytical workloads.
- Modernize legacy Oracle-based systems and migrate workloads to cloud-native platforms.

Cloud Development & DevOps (AWS-Focused)
- Build, deploy, and manage cloud-native data solutions using AWS services (e.g., S3, Lambda, Glue, EMR, Redshift, Athena, Step Functions).
- Implement CI/CD pipelines and IaC (e.g., Terraform or CloudFormation), and monitor cloud infrastructure for performance and cost optimization.
- Ensure data platform security, scalability, and resilience in the AWS cloud.

Technical Leadership & Mentoring
- Act as a subject matter expert on cloud-based data development and DevOps best practices.
- Mentor data engineers on AWS architecture, infrastructure as code, and cloud-first design patterns.
- Participate in code and architecture reviews, enforcing best practices and high-quality standards.

Cross-functional Collaboration
- Work closely with product managers, data analysts, software engineers, and other stakeholders to understand business needs and deliver end-to-end solutions.
- Support and evolve the roadmap for data platform modernization and new product delivery.

What We're Looking For:

Required Qualifications
- 7+ years of experience in data engineering or an equivalent technical role.
- 5+ years of hands-on experience with AWS cloud development and DevOps.
- Strong expertise in SQL, data modeling, and ETL/ELT pipelines.
- Deep experience with Oracle (PL/SQL, performance tuning, data extraction).
- Proficiency in Python and/or Scala for data processing tasks.
- Strong knowledge of cloud infrastructure (networking, security, cost optimization).
- Experience with infrastructure as code (Terraform).
- Familiarity with CI/CD pipelines and DevOps tooling (e.g., Jenkins, GitHub Actions).

Preferred (Nice to Have)
- Experience with Google Cloud Platform (GCP) and Snowflake.
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Experience with modern orchestration tools (e.g., Airflow, dbt).
- Exposure to data cataloging, governance, and quality tools.
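As a toy illustration of the transform stage in the ELT/ETL pipelines this role covers, here is a minimal pandas aggregation of the kind such pipelines run after extraction. The column names and data are invented for the example; in a real pipeline the frame would be loaded from S3, Athena, or Oracle:

```python
import pandas as pd

# Extracted records (in practice loaded from S3/Athena/Oracle).
raw = pd.DataFrame({
    "region": ["EU", "EU", "US", "US", "US"],
    "sales":  [10,   20,   5,    15,   30],
})

# Transform: aggregate to one row per region for the analytical layer.
summary = raw.groupby("region", as_index=False)["sales"].sum()
```

The same groupby-then-load shape scales up via Glue or EMR jobs when single-machine pandas no longer fits the data.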

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Role Overview: We are looking for a passionate and driven Software Engineer with 2-4 years of experience in Python and Java. The ideal candidate will have hands-on experience with containerization, cloud platforms (AWS or GCP), and microservices architecture, and will be proficient in debugging. You will work as part of an agile development team to build and maintain scalable, high-performance systems. Strong team collaboration skills are crucial to success in this role.

Key Responsibilities:
- Develop and maintain robust backend services and applications using Python and Java.
- Work with microservices architecture to design, implement, and deploy scalable solutions.
- Containerize applications using Docker and work with Kubernetes for orchestration and deployment.
- Work hands-on with AWS or Google Cloud Platform (GCP), utilizing cloud-native services and resources.
- Troubleshoot, debug, and optimize application code and systems for performance and reliability.
- Write clean, maintainable, and efficient code, following industry best practices.

Required Qualifications:
- 2-4 years of professional experience in Python and Java development.
- Familiarity with containerization technologies (e.g., Docker) and orchestration tools like Kubernetes.
- Experience with deploying and managing applications on AWS or Google Cloud Platform (GCP).
- Understanding of microservices architecture and how to build and maintain distributed systems.
- Strong debugging skills and the ability to solve complex technical issues in large systems.
- Experience with version control tools (e.g., Git) and CI/CD tools (e.g., Jenkins, GitLab CI, GitHub Actions).
- Knowledge of RESTful APIs and how to build scalable backend services.
- Strong communication skills and the ability to collaborate in a team environment.
- Ability to adapt to changing requirements and contribute in an Agile, fast-paced development environment.
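Reliability work in the distributed systems this posting describes often leans on retry-with-exponential-backoff around flaky network calls. A minimal sketch; the decorator and its parameters are illustrative, not a specific library's API:

```python
import functools
import time

def retry(attempts: int = 3, base_delay: float = 0.01):
    """Retry a flaky call, doubling the wait between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of retries: surface the real error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_lookup():
    # Simulates a service that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```

Production systems usually add jitter to the delay and retry only on errors known to be transient, so that a hard failure is not masked by repeated attempts.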

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Role Overview: We are hiring a Principal Engineer to join our Engineering Excellence organization, reporting to the VP of Engineering Excellence. In this strategic enabler role, you'll drive the evolution of our developer toolchains, build systems, and CI/CD pipelines, empowering engineering teams to deliver high-quality software faster and more efficiently. Rather than directly executing all initiatives, you'll influence, align, and coordinate cross-functional teams to adopt best practices, modern automation solutions, and scalable workflows. Your impact will be measured by how effectively you enable teams to deliver, not by doing it all yourself. You must have the technical skills and experience to evaluate how we operate today, assess and recommend tooling choices, and coach teams on good practice. You'll partner with DevOps teams, engineering teams, engineering leaders, architects, and security to streamline the end-to-end development experience, from local dev environments through to production delivery. You'll identify friction points, guide toolchain modernization, and drive adoption of shared frameworks and standards.

Responsibilities:
- Define and promote a company-wide strategy for developer tooling and CI/CD.
- Influence and coordinate engineering teams to align on scalable, secure development workflows.
- Identify bottlenecks in the developer experience and lead initiatives to address them.
- Drive adoption of modern, automation-first practices across build, test, and release.
- Partner with DevOps teams to deliver reliable, self-service tooling.
- Foster a culture of engineering excellence through mentorship, guidance, and collaboration.

Requirements:
- Proven experience with developer tooling, CI/CD pipelines, and build automation at scale.
- Strong influencing and communication skills across technical and non-technical stakeholders.
- Hands-on knowledge of tools like GitHub Actions, Jenkins, ArgoCD, Gradle, or Maven.
- Passion for improving developer productivity, consistency, and release velocity.
- Systems-thinking mindset with a focus on security, scalability, and maintainability.

Posted 2 months ago

Apply

7.0 - 11.0 years

0 - 1 Lacs

Pune, Ahmedabad, Delhi / NCR

Work from Office

Job Title: DevOps Engineer
Experience: 7+ Years
Location: Remote
Notice Period: Immediate
Worker Type: C2C / Full-Time (specify based on engagement)
Mandatory Skills: DevOps, Azure DevOps, Terraform

Key Responsibilities:
- Design, implement, and manage scalable CI/CD pipelines using Azure DevOps and GitHub Actions.
- Automate infrastructure provisioning using Terraform and configuration management tools like Ansible.
- Manage containerized applications in Kubernetes and Docker environments.
- Integrate and manage test automation frameworks (e.g., Selenium, JUnit) to uphold software quality.
- Embed vulnerability scanning tools such as SonarQube and OWASP ZAP into the pipeline for secure deployments.
- Monitor system health and performance, and proactively troubleshoot production issues.
- Collaborate with cross-functional teams to continuously improve DevOps processes and delivery pipelines.
- Participate in on-call support rotations and respond to incidents effectively.
- Stay current with DevOps trends, tools, and technologies to introduce process optimizations.

Required Skills & Qualifications:
- Minimum 7 years of experience in DevOps or related roles.
- Strong scripting skills using Bash, PowerShell, or Python.
- Proficient in Azure cloud services and infrastructure-as-code (IaC) using Terraform.
- Expertise in containerization (Docker) and orchestration platforms (Kubernetes).
- Solid experience with Azure DevOps, GitHub Actions, and CI/CD best practices.
- Hands-on experience with tools like Ansible, Puppet, or Chef.
- Familiarity with automated testing tools and test integration in pipelines.
- Exposure to DevSecOps practices and tools for vulnerability scanning and compliance.
- Excellent problem-solving abilities and communication skills.

Preferred Qualifications:
- Prior experience integrating test automation into CI/CD pipelines.
- Strong grasp of DevOps security best practices and cloud-native security principles.
- Azure certifications or equivalent cloud platform credentials are a plus.

Interested candidates can apply at: B.Simran@ekloudservices.com
Note: Immediate joiners preferred. Profiles with relevant DevOps + Azure + Terraform experience will be shortlisted on priority.

Posted 2 months ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Hyderabad, Bengaluru

Hybrid

We are seeking an experienced Sr. Azure GitHub DevOps Engineer to join our team, supporting a global enterprise client. In this role, you will be responsible for designing and optimizing DevOps pipelines, leveraging GitHub Actions and Azure DevOps tools to streamline software delivery and infrastructure automation. This role requires expertise in GitHub Actions, Azure-native services, and modern DevOps methodologies to enable seamless collaboration and ensure scalable, secure, and efficient cloud-based solutions.

Key Responsibilities:

GitHub Actions Development:
- Design, implement, and optimize CI/CD workflows using GitHub Actions to support multi-environment deployments.
- Leverage GitHub Actions for automated builds, tests, and deployments, ensuring integration with Azure services.
- Create reusable GitHub Actions templates and libraries for consistent DevOps practices.

GitHub Repository Administration:
- Manage GitHub repositories, branching strategies, and access permissions.
- Implement GitHub features like Dependabot, code scanning, and security alerts to enhance code quality and security.

Azure DevOps Integration:
- Utilize Azure Pipelines in conjunction with GitHub Actions to orchestrate complex CI/CD workflows.
- Configure and manage Azure services such as: Azure Kubernetes Service (AKS) for container orchestration; Azure Application Gateway and Azure Front Door for load balancing and traffic management; Azure Monitoring, Azure App Insights, and Azure Key Vault for observability, diagnostics, and secure secrets management; Helm charts and Microsoft Bicep for Infrastructure as Code.

Automation & Scripting:
- Develop robust automation scripts using PowerShell, Bash, or Python to streamline operational tasks.
- Automate monitoring, deployments, and environment management workflows.

Infrastructure Management:
- Oversee and maintain cloud environments with a focus on scalability, security, and reliability.
- Implement containerization strategies using Docker and orchestration via AKS.

Collaboration:
- Partner with cross-functional teams to align DevOps practices with business objectives while maintaining compliance and security standards.

Monitoring & Optimization:
- Deploy and maintain monitoring and logging tools to ensure system performance and uptime.
- Optimize pipeline execution times and infrastructure costs.

Documentation & Best Practices:
- Document GitHub Actions workflows, CI/CD pipelines, and Azure infrastructure configurations.
- Advocate for best practices in version control, security, and DevOps methodologies.

Qualifications:

Education: Bachelor's degree in Computer Science, Information Technology, or a related field (preferred).

Experience:
- 3+ years of experience in DevOps engineering with a focus on GitHub Actions and Azure DevOps tools.
- Proven track record of designing CI/CD workflows using GitHub Actions in production environments.
- Extensive experience with Azure services, including AKS, Azure Front Door, Azure Application Gateway, Azure Key Vault, Azure App Insights, and Azure Monitoring.
- Hands-on experience with Infrastructure as Code tools, including Microsoft Bicep and Helm charts.

Technical Skills:
- GitHub Actions expertise: deep understanding of GitHub Actions, workflows, and integrations with Azure services.
- Scripting & automation: proficiency in PowerShell, Bash, and Python for creating automation scripts and custom GitHub Actions.
- Containerization & orchestration: experience with Docker and Kubernetes, including Azure Kubernetes Service (AKS).
- Security best practices: familiarity with securing CI/CD pipelines, secrets management, and cloud environments.
- Monitoring & optimization: hands-on experience with Azure Monitoring, App Insights, and logging solutions to ensure system reliability.

Soft Skills:
- Strong problem-solving and analytical abilities.
- Excellent communication and collaboration skills, with the ability to work in cross-functional and global teams.
- Detail-oriented with a commitment to delivering high-quality results.

Preferred Qualifications:
- Experience in DevOps practices within the financial or tax services industries.
- Familiarity with advanced GitHub features such as Dependabot, Security Alerts, and CodeQL.
- Knowledge of additional CI/CD platforms like Jenkins or CircleCI.

Posted 2 months ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

We are seeking a Senior Python Developer with strong experience in AWS, Terraform, and automation frameworks. The ideal candidate will be responsible for building and integrating tools, utilities, and test automation processes across cloud and enterprise systems, including Salesforce, Dell Boomi, and Snowflake.

Key Responsibilities:
- Design, develop, and maintain Python-based tools and services for automation and integration.
- Develop and manage infrastructure using Terraform and deploy resources on AWS.
- Automate internal backend processes, including EDI document generation and Salesforce data cleanup.
- Integrate test automation frameworks with AWS services like Lambda, API Gateway, CloudWatch, and more.
- Implement and maintain automated test cases using Cucumber, Gherkin, and Postman.
- Collaborate with QA and DevOps teams to improve testing coverage and CI/CD automation.
- Work with tools such as Jira, X-Ray, and GitHub Actions for test tracking and version control.
- Develop utilities for integrations between Salesforce, Boomi, AWS, and Snowflake.

Must-Have Qualifications:
- 5 to 7 years of hands-on experience in software development or test automation.
- Strong programming skills in Python.
- Solid experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.).
- Proficiency with Terraform for managing infrastructure as code.
- Experience with REST API development and integration.
- Experience with Dell Boomi, Salesforce, and SOQL.
- Knowledge of SQL (preferably with platforms like Snowflake).
- Knowledge of EDI formats and automation.

Nice-to-Have Skills:
- Experience with BDD tools like Cucumber and Gherkin.
- Test management/reporting with X-Ray, and integration with Jira.
- Exposure to version control and CI/CD workflows (e.g., GitHub, GitHub Actions).

Tools & Technologies:
- Languages: Python, SQL
- Cloud: AWS (Lambda, API Gateway, CloudWatch, etc.)
- IaC: Terraform
- Automation/Testing: Cucumber, Gherkin, Postman
- Data & Integration: Snowflake, Salesforce, Dell Boomi
- DevOps: Git, GitHub Actions
- Tracking: Jira, X-Ray

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
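The Lambda-plus-API-Gateway combination this posting centers on follows the proxy-integration convention: the handler receives an event dict with a JSON `body` and returns a dict with `statusCode` and `body`. A minimal handler in that shape, runnable locally without AWS (the payload fields are invented for illustration):

```python
import json

def lambda_handler(event, context):
    """AWS Lambda-style handler for an API Gateway proxy event:
    parse the JSON body, do some work, return a proxy response."""
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"echo": body}),
    }

# Local invocation with a fake event; in AWS, API Gateway supplies the
# event and Lambda supplies the context object.
if __name__ == "__main__":
    print(lambda_handler({"body": '{"order": 42}'}, None))
```

Keeping handlers callable as plain functions like this is also what makes the automated test triggers the role describes straightforward to build.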

Posted 2 months ago

Apply

3.0 - 6.0 years

5 - 7 Lacs

Chennai

Work from Office

Location: Chennai, India (on-site at our new HRGF office) About HRGF HRGF is a fast-growing Engineering consultancy & Tech company for meeting Saudi Arabia's 2030 vision. Weve just opened our new Chennai office and are building out a team to deliver the best enterprise web applications. About the Role Were looking for an experienced Senior Django Engineer to lead the development of a custom HR and Finance system from scratch. Youll play a key role in designing the architecture, building core modules (leave management, payroll, workflows), and laying the foundation for scalable, secure enterprise-grade software. This is a long-term product-oriented role. Beyond HR, you'll help shape the tech direction for future systems in our company (internal tools, integrations, dashboards, and potentially external platforms). Key Responsibilities Design and implement Django apps with clean, maintainable code. Build modular systems for employee profiles, payroll, leave workflows, documents, and evaluations. Set up RBAC (role-based access control) for multiple user types. Create admin dashboards, reports, and PDF/Excel exports/Imports. Handle file uploads, secure document storage, and expiry workflows. Build internal notifications (email + in-app) using Celery. Write unit/integration tests and CI pipelines. Lead code reviews, mentor junior developers, and establish best practices Collaborate with the product owner to refine features and timelines Set up and maintain DevOps tools (Docker, Nginx, Gunicorn, PostgreSQL) Requirements 4+ years of experience with Django (class-based views, ORM, auth, forms, signals) Strong Python fundamentals and Django best practices Experience with PostgreSQL , Docker , Git , Celery , and deployment pipelines Solid understanding of REST APIs , background tasks, and secure architecture Able to build and document complex workflows (approvals, salary rules, document alerts) Experience with admin interfaces , PDF generation (WeasyPrint, ReportLab, etc.) 
- Familiarity with building bilingual (Arabic/English) systems is a plus
- Comfortable working independently and collaboratively in a remote setup

Nice to Have
- Experience with Django REST Framework
- Familiarity with React (for dashboards or SPA use)
- Prior work on HR, payroll, or enterprise admin systems
- DevOps experience (GCP, VPS setup, Nginx, backups, CI/CD)

Interview Process
1. Take-Home Assignment
- Deliverables: GitHub repo link with a clear README and setup instructions
- Timeline: 7 days from when the assignment is shared
- Goal: show your approach to translating business requirements into a working Django app
2. Technical Interview (Google Meet or Zoom)
- Duration: ~60 minutes
- Format: live walkthrough of your submitted code + Q&A
- Focus areas: how you translated system requirements into Django models, views, and templates; the reasoning behind your architecture and API design choices; your approach to error handling, performance, and testing; your general software engineering problem-solving skills
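The RBAC requirement in this posting can be pictured with a minimal, framework-free sketch. The role and permission names below are hypothetical, and a real Django build would normally lean on `django.contrib.auth` groups and permissions rather than a hand-rolled mapping:

```python
from functools import wraps

# Hypothetical role -> permission mapping for an HR system like the one described.
ROLE_PERMISSIONS = {
    "hr_admin": {"approve_leave", "run_payroll", "view_profile"},
    "manager": {"approve_leave", "view_profile"},
    "employee": {"view_profile"},
}

class PermissionDenied(Exception):
    """Raised when a user's role lacks the required permission."""

def require_permission(permission):
    """Decorator enforcing that the calling user's role grants `permission`."""
    def decorator(view):
        @wraps(view)
        def wrapper(user, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user.get("role"), set()):
                raise PermissionDenied(f"{user.get('role')} may not {permission}")
            return view(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("approve_leave")
def approve_leave(user, request_id):
    # Stand-in for a Django view that would update a LeaveRequest row.
    return f"leave request {request_id} approved by {user['name']}"
```

In Django proper, the same check would typically live in a permission class or a `user_passes_test` wrapper so it composes with the authentication middleware.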

Posted 2 months ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are seeking a highly skilled Python Developer with strong AWS and Terraform experience. The ideal candidate must possess strong Python development capabilities, robust hands-on experience with AWS, and a working knowledge of Terraform. This role also requires foundational SQL skills and the ability to integrate and automate various backend and cloud services.

Requirements and Qualifications:
- 5-7+ years of overall experience in software development
- Strong proficiency in Python development
- Extensive experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.)
- Hands-on experience with Terraform
- Basic understanding of SQL
- Experience with REST APIs and Salesforce SOQL
- Familiarity with tools and platforms such as Git, GitHub Actions, Dell Boomi, and Snowflake
- Knowledge of QA automation using Python, Cucumber, Gherkin, and Postman

Roles and Responsibilities:
- Integrate automation frameworks with AWS, X-Ray, and Boomi services
- Develop backend automation scripts for Boomi processes
- Build utility tools for Salesforce data cleanup and EDI document generation
- Create and manage automated triggers in the test framework using AWS services (Lambda, API Gateway, etc.)
- Develop utilities for internal EDI processes integrating third-party applications (Salesforce, Dell Boomi, AWS, Snowflake, X-Ray)
- Integrate utilities into the Cucumber automation framework
- Connect the automation framework with Jira and X-Ray for test reporting
- Automate test cases for various EDI processes
- Collaborate on development and integration using Python, REST APIs, AWS, and other modern tools

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
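As a rough illustration of the "automated triggers in the test framework using AWS services (Lambda, API Gateway)" responsibility, here is a minimal Lambda handler sketch. The route, suite names, and response shape are assumptions; a real version would hand off to SQS, CodeBuild, or the Cucumber runner rather than just acknowledging the request:

```python
import json

def lambda_handler(event, context):
    """Hypothetical API Gateway-backed trigger for an automation suite run."""
    body = json.loads(event.get("body") or "{}")
    suite = body.get("suite", "smoke")
    # In production this would enqueue the run (e.g. an SQS message or a
    # CodeBuild start_build call) instead of only acknowledging it.
    return {
        "statusCode": 202,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"suite": suite, "status": "queued"}),
    }
```

Because the handler is a plain function of an event dict, it can be exercised directly in unit tests without any AWS infrastructure.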

Posted 2 months ago

Apply

3.0 - 5.0 years

15 - 18 Lacs

Chennai

Hybrid

Job Title: AWS DevOps Engineer (Mid-Level)
Location: Hybrid

Role Overview
As an AWS DevOps Engineer, you'll own the end-to-end infrastructure lifecycle, from design and provisioning through deployment, monitoring, and optimization. You'll collaborate closely with development teams to implement Infrastructure as Code, build robust CI/CD pipelines, enforce security and compliance guardrails, and integrate next-gen tools like Google Gemini for automated code-quality and security checks.

Key Responsibilities
- Infrastructure as Code (IaC): Design, build, and maintain Terraform (or CloudFormation) modules for VPCs, ECS/EKS clusters, RDS, ElastiCache, S3, IAM, KMS, and networking across multiple Availability Zones. Produce clear architecture diagrams (Mermaid or draw.io) and documentation.
- CI/CD Pipeline Development: Implement GitHub Actions or AWS CodePipeline/CodeBuild workflows to run linting, unit tests, Terraform validation, Docker builds, and automated zero-downtime rolling deployments to ECS/EKS. Integrate unit tests (Jest, pytest) and configuration-driven services (SSM Parameter Store).
- Monitoring & Alerting: Define custom CloudWatch metrics (latency, error rates), create dashboards, and centralize application logs in CloudWatch Logs with structured outputs and PII filtering. Implement CloudWatch Alarms with SNS notifications for key thresholds (CPU, replica lag, 5xx errors).
- Security & Compliance: Enable and configure GuardDuty and AWS Config rules (e.g., public-CIDR security groups, unencrypted S3 or RDS). Enforce least-privilege IAM policies, key management with KMS, and secure secret storage in SSM Parameter Store.
- Innovative Tooling Integration: Integrate Google Gemini (or similar) into the CI pipeline for automated Terraform security scans and generation of actionable security reports as PR comments.
- Documentation & Collaboration: Maintain clear README files, module documentation, and step-by-step deployment guides.
- Participate in code reviews, design discussions, and post-mortems to continuously improve our DevOps practices.

Required Qualifications
- Experience: 3+ years in AWS DevOps or Site Reliability Engineering roles, designing and operating production-grade cloud infrastructure.
- Technical Skills:
  - Terraform (preferred) or CloudFormation for IaC.
  - Container orchestration: ECS/Fargate or EKS with zero-downtime deployments.
  - CI/CD: GitHub Actions, AWS CodePipeline, and CodeBuild (linting, testing, Docker, Terraform).
  - Monitoring: CloudWatch dashboards, custom metrics, log centralization, and alarm configuration.
  - Security & Compliance: IAM policy design, KMS, GuardDuty, AWS Config, SSM Parameter Store.
  - Scripting: Python, Bash, or Node.js for automation and Lambda functions.
- Soft Skills:
  - Strong problem-solving mindset and attention to detail.
  - Excellent written and verbal communication for documentation and cross-team collaboration.
  - Ability to own projects end-to-end and deliver under tight timelines.

Preferred Qualifications
- Hands-on experience integrating third-party security or code-analysis APIs (e.g., Google Gemini, Prisma Cloud).
- Familiarity with monitoring and observability best practices, including custom metric creation.
- Exposure to multi-cloud environments or hybrid cloud architectures.
- Certification: AWS Certified DevOps Engineer - Professional or AWS Certified Solutions Architect - Associate.
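The alarm-threshold responsibility above can be illustrated with a toy evaluator that mirrors, in miniature, CloudWatch's "datapoints to alarm" semantics. This is pure Python with no AWS calls; real alarms would be declared in Terraform or via the CloudWatch API, with an SNS topic as the alarm action:

```python
def evaluate_alarm(datapoints, threshold, periods_to_alarm):
    """Return an alarm state for a metric series, CloudWatch-style.

    ALARM when the most recent `periods_to_alarm` datapoints all exceed
    `threshold`; INSUFFICIENT_DATA when fewer datapoints exist; OK otherwise.
    """
    recent = datapoints[-periods_to_alarm:]
    if len(recent) < periods_to_alarm:
        return "INSUFFICIENT_DATA"
    return "ALARM" if all(v > threshold for v in recent) else "OK"
```

Requiring several consecutive breaching periods, rather than a single spike, is what keeps a 5xx-rate or CPU alarm from paging on transient noise.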

Posted 2 months ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

Role Overview: We are looking for a Cloud Engineer who can work across the entire web development stack to build robust, scalable, and user-centric applications for our client. You will play a critical role in designing and delivering systems end to end, from sleek, responsive UIs to resilient backend services and APIs. Whether you're just starting your career or bringing seasoned expertise, we're looking for hands-on problem solvers with a passion for clean code and great product experiences.

Responsibilities:
- Design and implement secure, scalable, and cost-optimized cloud infrastructure using AWS/GCP/Azure services
- Automate infrastructure provisioning and management using Infrastructure as Code (IaC) tools like Terraform or CloudFormation
- Set up and maintain CI/CD pipelines for smooth and reliable software delivery
- Monitor system performance, availability, and incident response using modern observability tools (e.g., CloudWatch, Datadog, ELK, Prometheus)
- Ensure robust cloud security by managing IAM policies, encryption, and secrets
- Collaborate closely with backend engineers, data teams, and DevOps to support deployment and system stability
- Optimize cloud costs and usage through rightsizing, autoscaling, and resource cleanups

Required Skills:
- Hands-on experience with cloud platforms: AWS, Azure, or GCP (preferably AWS)
- Proficiency in IaC tools: Terraform, CloudFormation, or Pulumi
- Experience with containerization and orchestration: Docker and Kubernetes
- Strong scripting skills in Bash, Python, or similar
- Deep understanding of networking, firewalls, load balancing, and VPC setups
- Experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI) and Git workflows
- Familiarity with monitoring and logging stacks (Prometheus, Grafana, ELK, etc.)
- Sound knowledge of cloud security, IAM, and access-control best practices

Nice to Have:
- Exposure to serverless architecture (AWS Lambda, GCP Cloud Functions)
- Experience in multi-cloud or hybrid cloud environments
- Familiarity with cloud-native database services (e.g., RDS, DynamoDB, Firestore)
- Awareness of compliance frameworks (SOC 2, GDPR, HIPAA) and cloud governance practices

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related technical field

Locations: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 2 months ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

Senior iOS Developer: Build Mission-Critical Health-Tech Apps
Company: Ajmera Infotech Private Limited (AIPL)
Location: Ahmedabad (On-site)
Experience: 5-8 years
Position Type: Full-time, Permanent

Shape Mobile Experiences That Save Lives
AIPL's 120-engineer team powers planet-scale systems for global innovators. We are assembling a specialised iOS squad to build FDA-compliant, SwiftUI-first apps for a billion-dollar health-tech platform (client name confidential). Your code will run on iPhones and iPads used daily by clinicians and patients worldwide: software that simply cannot fail.

What Makes This Role Exciting
- Greenfield + legacy modernisation: craft new modules in SwiftUI while refactoring existing UIKit code into clean architecture.
- Deep integration: BLE peripherals, secure real-time data sync, offline workflows, Core Bluetooth, HealthKit, biometrics.
- Engineering ownership: influence architecture, CI/CD, security, and performance from day one.
- Global collaboration: pair with US and EU experts on coding standards, code reviews, and mobile DevOps.
- Compliance challenge: learn FDA, HIPAA, and 21 CFR Part 11 practices, career-accelerating knowledge.

Key Responsibilities
- Design, build, and maintain high-performance iOS apps in Swift (80%+ SwiftUI).
- Lead the migration from UIKit to SwiftUI and implement MVVM / Clean Architecture patterns.
- Integrate REST/gRPC services, WebSockets, and Bluetooth Low Energy devices.
- Optimise for battery, memory, accessibility, and security (OWASP MASVS).
- Write unit, UI, and integration tests; champion TDD and CI/CD (GitHub Actions / Azure DevOps).
- Perform code reviews, mentor mid-level engineers, and uphold style guidelines.
- Collaborate with design, backend, and QA to deliver sprint goals and compliance artifacts.
- Contribute to the mobile release pipeline, App Store deliverables, and post-release monitoring.

Must-Have Skills
- 5-8 years of iOS development; 3+ years in Swift with strong SwiftUI component knowledge.
- Production experience with SwiftUI and Combine.
- Hands-on with MVVM, Core Data, Core Bluetooth, URLSession / gRPC, and Background Tasks.
- Proficient in unit/UI testing (XCTest, XCUITest) and static analysis (SwiftLint, Sonar).
- Familiar with App Store submission, TestFlight, phased release, and crash analytics (Firebase Crashlytics, Sentry).
- Solid Git, code review, and Agile-Scrum practice.

Nice-to-Have
- Exposure to medical, fintech, or other regulated domains.
- Experience with Flutter or React Native.
- Knowledge of Swift Package Manager, KMM, or GraphQL.
- Familiarity with Azure DevOps or GitHub Actions mobile pipelines.

What We Offer
- Above-market salary + performance bonus.
- Comprehensive medical insurance for you and your family.
- Flexible hours and generous PTO.
- High-end workstation + access to our device lab.
- Sponsored certifications and conference passes.

Ready to Code for Impact?
Email your résumé/GitHub to jobs@ajmerainfotech.com with the subject “Senior iOS Developer | Ahmedabad” or click Apply on Naukri. Build software that improves lives, every single release.

Posted 2 months ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

The Opportunity
"FICO is seeking an AWS Cloud Engineer who thrives working in a fast-paced, state-of-the-art cloud environment. This position will be heavily involved with the migration of our existing products, as well as the development of new products within our cloud platform." - VP, Cloud Engineering

What You'll Contribute
- Design, maintain, and expand our infrastructure (maintained as IaC).
- Oversee systems and resources for alerting, logging, backups, disaster recovery, and monitoring.
- Work jointly with other software engineering teams to build fault-tolerant, resilient services in line with our infrastructure best practices.
- Improve the performance and durability of our CI/CD pipelines.
- Keep security in mind at all times.
- You may be asked to be on-call to assist with engineering projects.

What We're Seeking
- Bachelor's degree in Computer Science or a related field, or relevant experience.
- Ability to design and implement highly automated and holistic solutions.
- Ability to act as a tech lead for the team, lead by example, and drive projects.
- 8+ years of relevant experience in the cloud domain.
- Hands-on experience with a cloud provider (preferably AWS), maintaining or deploying production application infrastructure.
- Significant experience with distributed systems and container orchestration (specifically Kubernetes deployments).
- Proficiency in developing and maintaining CI/CD pipelines using GitHub, GitHub Actions, Jenkins, Helm, Harness, ArgoCD, etc.
- Strong grasp of Infrastructure as Code (IaC), preferably using Terraform or Crossplane compositions.
- Experience hosting/supporting the Atlassian suite of tools, specifically Jira, Confluence, and Bitbucket.
- Scripting knowledge in Python, Ruby, or Bash.
- Strong automation mindset and experience using APIs to automate administrative tasks.
- Hands-on experience with monitoring and logging tools, like Splunk.

Posted 2 months ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Overview:
- Leading AI-driven global supply chain solutions software product company and one of Glassdoor's Best Places to Work.
- Seeking an astute and dynamic individual with strong skills to build quality processes for large teams, especially in test automation, with deep knowledge of industry best practices and the ability to implement them working with both the platform and the product teams.

Scope:
- Core responsibility is to build QA and test automation per the strategy designed for our Control Tower.

Our current technical environment:
- Cloud architecture: MS Azure (ARM templates, AKS, Application Gateway, Virtual Networks, Azure AD), Snowflake, Big Data
- Software: Java, Python, Spring, Maven, Gradle, Git, REST API, OAuth
- Automation frameworks: Rest Assured, JUnit, JMeter, BlazeMeter, and other in-house frameworks

What you'll do:
- Works with the team on quality assurance and quality control of a product during the development and execution phases. Is part of a self-organized agile team and interacts with team members on release deliverables. Writes, executes, and maintains automation test scripts. Installs and configures relevant products for the purpose of quality assurance, providing business value.
- Develops and maintains test cases for given modules of a relevant product. Executes manual test cases and scenarios and publishes the results. Participates in test case reviews. Develops and maintains test data for some modules. Configures relevant products in all supported test environments. Conducts exploratory testing as needed or planned. Identifies and reports software defects appropriately, following the defined defect lifecycle. Works with team members to troubleshoot the root cause of a defect and resolve the issue. Works with test management and test execution tools (such as HP QC and Jira). Understands the business requirements provided.
Follows standard development processes and procedures. Plans and prioritizes work tasks with input from their manager. Proactively notifies managers of impairments to commitments. Proactively seeks or provides assistance as required.

What we are looking for:
- Bachelor's degree in Software Engineering or Computer Science with 1.5 or more years of relevant hands-on work experience.
- Experience with Java-based frameworks such as Spring, or Python-based frameworks.
- Experience with debugging cloud-native applications.
- Experience with API automation, such as Rest Assured.
- Experience with UI automation frameworks based on wdio/Selenium.
- Experience in integration testing of multiple microservices.
- Experience with open cloud platforms such as Azure.
- Knowledge of security authentication and authorization standards such as OAuth and SSO.
- Familiarity with continuous integration and continuous delivery (CI/CD) using Jenkins and GitHub Actions.
- Good analytical and communication skills.
- Familiarity with code versioning tools such as Git and Stash/Bitbucket Server.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here:
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
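The API-automation skills listed above are Java-flavoured (Rest Assured), but the same contract-checking idea can be sketched in a few lines of Python. The response shape and field names here are illustrative only, standing in for whatever a thin test client would return:

```python
def check_api_contract(response, expected_status=200, required_fields=()):
    """Validate a decoded API response against a simple contract.

    `response` is a dict with 'status' and 'json' keys, as a thin test
    client might return. Returns a list of violations (empty list = pass).
    """
    problems = []
    if response.get("status") != expected_status:
        problems.append(f"status {response.get('status')} != {expected_status}")
    payload = response.get("json") or {}
    for field in required_fields:
        if field not in payload:
            problems.append(f"missing field: {field}")
    return problems
```

In a pytest suite, each test would assert that the returned list is empty, which yields Rest Assured-style readable failure messages when it is not.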

Posted 2 months ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction
IBM Security Verify is placed in the Gartner Leadership Quadrant as a cloud-based Identity and Access Management (IAM) solution that helps organizations manage user identities and access to applications and resources. It provides features like multi-factor authentication, single sign-on, risk-based authentication, and adaptive access, as well as user lifecycle journeys with associated governance, aiming to protect customer, workforce, and privileged identities. The solution also offers identity analytics to provide insights into user behaviour and potential risks.

Your role and responsibilities
- Take ownership of end-to-end development of full-stack features across the application lifecycle.
- Develop modern, reusable React-based UI components integrated with backend APIs.
- Design, implement, and maintain scalable REST APIs using Java or GoLang.
- Collaborate closely with cross-functional teams including product, design, DevOps, and QA.
- Contribute to architecture discussions and recommend technical improvements.
- Implement and maintain test automation to ensure product reliability.
- Troubleshoot and resolve production issues in collaboration with other engineers.
- Actively participate in Agile ceremonies (daily stand-ups, sprint planning, retrospectives) and contribute to continuous improvement.
- Mentor junior developers and provide code reviews to ensure best practices.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 6+ years of hands-on experience in designing and developing cloud-based enterprise applications with both frontend and backend components.
- Proficient in front-end technologies such as React, JavaScript (ES6+), HTML5, and CSS3 for building responsive user interfaces.
- Backend development experience using Java and GoLang, with an understanding of RESTful service integration and API consumption.
- Hands-on experience with Postgres or similar databases, including data modeling and query optimization.
- Experience in building and deploying full-stack solutions on AWS or Red Hat OpenShift (OCP/ROSA).
- Good knowledge of CI/CD pipelines (e.g., GitHub Actions, Jenkins) and experience with version control systems like Git.
- Understanding of monitoring/logging tools such as Grafana, the ELK Stack, or Instana.
- Experience writing and maintaining automated tests using tools like JUnit, Selenium, and SonarQube, and familiarity with frontend testing tools (e.g., Jest).
- Exposure to containerization (Docker) and basic Kubernetes deployment workflows.
- Understanding of Agile methodologies, including participation in daily stand-ups, sprint planning, and retrospectives.
- Effective team collaboration and communication skills.
- Experience with Shell scripting and basic Node.js utilities would be an added advantage.

Preferred technical and professional experience
- Experience in implementing or designing microservices and reusable backend components.
- Familiarity with design systems (e.g., Carbon Design, Material UI).
- Awareness of accessibility standards like WCAG or Section 508.
- Exposure to security practices, including privacy by design, secure coding, and basic cryptography, as well as cryptographic algorithms, protocols (e.g., TLS, FIPS), and Java security frameworks.
- Experience with Shell scripting or Node.js is a plus.
- Understanding of basic DevSecOps practices or interest in expanding security expertise.

Posted 2 months ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office

DevOps Manager - J49058

Job Summary
We are looking for an experienced DevOps Manager with 10+ years of experience to lead our DevOps initiatives across AWS and GCP platforms. The ideal candidate will have expertise in cloud migration, CDN deployment, infrastructure automation, and stakeholder reporting. This role requires managing a team of 6 mid-level DevOps engineers and ensuring high availability, security, and scalability of our cloud infrastructure.

Key Responsibilities
Cloud & Infrastructure Management
- Manage and optimize cloud infrastructure on AWS and GCP.
- Lead cloud migration projects from on-premise or other cloud environments.
- Deploy and manage CDN solutions for improved performance and scalability.
- Ensure cost optimization, high availability, and disaster recovery best practices.
Infrastructure Automation & CI/CD
- Implement Infrastructure as Code (IaC) using Terraform, Ansible, or similar tools.
- Automate deployment pipelines using CI/CD tools (GitHub Actions, Jenkins, AWS DevOps, or Google Cloud Build).
- Drive DevOps best practices, including containerization (Docker, Kubernetes) and serverless architectures.
Monitoring, Security & Compliance
- Set up logging, monitoring, and alerting using tools like Prometheus, Grafana, AWS Monitor, and GCP Stackdriver.
- Ensure security best practices, including identity management, access controls, and compliance with industry standards.
- Conduct periodic security audits and vulnerability assessments.
Stakeholder Communication & Reporting
- Prepare and send detailed reports on system performance, uptime, cost, and incidents to all stakeholders.
- Work closely with engineering, product, and security teams to align DevOps strategies with business goals.
- Maintain documentation for infrastructure, processes, and best practices.
Team Leadership & Collaboration
- Lead and mentor a team of 6 mid-level DevOps/SRE engineers.
- Conduct training and knowledge-sharing sessions to upskill the team.
- Establish KPIs and performance metrics to track team progress and efficiency.

Required Skills & Experience
- 10+ years of DevOps experience, with at least 3 years in a leadership role.
- Hands-on experience with AWS and GCP cloud platforms.
- Expertise in cloud migration and CDN deployment (e.g., Cloudflare, Akamai, Amazon CloudFront).
- Strong knowledge of Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Experience with CI/CD pipelines (AWS DevOps, Jenkins, GitHub Actions, GCP Cloud Build).
- Proficiency in Kubernetes, Docker, and container orchestration.
- Strong monitoring and logging skills using AWS Monitor, GCP Stackdriver, Prometheus, Grafana, and Splunk.
- Excellent communication skills for stakeholder reporting and cross-functional collaboration.
- Ability to lead and mentor a team, ensuring high efficiency and skill growth.

Nice to Have
- Experience with multi-cloud environments (AWS, GCP).
- Knowledge of serverless architectures (AWS Lambda, Google Cloud Functions).
- Familiarity with FinOps for cloud cost management.

Location & Work Mode
- Location: Hyderabad/Bangalore
- Work Mode: Office

Why Join Us?
- Opportunity to work with cutting-edge cloud technologies and automation.
- Lead a talented team and drive impactful cloud transformation projects.
- Competitive salary, benefits, and career growth opportunities.

Required Candidate Profile
- Candidate experience: 10 to 15 years
- Candidate degree: BA, BBA, BBA/BMS, BBI, BCA, BCom, BCS, BDES, BE-Comp/IT, BEd, BE-Other, BFA, BFM, BIS, BIT, BMS, BSc-Comp/IT, BSc-Other, BTech-Comp/IT, BTech-Other, CA, CS, DCA, DCS, DE-Comp/IT, DE-Other, Diploma, ICWA, LLB, MA, MBA, MBBS, MCA, MCM, MCom, MCS, ME-Comp/IT, ME-Other, MIS, MIT, MMS, MSc-Comp/IT, MS-Comp/IT, MSc-Other, MS-Other, MTech-Comp/IT

Posted 2 months ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
- 5 to 8 years of relevant software development experience with a fairly full-stack profile.
- Proficient in .NET Core with Angular, with hands-on coding in .NET Core.
- Proficient in Web API, MVC, and microservices.
- Proficient with Azure platform development (Azure Functions, Azure services, etc.).
- Proficient in one or more data development technologies (SQL databases, NoSQL, cloud datastores, etc.).
- Proficient in cloud-native deployment with CI/CD pipelines (one of GitHub Actions or Azure DevOps) with SonarQube, into serverless containers (Kubernetes, Docker).
- Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Relevant software development experience with a fairly full-stack profile.
- Proficient in .NET Core with Angular, with hands-on coding in .NET Core.
- Proficient in Web API, MVC, and microservices.
- Proficient with Azure platform development (Azure Functions, Azure services, etc.).

Preferred technical and professional experience
- .NET Azure full stack: proficient in .NET Core with Angular, with hands-on coding in .NET Core.

Posted 2 months ago

Apply

2.0 - 4.0 years

0 Lacs

, India

On-site

About the Role
The Team: Join a team renowned for its expertise, innovation, and passion. As part of our agile product development group, you'll work with cutting-edge technology to drive insights into global capital markets and the financial services industry. This is an exciting opportunity to contribute to a fast-growing global organization, collaborating closely with talented colleagues and stakeholders to achieve ambitious goals.

Your Impact: As a Cloud Platform Engineer, you will play a critical role in designing, deploying, and managing cloud infrastructure and applications, ensuring reliability, scalability, and efficiency. You will work with AWS (preferred), GCP, and Azure, leveraging automation and best practices to streamline operations and optimize cloud environments.

Key Responsibilities:
- Cloud Infrastructure Management: Architect, implement, and maintain cloud-based solutions across AWS (preferred), GCP, and Azure, with a focus on high availability, fault tolerance, and scalability.
- Infrastructure as Code (IaC): Develop and manage cloud resources using Terraform to automate deployments efficiently.
- Containerization & Orchestration: Configure and manage containerized environments using Docker (preferred) and Kubernetes.
- CI/CD & Automation: Build and maintain continuous integration and deployment (CI/CD) pipelines using GitHub Actions, TeamCity, or Azure DevOps.
- Security & Compliance: Ensure cloud environments align with industry standards and company security policies.
- Monitoring & Optimization: Monitor cloud infrastructure, implement performance improvements, and drive cost efficiencies.
- Collaboration & Mentorship: Work closely with development teams to architect cloud solutions and provide technical guidance to junior engineers.

What We're Looking For:
- Experience: 2-4 years in DevOps/cloud platforms (AWS, GCP, or Azure).
- DevOps & CI/CD Expertise: Strong knowledge of DevOps principles and experience with CI/CD pipelines, particularly GitHub Actions.
- Programming & Automation: Proficiency in Python, Bash, or PowerShell for scripting and automation.
- Infrastructure as Code (IaC): Hands-on experience with Terraform, CloudFormation, or similar IaC tools.
- Operating Systems: Comfortable working with both Windows and Linux environments.
- Agile Development: Experience with Agile methodologies for software development.
- Networking Knowledge: Solid understanding of cloud networking, including VPCs, subnetting, routing, and connectivity troubleshooting.

Preferred Qualifications:
- Software Development Background: Prior experience as a software developer or working closely with development teams, with a strong understanding of application architecture and code deployment best practices.
- Experience with Kubernetes and Docker for container orchestration.
- Hands-on experience with AWS Lambda or Azure Functions.
- Strong knowledge of logging, tracing, and debugging using CloudWatch, Splunk, Datadog, or similar tools.
- Familiarity with AWS Managed Active Directory or Azure Active Directory configuration.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing-education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit:

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision

-----------------------------------------------------------

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
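The role above centers on CI/CD pipelines in GitHub Actions, Python/Bash automation, and Terraform-based IaC. As a rough illustration only (the repository layout, tool choices, and job names below are assumptions, not this employer's actual setup), a minimal workflow combining a test stage with an IaC plan might look like:

```yaml
# .github/workflows/ci.yml -- minimal sketch; paths and tools are hypothetical.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest -q                          # run the unit test suite

  plan:
    needs: test                                 # only plan infra once tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -backend=false      # validate without remote state
        working-directory: infra                # assumed IaC directory
      - run: terraform plan
        working-directory: infra
```

In practice the apply step would be gated behind a protected branch or environment approval rather than run automatically from a pull request.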

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 12 Lacs

Kolkata

Work from Office

At Gintaa, we're redefining how India orders food. With our focus on affordability, exclusive restaurant partnerships, and hyperlocal logistics, we aim to scale across India's Tier 1 and Tier 2 cities. We're backed by a mission-driven team and expanding rapidly; now's the time to join the core tech leadership and build something impactful from the ground up.

Job Summary
We are looking for an experienced and motivated DevOps Engineer with 5–7 years of hands-on experience designing, implementing, and managing cloud infrastructure, particularly on Google Cloud Platform (GCP) and Amazon Web Services (AWS). The ideal candidate will have deep expertise in infrastructure as code (IaC), CI/CD pipelines, container orchestration, and cloud-native technologies. This role requires strong analytical skills, attention to detail, and a passion for optimizing cloud infrastructure performance and cost across multi-cloud environments.

Key Responsibilities
- Multi-Cloud Infrastructure: Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using GCP services (Compute Engine, GKE, Cloud Functions, Pub/Sub, BigQuery, Cloud Storage) and AWS services (EC2, ECS/EKS, Lambda, S3, RDS, CloudFront).
- CI/CD & GitOps: Build and manage CI/CD pipelines using GitHub/GitLab Actions and artifact repositories, and enforce GitOps practices across both GCP and AWS environments.
- Containerization & Serverless: Leverage Docker, Kubernetes (GKE/EKS), and serverless architectures (Cloud Functions, AWS Lambda) to support microservices and modern application deployments.
- Infrastructure as Code: Develop and manage IaC using Terraform (or CloudFormation for AWS) to automate provisioning and drift detection across clouds.
- Observability & Monitoring: Implement observability tools like Prometheus, Grafana, Google Cloud Monitoring, and AWS CloudWatch for real-time system insights.
- Security & Compliance: Ensure best practices in cloud security, including IAM policies (GCP IAM + AWS IAM), encryption standards (KMS), network security (VPCs, Security Groups, Firewalls), and compliance frameworks.
- Service Mesh: Integrate and manage service mesh architectures such as Istio or Linkerd for secure and observable microservices communication.
- Troubleshooting & DR: Troubleshoot and resolve infrastructure issues; ensure high availability, disaster recovery (GCP Backup + AWS Backup/AWS DR strategies), and performance optimization.
- Cost Management: Drive initiatives for cloud cost management; use tools like GCP Cost Management and AWS Cost Explorer to suggest optimization strategies.
- Documentation & Knowledge Transfer: Document technical architectures, processes, and procedures; ensure smooth knowledge transfer and operational readiness.
- Cross-Functional Collaboration: Collaborate with Development, QA, Security, and Architecture teams to streamline deployment workflows.

Required Skills & Qualifications
- 5–7 years of DevOps/Cloud Engineering experience, with at least 3 years on GCP and 3 years on AWS.
- Proficiency in Terraform (plus familiarity with CloudFormation), Docker, Kubernetes (GKE/EKS), and other DevOps toolchains.
- Strong experience with CI/CD tools (GitHub/GitLab Actions) and artifact repositories.
- Deep understanding of cloud networking, VPCs, load balancing, security groups, firewalls, and VPNs in both GCP and AWS.
- Expertise in monitoring/logging frameworks such as Prometheus, Grafana, Stackdriver (Cloud Monitoring), and AWS CloudWatch/CloudTrail.
- Strong scripting skills in Python, Bash, or Go for automation tasks.
- Knowledge of data backup, high-availability systems, and disaster recovery strategies across multi-cloud.
- Familiarity with service mesh technologies and microservices-based architecture.
- Excellent analytical, troubleshooting, and documentation skills.
- Effective communication and ability to work in a fast-paced, collaborative environment.
Preferred Qualifications (Good to Have)
- Google Professional Cloud Architect Certification and/or AWS Certified Solutions Architect – Professional.
- Experience with multi-cloud or hybrid cloud setups, including VPN/Direct Connect and Interconnect configurations.
- Exposure to agile software development, DevSecOps, and compliance-driven environments (e.g., BFSI, Healthcare).
- Understanding of cost modeling and cloud billing analysis tools.

Why Join Gintaa?
- Be a part of a purpose-driven startup revolutionizing food and local commerce in India.
- Build impactful, large-scale mobile applications from scratch.
- Work with a visionary leadership team and a dynamic, entrepreneurial culture.
- Competitive salary and leadership visibility.
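The multi-cloud IaC responsibilities in this posting can be pictured with a small Terraform sketch that provisions one object-storage bucket on each cloud. Everything here (project ID, regions, bucket names) is an illustrative placeholder, not Gintaa's actual configuration:

```hcl
# Multi-cloud IaC sketch: one storage bucket on GCP and one on AWS.
# All identifiers below are hypothetical.
terraform {
  required_providers {
    google = { source = "hashicorp/google" }
    aws    = { source = "hashicorp/aws" }
  }
}

provider "google" {
  project = "example-project"   # hypothetical GCP project ID
  region  = "asia-south1"
}

provider "aws" {
  region = "ap-south-1"
}

resource "google_storage_bucket" "assets" {
  name     = "example-assets-gcp"   # bucket names must be globally unique
  location = "ASIA-SOUTH1"
}

resource "aws_s3_bucket" "assets" {
  bucket = "example-assets-aws"
}
```

Running `terraform plan` against a configuration like this is also the usual way drift is surfaced across both clouds: the plan diff shows any resource that no longer matches the declared state.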

Posted 2 months ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

As a Specialist in our SaaS DevOps team, you will drive the design and implementation of cutting-edge Continuous Integration and Delivery pipelines for cloud-native data products. You'll play a pivotal role in automating deployments, integrating AI-driven analytics, and shaping scalable, secure processes that elevate our DevOps capabilities to world-class standards. This is a hands-on leadership opportunity to influence technology and strategy in a fast-paced, innovative environment. If you're passionate about CI/CD, cloud-native tech, and leading high-impact DevOps initiatives, this role is made for you.

You have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field, with 8+ years of experience in software development or DevOps and expertise in designing and managing complex CD pipelines.
- Practical experience with CI/CD tools such as Jenkins, Argo CD, GCP DevOps, CircleCI, or GitHub Actions.
- Scripting experience with Bash, Python, or Groovy.
- Hands-on experience with containerization (Docker) and orchestration platforms (Kubernetes, ECS).

It would be nice if you also had:
- Skills in Infrastructure as Code tools like Terraform, Ansible, and CloudFormation.
- Knowledge of cloud platforms including GCP, AWS, and Azure, and familiarity with Agile, Scrum, and DevOps best practices.
- Certifications in GCP DevOps, Kubernetes, or related cloud technologies.
- Telecom experience with a focus on compliance and security.

You will:
- Establish and maintain best practices for CI/CD pipelines, encompassing version control and automation.
- Design and implement scalable, secure cloud-based CI/CD pipelines.
- Collaborate with cross-functional teams to automate build, test, and deployment processes.
- Define, implement, and enforce delivery standards and policies.
- Optimize CI/CD lifecycle performance by identifying and resolving bottlenecks.
- Lead IaC initiatives, mentor team members, and track key performance indicators (KPIs) for delivery processes.
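This posting names Jenkins alongside Argo CD and Groovy scripting. As a hedged sketch of the Jenkins side of such a pipeline (the stage layout, registry, image name, and deployment target below are all hypothetical, not this team's actual Jenkinsfile):

```groovy
// Jenkinsfile -- declarative pipeline sketch; tools and targets are assumptions.
pipeline {
    agent any
    environment {
        // Tag each build with the Jenkins build number for traceability.
        IMAGE = "registry.example.com/data-product:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t "$IMAGE" .'
            }
        }
        stage('Test') {
            steps {
                // Run the test suite inside the freshly built image.
                sh 'docker run --rm "$IMAGE" pytest -q'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push "$IMAGE"'
            }
        }
        stage('Deploy') {
            steps {
                // In a GitOps setup this step would instead bump the image tag
                // in a manifests repo and let Argo CD reconcile the change.
                sh 'kubectl set image deployment/data-product app="$IMAGE"'
            }
        }
    }
}
```

Pipeline-level bottleneck work of the kind the posting describes typically starts from exactly this structure, e.g. parallelizing the Build and Test stages or caching image layers.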

Posted 2 months ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai

Work from Office

Overview
DevOps Engineer – OpenShift (OCP) Specialist

Job Summary: FSS is seeking a highly skilled DevOps Engineer with hands-on experience in Red Hat OpenShift Container Platform (OCP) and associated tools like Argo CD, Jenkins, and Data Grid. The ideal candidate will drive automation, manage containerized environments, and ensure smooth CI/CD pipelines across hybrid infrastructure to support our financial technology solutions.

Required Skills & Qualifications:
Technical Skills:
- Strong hands-on experience with OpenShift (v4.x) administration and operations.
- Proficiency in CI/CD tools: Jenkins, Argo CD, GitHub Actions, GitLab CI/CD.
- Deep understanding of Kubernetes, Docker, and container orchestration.
- Experience with Red Hat Data Grid or other in-memory data grids.
- Skilled in IaC tools: Terraform, Ansible, CloudFormation.
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, Splunk).
- Proficient in scripting languages: Bash, Python, or Shell.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities across cross-functional teams.
- Able to work independently, propose solutions based on customer requirements, and work with the customer's DevOps team during project implementation.

Responsibilities
Key Responsibilities:
OpenShift Platform Engineering:
- Deploy, manage, and maintain applications on OpenShift Container Platform.
- Configure and manage Operators, Helm charts, and OpenShift GitOps (Argo CD).
- Manage Red Hat Data Grid deployments and integrations.
- Support OCP cluster upgrades, patching, and troubleshooting.
CI/CD Implementation & Automation:
- Design, implement, and manage CI/CD pipelines using Jenkins and Argo CD.
- Ensure seamless code integration, testing, and deployment processes with development teams.
Infrastructure as Code (IaC):
- Automate infrastructure provisioning with tools like Terraform and Ansible.
- Manage hybrid infrastructure across on-prem and public clouds (AWS, Azure, or GCP).
Monitoring & Performance Optimization:
- Implement and manage observability stacks (Prometheus, Grafana, ELK, etc.) for OCP and underlying services.
- Proactively identify and resolve system performance bottlenecks.
Security & Compliance:
- Enforce security best practices in containerized and cloud environments.
- Conduct vulnerability assessments and ensure compliance with industry standards.
Collaboration & Support:
- Collaborate with developers, QA, and IT teams to optimize DevOps workflows.
- Provide ongoing support and incident response for production and non-production environments.

Qualifications
BE, B.Tech, MCA, or equivalent degree.
Domain: Payment gateway, Bank reconciliation, Card.
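The OpenShift GitOps (Argo CD) work described above revolves around Application manifests like the following sketch, which tells Argo CD to keep a namespace in sync with a Git repository. The repo URL, path, and namespaces are placeholders, not FSS's actual environment:

```yaml
# Argo CD Application sketch: syncs manifests from Git into an OCP namespace.
# repoURL, path, and namespace values below are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-api
  namespace: openshift-gitops   # where OpenShift GitOps runs Argo CD
spec:
  project: default
  source:
    repoURL: https://git.example.com/fss/payments-manifests.git
    targetRevision: main
    path: overlays/prod         # e.g. a Kustomize overlay per environment
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert out-of-band changes made directly on the cluster
```

With `automated` sync enabled, deployments become Git commits: a Jenkins or GitHub Actions job updates the image tag in the manifests repo, and Argo CD reconciles the cluster to match.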

Posted 2 months ago

Apply