
581 GitHub Actions Jobs - Page 19

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Experience Required: 5+ Years
Shift Timings: 12 PM - 9 PM IST
Employment Type: Full-Time

Job Summary
We are looking for a highly skilled and experienced React Native Developer with over 5 years of proven experience in building dynamic and responsive mobile applications. The ideal candidate should have a strong command of mobile development principles, be well-versed in working with mobile emulators, and possess good exposure to Agile methodologies. As a Lead Consultant, you will be expected to guide development teams and deliver high-quality mobile solutions that meet business objectives.

Key Responsibilities
- Design and develop cross-platform mobile applications using React Native.
- Build clean, maintainable, and reusable code for mobile apps.
- Integrate third-party APIs and native modules as required.
- Ensure mobile applications are optimized for performance and scalability.
- Use mobile emulators and real devices for testing and debugging.
- Collaborate with UI/UX designers and back-end developers.
- Participate in Agile ceremonies (Scrum, Sprint Planning, Reviews, Retrospectives).
- Provide technical leadership and mentoring to junior developers.
- Ensure adherence to CI/CD pipelines and contribute to their improvement.
- Conduct code reviews and ensure code quality standards are met.

Primary Skills Required
- 5+ years of experience in mobile app development with React Native
- Strong understanding of JavaScript, TypeScript, Redux, and React Navigation
- Experience in mobile emulator testing and debugging across multiple device types
- Hands-on experience with RESTful APIs, mobile databases, and offline storage solutions
- Familiarity with the full mobile development life cycle
- Working knowledge of Android Studio and Xcode
- Strong understanding of Agile methodologies and related tools (e.g., JIRA)

Secondary Skills
- Experience with CI/CD tools and pipelines as a developer (e.g., Jenkins, GitHub Actions, Bitrise)
- Basic understanding of DevOps and automated deployment practices
- Familiarity with version control systems like Git

Desired Candidate Profile
- Ability to work independently and manage tasks effectively during the 12-9 PM shift
- Excellent communication and interpersonal skills
- Ability to lead and mentor development teams in a collaborative environment
- Strong analytical and problem-solving skills

Skills: CI/CD tools, Agile methodologies, React Native, GitHub Actions, Redux, mobile applications, CI/CD, Android Studio, Jenkins, mobile databases, Xcode, TypeScript, React Navigation, offline storage solutions, RESTful APIs, Git, Bitrise, version control systems, JavaScript

Posted 1 month ago

Apply

8.0 - 12.0 years

22 - 27 Lacs

Indore, Chennai

Work from Office

We are hiring a Senior Python DevOps Engineer to develop scalable apps using Flask/FastAPI, automate CI/CD, manage cloud and ML workflows, and support containerized deployments in OpenShift environments. Required Candidate profile 8+ years in Python DevOps with expertise in Flask, FastAPI, CI/CD, cloud, ML workflows, and OpenShift. Skilled in automation, backend optimization, and global team collaboration.

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities:
- Design, develop, and manage Infrastructure as Code (IaC) using Terraform for provisioning Azure services.
- Implement and maintain CI/CD pipelines using Azure DevOps, GitHub Actions, Argo CD, and Bamboo.
- Deploy applications and infrastructure using YAML, Helm charts, and native Azure deployment tools.
- Provide technical leadership in migrating workloads from AWS to Azure, ensuring optimal performance and security.
- Manage and support containerized applications using Kubernetes and Helm in Azure environments.
- Design robust, scalable, and secure Azure infrastructure solutions (compute, storage, network, database, and monitoring).
- Troubleshoot deployment, integration, and infrastructure issues across cloud environments.
- Collaborate with cross-functional teams to deliver infrastructure and DevOps solutions aligned with project goals.
- Support monitoring and performance optimization using Azure Monitor and other tools.

Required Qualifications & Skills:
- 8+ years of hands-on experience in Azure infrastructure and DevOps engineering.
- Deep expertise in Terraform, YAML, and Azure CLI/ARM templates.
- Strong hands-on experience with core Azure services: compute, networking, storage, app services, etc.
- Experience with CI/CD tools such as GitHub Actions, Azure DevOps, Bamboo, and Argo CD.
- Proficient in managing and deploying applications using Helm charts and Kubernetes.
- Proven experience in migrating cloud workloads from AWS to Azure.
- Strong knowledge of Azure IaaS/PaaS, containerization, and DevOps best practices.
- Excellent troubleshooting and debugging skills across build, deployment, and infrastructure pipelines.
- Strong verbal and written communication skills for collaboration and documentation.

Preferred Certifications (Nice to Have):
- AZ-400: Designing and Implementing Microsoft DevOps Solutions
- HashiCorp Certified: Terraform Associate

Good to Have Skills:
- Experience with Argo CD, Bamboo, Tekton, and other CI/CD tools.
- Familiarity with AWS services to support migration projects.
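The Terraform provisioning this listing centers on can be illustrated with a small sketch. Terraform also accepts a JSON variant of its configuration language (*.tf.json), so a minimal Azure resource-group definition can be generated programmatically; the resource label, name, and region below are hypothetical placeholders, not values from the listing:

```python
import json

def resource_group(label: str, name: str, location: str) -> dict:
    # Terraform JSON syntax (*.tf.json) mirrors the HCL block:
    #   resource "azurerm_resource_group" "<label>" { name = ..., location = ... }
    return {
        "resource": {
            "azurerm_resource_group": {
                label: {"name": name, "location": location}
            }
        }
    }

# Hypothetical values for illustration only.
config = resource_group("main", "rg-demo", "eastus")
print(json.dumps(config, indent=2))
```

Dumping this dict to a `.tf.json` file would give `terraform plan` a valid configuration, which is one way teams template IaC from code rather than hand-writing HCL.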

Posted 1 month ago

Apply

6.0 - 9.0 years

10 - 18 Lacs

Bengaluru

Hybrid

Role: Senior DevOps Engineer | Experience: 6 to 9 years | Location: Bangalore | Notice Period: Immediate or 15 days

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Ahmedabad

Work from Office

Job Title: DevOps Intern
Location: Ahmedabad (Work from Office)
Duration: 3 to 6 Months
Start Date: Immediate or As per Availability
Company: FX31 Labs

Role Overview:
We are looking for a motivated and detail-oriented DevOps Intern to join our engineering team. As a DevOps Intern, you will assist in designing, implementing, and maintaining CI/CD pipelines, automating workflows, and supporting infrastructure deployments across development and production environments.

Key Responsibilities:
- Assist in building and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI.
- Help in provisioning and managing cloud infrastructure (AWS, Azure, or GCP).
- Collaborate with developers to automate software deployment processes.
- Monitor and optimize system performance, availability, and reliability.
- Write basic scripts to automate repetitive DevOps tasks.
- Document internal processes, tools, and workflows.
- Support containerization (Docker) and orchestration (Kubernetes) initiatives.

Required Skills:
- Basic understanding of Linux/Unix systems and shell scripting.
- Familiarity with version control systems like Git.
- Knowledge of DevOps concepts like CI/CD, Infrastructure as Code (IaC), and automation.
- Exposure to tools like Docker, Jenkins, Kubernetes (even theoretical understanding is a plus).
- Awareness of at least one cloud platform (AWS, Azure, or GCP).
- Strong problem-solving attitude and willingness to learn.

Good to Have:
- Hands-on project or academic experience related to DevOps.
- Knowledge of Infrastructure as Code tools like Terraform or Ansible.
- Familiarity with monitoring tools (Grafana, Prometheus) or logging tools (ELK, Fluentd).

Eligibility Criteria:
- Pursuing or recently completed a degree in Computer Science, IT, or a related field.
- Available to work full-time from the Ahmedabad office for the duration of the internship.

Perks:
- Certificate of Internship & Letter of Recommendation (on successful completion).
- Opportunity to work on real-time projects with mentorship.
- PPO opportunity for high-performing candidates.
- Hands-on exposure to industry-level DevOps tools and cloud platforms.

About FX31 Labs:
FX31 Labs is a fast-growing tech company focused on building innovative solutions in AI, data engineering, and product development. We foster a learning-rich environment and aim to empower individuals through hands-on experience in real-world projects.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Automation Tester. In this role, the candidate will be responsible for ensuring high-quality software delivery by leveraging strong technical expertise and a meticulous approach to quality assurance.

Responsibilities
- Write and execute reusable, modular, and scalable automation test scripts for functional, regression, and end-to-end testing
- Design, develop, and maintain web automation frameworks using tools like Playwright
- Design, develop, and maintain mobile automation frameworks using tools like Appium
- Plan and manage test automation activities, ensuring timely execution
- Perform detailed root cause analysis up and down the software stack using a variety of debugging tools
- Integrate automated tests into CI/CD pipelines using tools like Jenkins
- Ensure application compatibility across different browsers, platforms, and devices
- Perform API testing and automation
- Maintain detailed test documentation, including test cases, automation strategy, execution logs, and defect reports
- Work closely with development, QA, and product teams to align automation strategies with project goals
- Manage test efforts using Jira for test case management and defect tracking
- Conduct thorough testing of applications, ensuring alignment with user requirements and business goals
- Conduct UAT as per the strict guidelines
- Research and implement best practices, tools, and methodologies to improve testing efficiency and coverage

Qualifications we seek in you!
Minimum Qualifications / Skills
- Strong understanding of QA methodologies, tools, and processes
- Solid understanding of Agile methodologies
- Strong experience in mobile testing and automation (both iOS and Android)
- Experience in writing and maintaining test documentation
- Extensive professional experience as a software engineer or SDET focusing on test automation
- Strong programming skills in languages such as TypeScript or JavaScript
- Strong experience in Appium for mobile automation
- Hands-on experience with modern testing tools and frameworks such as BrowserStack, WebdriverIO, Playwright, and others
- Experience with API testing and automation using tools like Postman, REST Assured, or Requests
- Basic knowledge of database technologies
- Experience setting up and maintaining CI/CD pipelines using tools like Jenkins, CircleCI, or GitHub Actions
- Excellent problem-solving skills and attention to detail
- Strong communication skills

Preferred Qualifications / Skills
- Experience doing accessibility testing
- Experience with performance testing
- Experience working with cloud microservice architectures
- Experience with Infrastructure as Code for use in cloud computing platforms such as AWS, Heroku, or GCP
- A degree in Computer Science, Engineering, or a similar field

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Responsibilities
- Design and implement cloud-based infrastructure (AWS, Azure, or GCP)
- Develop and maintain CI/CD pipelines to ensure smooth deployment and delivery processes
- Manage containerized environments (Docker, Kubernetes) and infrastructure-as-code (Terraform, Ansible)
- Monitor system health, performance, and security; respond to incidents and implement fixes
- Collaborate with development, QA, and security teams to streamline workflows and enhance automation
- Lead DevOps best practices and mentor junior engineers
- Optimize costs, performance, and scalability of infrastructure
- Ensure compliance with security standards and best practices

Requirements
- 5+ years of experience in DevOps, SRE, or related roles
- Strong experience with cloud platforms (AWS, Azure, GCP)
- Proficiency with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.)
- Expertise in container orchestration (Kubernetes, Helm)
- Solid experience with infrastructure-as-code (Terraform, CloudFormation, Ansible)
- Good knowledge of monitoring/logging tools (Prometheus, Grafana, ELK, Datadog)
- Strong scripting skills (Bash, Python, or Go)
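The incident-response and automation work described in listings like this one commonly relies on retry logic with backoff. Purely as a generic sketch (not code from any of these employers), capped exponential backoff delays can be computed like this:

```python
def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    # Delay doubles on each attempt (1s, 2s, 4s, ...) and is capped at `cap` seconds,
    # so repeated failures back off quickly without waiting unboundedly long.
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Production retry loops usually add random jitter on top of these delays so that many failing clients do not retry in lockstep.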

Posted 1 month ago

Apply

1.0 - 6.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Responsibilities
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
- Build reusable, maintainable frontend components using modern state management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles.
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on with state management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
- The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
- Strong collaboration abilities and a flexible, friendly approach to working with teams
- Strong determination with a constant eye on solutions
- Creative ideas with a problem-solving mindset
- Openness to receiving objective criticism and improving upon it
- Eagerness to learn and zeal to grow
- Strong communication skills (a huge plus)
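Several of these roles revolve around designing REST APIs. Purely as an illustration of the idea, and using only the Python standard library rather than the Express or FastAPI stacks the listing names, a minimal JSON health endpoint might look like:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serves GET /health with a small JSON body; everything else is 404."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

A client can then `GET http://127.0.0.1:<port>/health` and expect `{"status": "ok"}`; in a real service the same contract would typically live behind a framework route and be exercised from a CI pipeline.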

Posted 1 month ago

Apply

6.0 - 12.0 years

12 - 24 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 1200000 - Rs 2400000 (i.e., INR 12-24 LPA)
Min Experience: 6 years
Location: Hyderabad
Job Type: Full-time

We are looking for a seasoned Azure DevOps Engineer to lead the design, implementation, and management of DevOps practices within the Microsoft Azure ecosystem. The ideal candidate will bring deep expertise in automation, CI/CD pipelines, infrastructure as code (IaC), cloud-native tools, and security best practices. This position will collaborate closely with cross-functional teams to drive efficient, secure, and scalable DevOps workflows.

Requirements
Key Responsibilities:

DevOps & CI/CD Implementation
- Build and maintain scalable CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins.
- Automate software build, testing, and deployment processes to improve release cycles.
- Integrate automated testing, security scanning, and code quality checks into the pipeline.

Infrastructure as Code (IaC) & Cloud Automation
- Develop and maintain IaC templates using Terraform, Bicep, or ARM templates.
- Automate infrastructure provisioning, scaling, and monitoring across Azure environments.
- Ensure cloud cost optimization and resource efficiency.

Monitoring, Logging & Security
- Configure monitoring tools like Azure Monitor, App Insights, and Log Analytics.
- Apply Azure security best practices in CI/CD workflows and cloud architecture.
- Implement RBAC and Key Vault usage, and ensure policy and compliance adherence.

Collaboration & Continuous Improvement
- Work with development, QA, and IT teams to enhance DevOps processes and workflows.
- Identify and resolve bottlenecks in deployment and infrastructure automation.
- Stay informed about industry trends and the latest features in Azure DevOps and IaC tooling.

Required Skills & Experience:
- 5-7 years of hands-on experience in Azure DevOps and cloud automation
- Strong knowledge of:
  - Azure DevOps Services (Pipelines, Repos, Boards, Artifacts, Test Plans)
  - CI/CD tools: YAML Pipelines, GitHub Actions, Jenkins
  - Version control: Git (Azure Repos, GitHub, Bitbucket)
  - IaC: Terraform, Bicep, ARM templates
  - Containerization & Orchestration: Docker, Kubernetes (AKS)
  - Monitoring: Azure Monitor, App Insights, Prometheus, Grafana
  - Security: Azure Security Center, RBAC, Key Vault, Compliance Policy Management
- Familiarity with configuration management tools like Ansible, Puppet, or Chef (optional)
- Strong analytical and troubleshooting skills
- Excellent communication skills and ability to work in Agile/Scrum environments

Preferred Certifications:
- Microsoft Certified: Azure DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Administrator Associate (AZ-104)
- Certified Kubernetes Administrator (CKA) - optional

Skills: Azure | DevOps | CI/CD | GitHub Actions | Terraform | Infrastructure as Code | Kubernetes | Docker | Monitoring | Cloud Security

Posted 1 month ago

Apply

5.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Bengaluru

Hybrid

Position: PySpark Data Engineer
Location: Bangalore / Hyderabad
Experience: 5 to 9 Yrs
Job Type: On Role

Job Description: PySpark Data Engineer
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years of experience
4) Notice Period
5) Offer in hand
6) Reason for Change
7) Present Location
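The PySpark processing these data-engineer listings describe largely comes down to grouped aggregation over large datasets. PySpark itself is not shown here; as a plain-Python stand-in, the shape of a `df.groupBy("category").agg(F.sum("amount"))` job over toy data looks like:

```python
from collections import defaultdict

# Toy (category, amount) rows standing in for a Spark DataFrame.
rows = [("books", 12.0), ("games", 30.0), ("books", 8.0), ("games", 10.0)]

# Sum amounts per category -- the same logical operation Spark distributes
# across partitions when the dataset no longer fits on one machine.
totals = defaultdict(float)
for category, amount in rows:
    totals[category] += amount

print(dict(totals))  # {'books': 20.0, 'games': 40.0}
```

The point of Spark is that this per-key accumulation is sharded and shuffled across a cluster automatically, which is where the partitioning and tuning skills these listings ask for come in.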

Posted 1 month ago

Apply

3.0 - 5.0 years

1 - 3 Lacs

Chennai

Work from Office

**AWS Infrastructure Management:**
- Design, implement, and maintain scalable, secure cloud infrastructure using AWS services (EC2, Lambda, S3, RDS, CloudFormation/Terraform, etc.)
- Monitor and optimize cloud resource usage and costs

**CI/CD Pipeline Automation:**
- Set up and maintain robust CI/CD pipelines using tools such as GitHub Actions, GitLab CI, Jenkins, or AWS CodePipeline
- Ensure smooth deployment processes for staging and production environments

**Git Workflow Management:**
- Implement and enforce best practices for version control and branching strategies (Gitflow, trunk-based development, etc.)
- Support development teams in resolving Git issues and improving workflows

**Twilio Integration & Support:**
- Manage and maintain Twilio-based communication systems (SMS, Voice, WhatsApp, Programmable Messaging)
- Develop and deploy Twilio Functions and Studio Flows for customer engagement
- Monitor communication systems and troubleshoot delivery or quality issues

**Infrastructure as Code & Automation:**
- Use tools like Terraform, CloudFormation, or Pulumi for reproducible infrastructure
- Create scripts and automation tools to streamline routine DevOps tasks

**Monitoring, Logging & Security:**
- Implement and maintain monitoring/logging tools (CloudWatch, Datadog, ELK, etc.)
- Ensure adherence to best practices around IAM, secrets management, and compliance

**Requirements**
- 3-5+ years of experience in DevOps or a similar role
- Expert-level experience with **Amazon Web Services (AWS)**
- Strong command of **Git** and Git-based CI/CD practices
- Experience building and supporting solutions using **Twilio APIs** (SMS, Voice, Programmable Messaging, etc.)
- Proficiency in scripting languages (Bash, Python, etc.)
- Hands-on experience with containerization (Docker) and orchestration tools (ECS, EKS, Kubernetes)
- Familiarity with Agile/Scrum workflows and collaborative development environments

**Preferred Qualifications**
- AWS Certifications (e.g., Solutions Architect, DevOps Engineer)
- Experience with serverless frameworks and event-driven architectures
- Previous work with other communication platforms (e.g., SendGrid, Nexmo) a plus
- Knowledge of RESTful API development and integration
- Experience working in high-availability, production-grade systems
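The branching-strategy enforcement mentioned under Git Workflow Management often takes the form of a branch-name policy check in CI. The pattern below is a hypothetical Gitflow-style house convention, not a rule from this listing:

```python
import re

# Hypothetical convention: long-lived main/develop branches plus
# feature/*, release/*, and hotfix/* short-lived branches.
BRANCH_RE = re.compile(r"^(main|develop|(feature|release|hotfix)/[a-z0-9._-]+)$")

def is_valid_branch(name: str) -> bool:
    """Return True when `name` matches the Gitflow-style naming policy."""
    return BRANCH_RE.fullmatch(name) is not None

print(is_valid_branch("feature/sms-retry"))  # True
print(is_valid_branch("My Random Branch"))   # False
```

Wired into a CI job (e.g., checking the pushed ref name and failing the build on a mismatch), a check like this keeps branch hygiene enforced automatically instead of by review comments.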

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Pune

Work from Office

What You'll Do
Job Title: Security Automation Engineer - Integrated Engineering Systems
Location: #LI-Hybrid
Eligibility: 2-3 years of software engineering experience

Avalara is looking for a Security Automation Engineer to join our Integrated Engineering Systems team. In this role, you'll build and scale automated security tooling and integrate scanning pipelines into Avalara's core engineering systems. You will work closely with platform engineers, product teams, and DevSecOps to design scalable services and analytics dashboards that detect, track, and remediate vulnerabilities. This role is perfect for engineers who are passionate about security through automation, scaling secure development practices, and enabling teams to build safer software faster.

What Your Responsibilities Will Be
- Design, develop, and maintain microservices and security automation pipelines that integrate into Avalara's CI/CD and engineering systems.
- Build tools and services in Go to automate SAST, DAST, and SCA scanning workflows.
- Build internal tooling to identify gaps in security coverage and automate remediation recommendations.
- Partner with service owners to provide secure development guidance, build remediation playbooks, and enforce policy via automation.
- Implement dashboards using Snowflake, Hex, and Grafana to ingest and analyse security data, monitor pipeline health, and provide real-time visibility into scan reliability and security metrics for both engineering teams and leadership.

What You'll Need to Be Successful
Core Qualifications
- B.Tech or B.E. in Computer Science, Engineering, Math, or a related technical discipline.
- 2-5 years of software engineering experience, with 2 years of direct experience in platform security or DevSecOps teams.
- Proficiency in Golang, Python, Java, or .NET, with the ability to write clean, secure, and maintainable code.
- Understanding of OWASP Top 10, CWE Top 25, and secure software development practices.
- Experience with integrating and operating SAST, DAST, and SCA tools in CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab).
- Knowledge of AWS or GCP security services and infrastructure-as-code best practices.

Preferred Bonus Qualifications:
- Proven hands-on experience with Snowflake, Hex, and Grafana to build observability dashboards with alerts and SLA tracking.
- Security certifications.

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Summary:
We are looking for a skilled and proactive DevOps Engineer with 2+ years of experience in managing and automating cloud infrastructure, ensuring deployment security, and supporting CI/CD pipelines. The ideal candidate is proficient in tools like Docker, Kubernetes, and Terraform, and has hands-on experience with observability stacks such as Prometheus and Grafana. You will work closely with engineering teams to maintain uptime for media services, support ML model pipelines, and drive full-cycle Dev & Ops best practices.

Key Responsibilities:
- Design, deploy, and manage containerized applications using Docker and Kubernetes.
- Automate infrastructure provisioning and management using Terraform on AWS or GCP.
- Implement and maintain CI/CD pipelines with tools like Jenkins, ArgoCD, or GitHub Actions.
- Set up and manage monitoring, logging, and alerting systems using Prometheus, Grafana, and related tools.
- Ensure high availability and uptime for critical services, including media processing pipelines and APIs.
- Collaborate with development and ML teams to support model deployment workflows and infrastructure needs.
- Drive secure deployment practices, access control, and environment isolation.
- Troubleshoot production issues and participate in on-call rotations where required.
- Contribute to documentation and DevOps process optimization for better agility and resilience.

Qualifications:
- 2+ years of experience in DevOps, SRE, or cloud infrastructure roles.
- Hands-on experience with Docker, Kubernetes, and Terraform.
- Solid knowledge of CI/CD tooling (e.g., Jenkins, ArgoCD, GitHub Actions).
- Experience with observability tools such as Prometheus and Grafana.
- Familiarity with AWS or GCP infrastructure, including networking, compute, and IAM.
- Strong understanding of deployment security, versioning, and full lifecycle support.

Preferred Qualifications:
- Experience supporting media pipelines or AI/ML model deployment infrastructure.
- Understanding of DevSecOps practices and container security tools (e.g., Trivy, Aqua).
- Scripting skills (Bash, Python) for automation and tooling.
- Experience in managing incident response and performance optimization.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Responsibilities:
- Build and maintain an innovative web application.
- Analyse requirements, prepare high-level/low-level designs, and realize them with the project team.
- Lead a team of React.js and Node.js software engineers and take delivery responsibility for the team.
- Ensure the quality of the team's deliverables through stringent reviews and coaching.
- Provide estimates for complex and large projects; support the Project Manager in project planning.
- Form the bridge between the Software Engineers and the Solution Analysts and IT Architects.
- Discuss technical topics (with the SEs, as a specialist) as well as holistic, architecture topics (with the IT Architects, as a generalist).
- Translate complex content to different stakeholders, both technical (like software engineers) and functional (business), in a convincing and well-founded way, adapted to the target audience.
- Work in a support environment, with an eye for detail and a focus on optimizations.

Profile Description:
- Able to take care of all responsibilities mentioned in the section above.
- 5 to 8 years of experience working on full-stack web applications, of which at least 6 years in React.js and Node.js.
- Minimum 6 years of experience with React.js and Node.js (able to demonstrate contribution to a product build).
- Good knowledge of building reusable web components.
- Experience deploying software with a CI/CD approach; able to write well-documented and clean TypeScript code.
- Affinity with maintaining and evolving a codebase to nourish high-quality code.
- Knowing how to make the app accessible to all users is an expectation.
- Knowledge of the underlying frameworks Fastify and Remix.
- Experience with automated testing suites like Jest.
- Familiar with one or more CI/CD environments: GitLab CI, GitHub Actions, Circle CI, etc.
- Strong problem-solving and critical-thinking abilities.
- Good communication skills that facilitate interaction.
- Confident, detail-oriented, and highly motivated to be part of a high-performing team.
- A positive mindset and continuous learning attitude.
- Ability to work under pressure and adhere to tight deadlines.
- Familiar with SCRUM and Agile collaboration.
- Ensures compliance of project deliverables in line with project management methodologies.
- Exchanges expertise with other team members (knowledge sharing).
- Strong customer affinity, delivering highly performant applications and quick turnaround on bug fixes.
- Works in project teams and goes for success as a team; leads by example to drive the team's success on a technical level.
- Willing to work in both project and maintenance activities.
- Open to travel to Belgium.

Nice-to-have Competencies:
- Working experience in a SAFe environment is a plus.
- Experience working with European clients.

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 16 Lacs

Pune

Work from Office

Skills: Performance Testing, Databricks Pipeline

Key Responsibilities:
- Design and execute performance testing strategies specifically for Databricks-based data pipelines.
- Identify performance bottlenecks and provide optimization recommendations across Spark/Databricks workloads.
- Collaborate with development and DevOps teams to integrate performance testing into CI/CD pipelines.
- Analyze job execution metrics, cluster utilization, memory/storage usage, and latency across various stages of data pipeline processing.
- Create and maintain performance test scripts, frameworks, and dashboards using tools like JMeter, Locust, or custom Python utilities.
- Generate detailed performance reports and suggest tuning at the code, configuration, and platform levels.
- Conduct root cause analysis for slow-running ETL/ELT jobs and recommend remediation steps.
- Participate in production issue resolution related to performance and contribute to RCA documentation.

Technical Skills (Mandatory):
- Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems.
- Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization.
- 2+ years of experience in performance testing and load simulation of data pipelines.
- Solid skills in SQL, Snowflake, and analyzing performance via query plans and optimization hints.
- Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools.
- Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation.
- Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing.
- Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging.

Good to Have:
- Experience with job schedulers like Control-M, Autosys, or Azure Data Factory trigger flows.
- Exposure to CI/CD integration for automated performance validation.
- Understanding of network/storage I/O tuning parameters in cloud-based environments.
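The "custom Python utilities" the listing mentions for performance reporting could be as simple as percentile summaries over per-stage timings. A minimal sketch (the stage names and millisecond samples below are illustrative, not from any real workload):

```python
# Summarize pipeline-stage latencies into a small performance report.
import statistics


def latency_report(samples_ms):
    """Return p50/p95/max latency (ms) per pipeline stage."""
    report = {}
    for stage, timings in samples_ms.items():
        ordered = sorted(timings)
        report[stage] = {
            "p50": statistics.median(ordered),
            # nearest-rank p95 over the sorted samples
            "p95": ordered[max(0, int(len(ordered) * 0.95) - 1)],
            "max": ordered[-1],
        }
    return report


# Illustrative samples: transform shows a tail-latency outlier worth investigating.
samples = {
    "ingest": [120, 135, 110, 180, 500],
    "transform": [900, 950, 870, 2400, 910],
}
report = latency_report(samples)
```

In practice the timings would come from job run metadata or cluster logs rather than hard-coded lists; the outlier in a stage's p95/max is what typically triggers partition or configuration tuning.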

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 25 Lacs

Hyderabad, Bengaluru

Work from Office

Urgent Hiring for PySpark Data Engineer
Job Location: Bangalore and Hyderabad
Experience: 5-9 years
Share CV at Mohini.sharma@adecco.com or call 9740521948

Job Description:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.
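The GitHub Actions CI/CD work described above typically starts from a workflow file checked into the repository. A hypothetical minimal sketch (the file path, job name, action versions, and Python version are illustrative assumptions, not from this posting):

```yaml
# .github/workflows/ci.yml — hypothetical minimal CI workflow
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest
```

A real pipeline for this role would usually add further jobs, for example building and pushing a Docker image and deploying to Azure, gated on the test job succeeding.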

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 25 Lacs

Hyderabad, Bengaluru

Work from Office

PySpark Data Engineer

Job Description:
1. API Development: Design, develop, and maintain robust APIs using FastAPI and RESTful principles for scalable backend systems.
2. Big Data Processing: Leverage PySpark to process and analyze large datasets efficiently, ensuring optimal performance in big data environments.
3. Full-Stack Integration: Develop seamless backend-to-frontend feature integrations, collaborating with front-end developers for cohesive user experiences.
4. CI/CD Pipelines: Implement and manage CI/CD pipelines using GitHub Actions and Azure DevOps to streamline deployments and ensure system reliability.
5. Containerization: Utilize Docker for building and deploying containerized applications in development and production environments.
6. Team Leadership: Lead and mentor a team of developers, providing guidance, code reviews, and support to junior team members to ensure high-quality deliverables.
7. Code Optimization: Write clean, maintainable, and efficient Python code, with a focus on scalability, reusability, and performance.
8. Cloud Deployment: Deploy and manage applications on cloud platforms like Azure, ensuring high availability and fault tolerance.
9. Collaboration: Work closely with cross-functional teams, including product managers and designers, to translate business requirements into technical solutions.
10. Documentation: Maintain thorough documentation for APIs, processes, and systems to ensure transparency and ease of maintenance.

Highlighted Skillset:
- Big Data: Strong PySpark skills for processing large datasets.
- DevOps: Proficiency in GitHub Actions, CI/CD pipelines, Azure DevOps, and Docker.
- Integration: Experience in backend-to-frontend feature connectivity.
- Leadership: Proven ability to lead and mentor development teams.
- Cloud: Knowledge of deploying and managing applications in Azure or other cloud environments.
- Team Collaboration: Strong interpersonal and communication skills for working in cross-functional teams.
- Best Practices: Emphasis on clean code, performance optimization, and robust documentation.

Share your updated resume at siddhi.pandey@adecco.com or WhatsApp 6366783349.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Remote, India

On-site

Sr. Azure Cloud Engineer
Location: India

We are seeking an experienced Azure Cloud Engineer who specializes in migrating and modernizing applications to the cloud. The ideal candidate will have deep expertise in Azure Cloud, Terraform (Enterprise), containers (Docker), Kubernetes (AKS), CI/CD with GitHub Actions, and Python scripting. Strong soft skills are essential to communicate effectively with technical and non-technical stakeholders during migration and modernization projects.

Key Responsibilities:
- Lead and execute the migration and modernization of applications to Azure Cloud using containerization and re-platforming.
- Re-platform, optimize, and manage containerized applications using Docker and orchestrate them through Azure Kubernetes Service (AKS).
- Implement and maintain robust CI/CD pipelines using GitHub Actions to facilitate seamless application migration and deployment.
- Automate infrastructure and application deployments to ensure consistent, reliable, and scalable cloud environments.
- Write Python scripts to support migration automation, integration tasks, and tooling.
- Collaborate closely with cross-functional teams to ensure successful application migration, modernization, and adoption of cloud solutions.
- Define and implement best practices for DevOps, security, migration strategies, and the software development lifecycle (SDLC).
- Deploy infrastructure via Terraform (IAM, networking, security, etc.).

Non-Functional Responsibilities:
- Configure and manage comprehensive logging, monitoring, and observability solutions.
- Develop, test, and maintain Disaster Recovery (DR) plans and backup solutions to ensure cloud resilience.
- Ensure adherence to all applicable non-functional requirements, including performance, scalability, reliability, and security during migrations.

Required Skills and Experience:
- Expert-level proficiency in migrating and modernizing applications to Microsoft Azure Cloud services.
- Strong expertise in Terraform (Enterprise) for infrastructure automation.
- Proven experience with containerization technologies (Docker) and orchestration platforms (AKS).
- Extensive hands-on experience with GitHub Actions and building CI/CD pipelines specifically for cloud migration and modernization efforts.
- Proficient scripting skills in Python for automation and tooling.
- Comprehensive understanding of DevOps methodologies and the software development lifecycle (SDLC).
- Excellent communication, interpersonal, and collaboration skills.
- Demonstrable experience in implementing logging, monitoring, backups, and disaster recovery solutions within cloud environments.
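The Terraform-for-AKS work described in this listing typically centers on a cluster resource definition. A hypothetical minimal sketch using the `azurerm` provider (the cluster name, resource group, region, node count, and VM size are illustrative assumptions):

```hcl
# Hypothetical AKS cluster definition; values are placeholders.
resource "azurerm_kubernetes_cluster" "app" {
  name                = "app-aks"
  location            = "eastus"
  resource_group_name = "rg-app-migration"
  dns_prefix          = "app"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

In a Terraform Enterprise setup, a module like this would normally sit behind variables for environment-specific sizing, with networking and IAM resources defined alongside it.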

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

We are seeking a Senior DevOps Engineer to build pipeline automation, integrating DevSecOps principles into product build and release operations, and to mentor and guide DevOps teams, fostering a culture of technical excellence and continuous learning.

What You'll Do:
- Design & Architecture: Architect and implement scalable, resilient, and secure Kubernetes-based solutions on Amazon EKS.
- Deployment & Management: Deploy and manage containerized applications, ensuring high availability, performance, and security.
- Infrastructure as Code (IaC): Develop and maintain Terraform scripts for provisioning cloud infrastructure and Kubernetes resources.
- CI/CD Pipelines: Design and optimize CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD, along with automated builds, tests (unit, regression), and deployments.
- Monitoring & Logging: Implement monitoring, logging, and alerting solutions using Prometheus, Grafana, the ELK stack, or CloudWatch.
- Security & Compliance: Ensure security best practices in Kubernetes, including RBAC, IAM policies, network policies, and vulnerability scanning.
- Automation & Scripting: Automate operational tasks using Bash, Python, or Go for improved efficiency.
- Performance Optimization: Tune Kubernetes workloads and optimize cost/performance of Amazon EKS clusters.
- Test Automation & Regression Pipelines: Integrate automated regression testing and build sanity checks into pipelines to ensure high-quality releases.
- Security & Resource Optimization: Manage Kubernetes security (RBAC, network policies) and optimize resource usage with Horizontal Pod Autoscalers (HPA) and Vertical Pod Autoscalers (VPA).
- Collaboration: Work closely with development, security, and infrastructure teams to enhance DevOps processes.

Minimum Qualifications:
- Bachelor's degree (or above) in Engineering/Computer Science.
- 8+ years of experience in DevOps, Cloud, and Infrastructure Automation in a DevOps engineer role.
- Expertise with Helm charts, Kubernetes Operators, and Service Mesh (Istio, Linkerd, etc.).
- Strong expertise in Amazon EKS and Kubernetes (design, deployment, and management).
- Expertise in Terraform, Jenkins, and Ansible.
- Expertise with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD, etc.).
- Strong experience with monitoring and logging tools (Prometheus, Grafana, ELK, CloudWatch).
- Proficiency in Bash and Python for automation and scripting.
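The HPA-based resource optimization mentioned above is usually expressed as a small Kubernetes manifest. A hypothetical sketch (the Deployment name, replica bounds, and CPU threshold are illustrative assumptions):

```yaml
# Hypothetical HorizontalPodAutoscaler; scales a Deployment named "web"
# between 2 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

For the autoscaler to act on CPU utilization, the target pods need CPU resource requests set, since utilization is computed relative to the requested amount.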

Posted 1 month ago

Apply

4.0 - 8.0 years

13 - 18 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

PreAuth & DRG AI Transformation (Hybrid mode)
Location: Mumbai, Pune, Goa, Nagpur, Indore, Ahmedabad, Noida, Gurgaon, Bangalore, Hyderabad, Chennai, Jaipur, Kolkata, Kochi
Experience: 4-8 years

Primary Responsibilities:
- Collaborate with developers, managers, and other stakeholders to understand feature requirements.
- Design and execute detailed test cases and perform various testing types, including functional, regression, integration, system, smoke, and sanity testing.
- Design and execute API testing.
- Build and maintain automated test suites using frameworks like Selenium and TestNG.
- Log and track defects with comprehensive details.
- Collaborate with developers to ensure timely resolution and retesting of bugs.

Must-Have Skills:
- Manual Testing
- Automation Tools: Selenium, TestNG
- API Testing Tools: Postman, SoapUI
- Test Management and Defect Tracking: JIRA, TestRail

Nice-to-Have Skills: Jenkins, GitHub Actions, JMeter
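API testing of the kind described above ultimately comes down to asserting properties of response payloads. A minimal sketch of such a check in plain Python (the endpoint shape, field names, and payload are made-up stand-ins, not from any real API):

```python
# Validate a hypothetical order-API response: status must be "ok" and
# every line item must have a positive quantity.
import json


def validate_order_response(raw):
    """Return True if the JSON response passes the basic checks."""
    body = json.loads(raw)
    return body.get("status") == "ok" and all(
        item["qty"] > 0 for item in body.get("items", [])
    )


# Simulated response body; in a real suite this would come from the API
# under test (e.g. via a Postman/Newman run or an HTTP client call).
ok = validate_order_response('{"status": "ok", "items": [{"id": 1, "qty": 2}]}')
```

Checks like this are usually wrapped in a test framework (pytest, TestNG on the Java side) so failures are reported per-case rather than as a single boolean.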

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 18 Lacs

Thane, Mumbai (All Areas)

Work from Office

As a Software Developer, you will be responsible for the design and development of microservices, along with the implementation of desktop user interfaces, from development through to deployment, ensuring reliability, security, and maintainability of the codebase.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.

Contract To Hire (C2H) Role
Location: Gurgaon
Payroll: BCforward
Work Mode: Hybrid

JD: QA - Selenium, Playwright, Rest Assured, GitHub Actions, SQL

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can send an updated resume to g.sreekanth@bcforward.com.

Note: Looking for immediate to 15-day joiners at most. All the best!

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

DevOps Engineer (I04):

AT&T is one of the leading service providers in the telecommunication sector, and propelling it into the data- and AI-driven era is powered by the CDO (Chief Data Office). CDO is empowering AT&T, through execution, self-service, and as a data and AI center of excellence, to unlock transformative insights and actions that drive value for the company and its customers. Employees at CDO imagine, innovate, and unlock data- and AI-driven insights and actions that create value for our customers and the enterprise. As part of the work, we govern data collection and use, mitigate potential bias in machine learning models, and encourage an enterprise culture of responsible AI.

AT&T's Chief Data Office (CDO) is harnessing data and making AT&T's data assets and ground-breaking AI functionality accessible to employees across the firm. In addition, our talented employees are a significant component that contributes to AT&T's place as the U.S. company with the sixth most AI-related patents. CDO also maintains academic and tech partnerships to cultivate the next generation of experts in statistics and machine learning, statistical computing, data visualization, text mining, time series modelling, data stream and database management, data quality and anomaly detection, data privacy, and more.

Position Overview:
We are seeking a skilled and passionate DevOps Engineer to join our dynamic team. The ideal candidate will possess extensive experience in Linux systems, cloud platforms, automation tools, and a variety of scripting and coding languages. As a DevOps Engineer, you will be responsible for the design, development, security compliance, and recurring maintenance of our infrastructure, ensuring robust, scalable, and secure solutions. The ideal candidate should excel in a dynamic business setting, effectively manage multiple projects, demonstrate a willingness to learn, possess self-motivation, and work collaboratively as part of a team.

Key Responsibilities:
- Work within both Azure cloud and on-premises Linux environments, with a solid understanding of cloud security and cloud administration.
- Perform recurring and ad-hoc security compliance, image updates, and security remediation.
- Build and maintain CI/CD pipelines using tools such as Terraform, Ansible, or Jenkins; implement updates to existing deployment pipelines and handle the deployments.
- Apply strong scripting skills in languages such as Perl, Python, and Bash.
- Work with containerization platforms like OpenShift or Kubernetes in a Linux environment (highly preferred).
- Support database deployment and operations within cloud environments (desirable).

Desired Skills & Qualifications:
- Advanced knowledge and hands-on experience with Red Hat, Rocky, and Ubuntu.
- Strong understanding of Azure security, administration, and architecture; ADO Pipeline CI/CD, Terraform, GitHub Actions, PowerShell, Databricks, Snowflake, Event Hubs.
- Experience with Ansible and Jenkins for automated deployments.
- Expertise in KSH/Bash, Python, C/C++, and Java.
- Proficient in using Visual Studio Code, GitHub, and JFrog.
- Proficient in RHEL KVM, OpenShift, Kubernetes, Redis, and Kafka.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Excellent communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
- Flexible to work from the office 3 days a week, from 12:30pm to 9:30pm.

Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
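The security-compliance scripting this role calls for often reduces to comparing installed package versions against required minimums. A minimal Python sketch (the package names and version numbers are made-up examples, not a real baseline):

```python
# Flag packages whose installed version is below a required minimum.
def version_tuple(v):
    """Convert a dotted version string like '3.0.7' to a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def non_compliant(installed, minimums):
    """Return the sorted names of packages below their required minimum."""
    return sorted(
        name
        for name, required in minimums.items()
        if version_tuple(installed.get(name, "0")) < version_tuple(required)
    )


# Illustrative inventory vs. baseline; openssl is below the minimum.
installed = {"openssl": "3.0.2", "sudo": "1.9.5"}
minimums = {"openssl": "3.0.7", "sudo": "1.9.5"}
flagged = non_compliant(installed, minimums)
```

In a real remediation workflow the `installed` mapping would be collected from the hosts (e.g. from the package manager) and the flagged list fed into patching or ticketing automation. Note that real distribution version strings often carry suffixes (epochs, release tags) that this simple dotted-tuple comparison does not handle.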

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Req ID: 327620

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer with the .NET framework to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities:
- Design, build, and maintain CI/CD pipelines for .NET applications using tools like Azure DevOps, GitHub Actions, or Jenkins.
- Automate build, test, and deployment processes with a strong emphasis on security and reliability.
- Collaborate with software engineers and QA teams to ensure automated testing and code coverage practices are embedded into the pipelines.
- Monitor and troubleshoot build failures and deployment issues.
- Manage build artifacts, versioning strategies, and release orchestration.
- Integrate static code analysis, security scanning (SAST/DAST), and compliance checks into the pipelines.
- Support infrastructure-as-code deployments using tools like Terraform, Azure Bicep, or ARM templates.
- Maintain and improve documentation for build/release processes, infrastructure, and tooling standards.
- Contribute to DevOps best practices and help shape our CI/CD strategy as we move toward cloud-native architecture and Cloud 3.0 adoption.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 3+ years of experience in DevOps, CI/CD, or build and release engineering.
- Hands-on experience with .NET Core / .NET Framework build and deployment processes.
- Strong experience with Azure DevOps Pipelines (YAML and Classic).
- Familiarity with Git, NuGet, NUnit/xUnit, SonarQube, OWASP ZAP, etc.
- Experience deploying to Azure App Services, Azure Kubernetes Service (AKS), or Azure Functions.
- Experience with Docker and container-based deployments.
- Working knowledge of infrastructure-as-code (Terraform, Bicep, or similar).
- Understanding of release management and software development lifecycle (SDLC) best practices.
- Excellent problem-solving, collaboration, and communication skills.

Screening Questions:
- Can you walk us through how you would design a CI/CD pipeline for a .NET application using Azure DevOps, from build to deployment?
- How do you handle secrets and sensitive configuration values in a CI/CD pipeline?
- Have you ever had to troubleshoot a failing deployment in production? What tools did you use to diagnose the issue, and how did you resolve it?

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at

NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
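A YAML-based Azure DevOps pipeline of the kind this role maintains might start from a build/test skeleton like the following hypothetical sketch (the trigger branch, agent image, .NET version, and task versions are illustrative assumptions):

```yaml
# azure-pipelines.yml — hypothetical minimal build/test pipeline for a .NET service
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UseDotNet@2
    inputs:
      version: "8.x"
  - script: dotnet restore
  - script: dotnet build --configuration Release --no-restore
  - script: dotnet test --no-build --configuration Release
```

Secrets would normally be kept out of this file entirely, supplied instead via variable groups or Azure Key Vault references, which is the usual answer to the screening question above.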

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and streaming pipelines, using services like S3, Glue, Redshift, Athena, EMR, Lake Formation, and Kinesis. Develop and orchestrate ETL/ELT pipelines.

Required Candidate Profile: Participate in pre-sales and consulting activities, such as engaging with clients to gather requirements and propose AWS-based data engineering solutions, and supporting RFPs/RFIs and technical proposals.
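At its core, the ETL/ELT pipeline work described here follows an extract-transform-load shape. A deliberately minimal plain-Python sketch of that shape (a stand-in for a Glue/PySpark job; the column names and CSV data are illustrative):

```python
# Minimal ETL shape: extract rows from CSV text, transform by filtering
# and casting, and "load" into an in-memory list.
import csv
import io

RAW = "order_id,amount\n1,250\n2,0\n3,125\n"


def run_etl(raw_csv):
    rows = csv.DictReader(io.StringIO(raw_csv))  # extract
    cleaned = [
        {"order_id": int(r["order_id"]), "amount": int(r["amount"])}
        for r in rows
        if int(r["amount"]) > 0  # transform: drop zero-amount rows
    ]
    return cleaned  # load (here: in memory; in Glue, to S3/Redshift)


orders = run_etl(RAW)
```

In an actual AWS pipeline, the extract step would read from S3, the transform would run as a Glue or EMR (Spark) job, and the load would target Redshift or a partitioned S3 data lake registered in the Glue catalog.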

Posted 1 month ago

Apply