
117 GLM Jobs - Page 3

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

12 - 16 Lacs

Pune

Work from Office

We are on the lookout for a hands-on DevOps / SRE expert who thrives in a dynamic, cloud-native environment. Join a high-impact project where your infrastructure and reliability skills will shine.
Key Responsibilities: Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps). Manage observability tooling: logs, metrics, traces, and alerts. Tune backend services and GKE workloads (Node.js, Django, Go, Java). Build and manage Terraform infrastructure (VPC, CloudSQL, Pub/Sub, Secrets). Lead incident response and perform root cause analyses. Standardize secrets, tagging, and infrastructure consistency across environments. Enhance CI/CD pipelines and collaborate on better rollout strategies.
Must-Have Skills: 5-10 years in DevOps / SRE / Infrastructure roles. Kubernetes (GKE preferred). IaC with Terraform and Helm. CI/CD: GitHub Actions plus GitOps (ArgoCD / Flux). Cloud architecture expertise (IAM, VPC, Secrets). Strong scripting/coding and backend debugging skills (Node.js, Django, etc.). Incident management with tools like Datadog and PagerDuty. Excellent communicator and documenter.
Tech Stack: GKE, Kubernetes, Terraform, Helm. GitHub Actions, ArgoCD / Flux. Datadog, PagerDuty. CloudSQL, Cloudflare, IAM, Secrets.
You're a proactive team player and strong individual contributor, confident yet humble, curious, driven and always learning, and not afraid to solve deep infrastructure challenges. (ref:hirist.tech)
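For illustration only, a minimal sketch of the kind of promotion gate a canary rollout like the one described above might use, assuming a Prometheus-style metrics endpoint; the URL, metric names, and threshold are placeholders, not part of the posting:

```python
"""Canary promotion gate sketch: promote only if the canary's 5xx rate stays
below a threshold. The Prometheus URL, metric labels, and threshold are
illustrative placeholders."""
import sys
import requests

PROM_URL = "http://prometheus.monitoring:9090/api/v1/query"  # assumed address
ERROR_RATE_QUERY = (
    'sum(rate(http_requests_total{deployment="checkout-canary",code=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{deployment="checkout-canary"}[5m]))'
)
MAX_ERROR_RATE = 0.01  # promote only if the canary 5xx rate stays under 1%


def canary_error_rate() -> float:
    """Query the Prometheus HTTP API and return the current canary error rate."""
    resp = requests.get(PROM_URL, params={"query": ERROR_RATE_QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


if __name__ == "__main__":
    rate = canary_error_rate()
    if rate > MAX_ERROR_RATE:
        print(f"Canary error rate {rate:.2%} exceeds threshold; roll back")
        sys.exit(1)
    print(f"Canary error rate {rate:.2%} within budget; safe to promote")
```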

Posted 1 month ago

Apply

6.0 - 11.0 years

27 - 32 Lacs

Pune

Work from Office

Job Title: Service Operations Specialist, AVP
Location: Pune, India
Role Description: Private Bank Germany Service Operations provides 2nd-level application support for business applications used in branches, by mobile sales, or via the internet. The department is responsible for the stability of these applications; Incident Management and Problem Management are the main processes that deliver the required stability. In-depth application knowledge and understanding of the business processes that the applications support are our main assets.
What we'll offer you: Best-in-class leave policy. Gender-neutral parental leaves. 100% reimbursement under childcare assistance benefit (gender neutral). Flexible working arrangements. Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for 35 yrs. and above.
Your key responsibilities (Experience: 10+ years): Monitor production systems for performance, availability, and anomalies. Collaborate with development teams on bug fixes and enhancements. Provide application support by handling and consulting on BAU incidents, emails, and alerts for the respective applications. Act as an escalation point for user issues and requests and for Level 1/L2 support. Report issues to management. Manage and mentor the regional L2 team so that it is up to speed and picks up the support duties. Gain detailed knowledge of all business flows, the application architecture, and the hardware configuration for supported applications. Define, document, and maintain procedures, SLAs, and a knowledge base to support the platforms and ensure consistent service levels across the global support team. Build and maintain effective and productive relationships with stakeholders in business, development, infrastructure, and third-party systems / data providers. Manage incidents through resolution, keeping all stakeholders abreast of the situation and working to minimize impact wherever possible. Conduct post-mortems of incidents and feed the findings into Incident, Problem, and Change management programs. Facilitate coordination across L1/L2 and L3/Engineering teams to investigate and resolve infrastructure, platform, or application issues impacting multiple business lines. Drive the development and implementation of the tools and best practices needed to provide effective support. Collaborate on and deliver initiatives that drive stability in the environment. Assist in approving new releases and production configuration changes; ensure development includes all necessary documentation for each change and conduct post-release testing where required. Review all open production items with the development team and push for updates and resolutions to outstanding tasks and recurring issues. Regularly review and analyze service requests and issues that are raised; seek to improve the process and remove recurring tasks where possible. Review existing monitoring for the platform and make improvements where possible. The candidate will work in shifts as part of a rota covering EMEA hours; in the event of major outages or issues we may ask for flexibility to help provide appropriate cover.
Your skills and experience
Business and technical competency: Hands-on experience in the banking domain and technology. Credit card business and operations knowledge is a must.
Technologies: Hands-on experience with log analysers such as Splunk (primarily). Knowledge of container platforms like Kubernetes / OpenShift / GKE. Knowledge of observability tools like New Relic. Hands-on experience with job scheduling tools and SQL / Oracle databases. Strong understanding of SOAP and REST API technologies. Knowledge of IBM MQ and SFTP is an added advantage. Basic understanding of Helm and GitHub.
Incident and Operations Management: Strong knowledge of incident management processes and ITIL concepts. Strong skills in application monitoring and performance, troubleshooting, and root cause analysis.
Soft Skills: Excellent problem-solving abilities in high-pressure scenarios. Strong communication skills to work effectively with stakeholders and cross-functional teams. Ability to prioritize tasks and manage time effectively in a fast-paced environment. English language skills mandatory; German at CEFR A1 level highly desirable.
Education: Bachelor's degree from an accredited college or university with a concentration in an IT or Computer Science related discipline (or equivalent diploma or technical faculty).
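For illustration, a short sketch of the sort of incident metric (mean time to resolve) an L2 support team like this one might track; the records below are hypothetical sample data, not from the posting:

```python
"""Mean-time-to-resolve (MTTR) sketch over a list of incident records.
The records below are hypothetical sample data."""
from datetime import datetime, timedelta

incidents = [
    {"id": "INC001", "opened": "2024-05-01T08:00", "resolved": "2024-05-01T09:30"},
    {"id": "INC002", "opened": "2024-05-02T14:00", "resolved": "2024-05-02T14:45"},
]


def mttr(records: list[dict]) -> timedelta:
    """Average time between an incident being opened and resolved."""
    durations = [
        datetime.fromisoformat(r["resolved"]) - datetime.fromisoformat(r["opened"])
        for r in records
    ]
    return sum(durations, timedelta()) / len(durations)


print(f"MTTR over {len(incidents)} incidents: {mttr(incidents)}")
```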

Posted 1 month ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Pune

Work from Office

Role Overview: As a Senior Principal Software Engineer, you will be a key technical leader responsible for shaping the design and development of scalable, reliable, and innovative AI/GenAI solutions. You will lead high-priority projects, set technical direction for teams, and ensure alignment with organizational goals. This role demands a high degree of technical expertise, strategic thinking, and the ability to collaborate effectively across diverse teams while mentoring and elevating others to meet a very high technical bar.
Key Responsibilities:
Strategic Technical Leadership: Define and drive the technical vision and roadmap for AI/GenAI systems, aligning with company objectives and future growth. Provide architectural leadership for complex, large-scale AI systems, ensuring scalability, performance, and maintainability. Act as a thought leader in AI technologies, influencing cross-functional technical decisions and long-term strategies.
Advanced AI Product Development: Lead the development of state-of-the-art generative AI solutions, leveraging advanced techniques such as transformer models, diffusion models, and multi-modal architectures. Drive innovation by exploring and integrating emerging AI technologies and best practices.
Mentorship & Team Growth: Mentor senior and junior engineers, fostering a culture of continuous learning and technical excellence. Elevate the team's capabilities through coaching, training, and guidance on best practices and complex problem-solving.
End-to-End Ownership: Take full ownership of high-impact projects, from ideation and design to implementation, deployment, and monitoring in production. Ensure the successful delivery of projects with a focus on quality, timelines, and alignment with organizational goals.
Collaboration & Influence: Collaborate with cross-functional teams, including product managers, data scientists, and engineering leadership, to deliver cohesive and impactful solutions. Act as a trusted advisor to stakeholders, clearly articulating technical decisions and their business impact.
Operational Excellence: Champion best practices for software development, CI/CD, and DevOps, ensuring robust and reliable systems. Monitor and improve the health of deployed services, conducting root cause analyses and driving preventive measures for long-term reliability.
Innovation & Continuous Improvement: Advocate for and lead the adoption of new tools, frameworks, and methodologies to enhance team productivity and product capabilities. Stay at the forefront of AI/GenAI research, driving thought leadership and contributing to the AI community through publications or speaking engagements.
Minimum Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field; a Ph.D. is preferred but not required.
Experience: 10+ years of professional software development experience, including 5+ years in AI/ML or GenAI. Proven track record of designing and deploying scalable, production-grade AI solutions. Deep expertise in Python and frameworks such as TensorFlow, PyTorch, FastAPI, and LangChain. Advanced knowledge of AI/ML algorithms, generative models, and LLMs. Proficiency with cloud platforms (e.g., GCP, AWS, Azure) and modern DevOps practices. Strong understanding of distributed systems, microservices architecture, and database systems (SQL/NoSQL).
Leadership Skills: Demonstrated ability to lead complex technical initiatives, influence cross-functional teams, and mentor engineers at all levels.
Problem-Solving Skills: Exceptional analytical and problem-solving skills, with a proven ability to navigate ambiguity and deliver impactful solutions.
Collaboration: Excellent communication and interpersonal skills, with the ability to engage and inspire both technical and non-technical stakeholders.
Preferred Qualifications:
AI/ML Expertise: Experience with multi-modal models, reinforcement learning, and responsible AI principles.
Cloud & Infrastructure: Advanced knowledge of GCP technologies such as Vertex AI, BigQuery, GKE, and Dataflow.
Thought Leadership: Contributions to the AI/ML community through publications, open-source projects, or speaking engagements.
Agile Experience: Familiarity with agile methodologies and working in a DevOps model.
Disability Accommodation: UKGCareers@ukg.com.
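As a purely illustrative sketch of how a generative AI capability might be exposed behind FastAPI (one of the frameworks this role lists): the endpoint name and the generate_text placeholder are assumptions, not the employer's implementation:

```python
"""Sketch of exposing a text-generation capability behind a FastAPI endpoint.
generate_text is a placeholder for a real model call (e.g. via LangChain or a
Vertex AI client); nothing here is a specific vendor API."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI demo service")


class Prompt(BaseModel):
    text: str
    max_tokens: int = 128


def generate_text(prompt: str, max_tokens: int) -> str:
    # Placeholder: swap in an actual LLM call here.
    return f"[generated continuation of: {prompt[:40]}...]"


@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Run with: uvicorn app:app --reload  (module name is an assumption)
    return {"completion": generate_text(prompt.text, prompt.max_tokens)}
```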

Posted 1 month ago

Apply

10.0 - 15.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Key Responsibilities: Develop scalable, secure, and high-performance web applications using Java on the back end and modern front-end frameworks. Work closely with product owners, UI/UX designers, and other developers to implement user-friendly features and functionality. Design and implement RESTful APIs and integrate third-party services. Leverage GCP services (e.g., App Engine, Cloud Functions, Cloud Run, Pub/Sub, Cloud Storage) for cloud-native architecture. Build and maintain reusable code and libraries for future use. Write unit, integration, and end-to-end tests to ensure code quality. Participate in code reviews and agile development ceremonies (sprint planning, retrospectives, etc.). Implement CI/CD pipelines using GCP tools (Cloud Build, Cloud Deploy) or other tools like Jenkins, GitHub Actions, etc. Collaborate with DevOps teams to deploy applications to cloud platforms.
Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5-8 years of experience as a full stack developer. Strong back-end experience in Java (Java 8 or higher), Spring Boot, Hibernate/JPA. Proficient in front-end technologies such as HTML5, CSS3, JavaScript. Hands-on experience with modern JavaScript frameworks (React.js, Angular, or Vue.js). Knowledge of RESTful APIs and microservices architecture. Experience with relational databases like MySQL, PostgreSQL, or Oracle. Familiarity with build tools and version control (Maven/Gradle, Git). Experience in building CI/CD pipelines and deployment practices. Experience with unit testing frameworks (JUnit, Mockito, Jasmine, etc.). Experience with Kubernetes (GKE preferred). Experience with serverless computing and microservices architecture. GCP certification (Associate Cloud Engineer, Professional Cloud Developer, etc.) desirable.

Posted 1 month ago

Apply

3.0 - 7.0 years

37 - 40 Lacs

Bengaluru

Work from Office

Job Title: DevOps Engineer, AS
Location: Bangalore, India
Role Description: Deutsche Bank has set itself ambitious goals in the areas of Sustainable Finance, ESG Risk Mitigation, and Corporate Sustainability. As climate change brings new challenges and opportunities, the Bank has set out to invest in a Sustainability Technology Platform, sustainability data products, and various sustainability applications that will aid the Bank's goals. As part of this initiative, we are building an exciting global team of technologists who are passionate about climate change and want to contribute to the greater good by leveraging their technology skill set in cloud / hybrid architecture. As part of this role, we are seeking a highly skilled and experienced DevOps Engineer to join our growing team. You will play a pivotal part in managing and optimizing cloud infrastructure, facilitating continuous integration and delivery, and ensuring system reliability.
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance.
Your key responsibilities: Create, implement, and oversee scalable, secure, and cost-efficient cloud infrastructure on Google Cloud Platform (GCP). Utilize Infrastructure as Code (IaC) methodologies with tools such as Terraform, Deployment Manager, or alternatives. Implement robust security measures to ensure data access control and compliance with regulations. Adopt security best practices, establish IAM policies, and ensure adherence to both organizational and regulatory requirements. Set up and manage Virtual Private Clouds (VPCs), subnets, firewalls, VPNs, and interconnects to facilitate secure cloud networking. Establish continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitHub Actions, or comparable tools for automated application deployments. Implement monitoring and alerting solutions through Stackdriver (Cloud Operations), Prometheus, or other third-party applications. Evaluate and optimize cloud expenditure by utilizing committed use discounts, autoscaling features, and resource rightsizing. Manage and deploy containerized applications through Google Kubernetes Engine (GKE) and Cloud Run. Deploy and manage GCP databases such as Cloud SQL and BigQuery.
Your skills and experience: Minimum of 5+ years of experience in DevOps or similar roles, with hands-on experience in GCP. In-depth knowledge of Google Cloud services (e.g., GCE, GKE, Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud Storage) and the ability to architect, deploy, and manage cloud-native applications. Proficient in tools like Jenkins, GitLab, Terraform, Ansible, Docker, and Kubernetes. Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or GCP-native Deployment Manager. Solid understanding of security protocols, IAM, networking, and compliance requirements within cloud environments. Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure. Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus.
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively.
Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
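For illustration, a hedged sketch of one IaC guardrail a role like this might script: summarising a Terraform plan before approval, assuming the plan was exported with `terraform show -json tfplan > plan.json` (the file name is a placeholder):

```python
"""Summarise a Terraform plan before approval. Assumes the plan was exported
with `terraform show -json tfplan > plan.json`; the file name is a placeholder."""
import json
from collections import Counter


def summarise_plan(path: str = "plan.json") -> Counter:
    """Count create/update/delete actions in the exported plan JSON."""
    with open(path) as fh:
        plan = json.load(fh)
    actions: Counter = Counter()
    for change in plan.get("resource_changes", []):
        for action in change["change"]["actions"]:
            actions[action] += 1
    return actions


if __name__ == "__main__":
    counts = summarise_plan()
    print(f"create={counts['create']} update={counts['update']} delete={counts['delete']}")
```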

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

Design, implement, and manage scalable, secure, and highly available infrastructure on GCP. Automate infrastructure provisioning using tools like Terraform or Deployment Manager. Build and manage CI/CD pipelines using Jenkins, GitLab CI, or similar tools. Manage containerized applications using Kubernetes (GKE) and Docker. Monitor system performance and troubleshoot infrastructure issues using tools like Stackdriver, Prometheus, or Grafana. Implement security best practices across cloud infrastructure and deployments. Collaborate with development and operations teams to streamline release processes. Ensure high availability, disaster recovery, and backup strategies are in place. Participate in performance tuning and cost optimization of GCP resources.
Strong hands-on experience with Google Cloud Platform (GCP) services; Harness is an optional skill. Proficiency in Infrastructure as Code tools like Terraform or Google Deployment Manager. Experience with Kubernetes (especially GKE) and Docker. Knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI, or CircleCI. Familiarity with scripting languages (e.g., Bash, Python). Experience with logging and monitoring tools (e.g., Stackdriver, Prometheus, ELK, Grafana). Understanding of networking, security, and IAM in a cloud environment. Strong problem-solving and communication skills. Experience in Agile environments and DevOps culture. GCP Associate or Professional Cloud DevOps Engineer certification. Experience with Helm, ArgoCD, or other GitOps tools. Familiarity with other cloud platforms (AWS, Azure) is a plus. Knowledge of application performance tuning and cost management on GCP.
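By way of illustration only, a small sketch of the kind of GKE health check the monitoring work above might automate, using kubectl's JSON output; the namespace and cluster context are assumptions:

```python
"""Flag deployments whose ready replica count lags the desired count, using
kubectl's JSON output. Assumes kubectl is installed and pointed at the right
cluster; the namespace is a placeholder."""
import json
import subprocess


def unhealthy_deployments(namespace: str = "default") -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for item in json.loads(out)["items"]:
        desired = item["spec"].get("replicas", 0)
        ready = item["status"].get("readyReplicas", 0)
        if ready < desired:
            flagged.append(f"{item['metadata']['name']}: {ready}/{desired} ready")
    return flagged


if __name__ == "__main__":
    for line in unhealthy_deployments():
        print(line)
```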

Posted 1 month ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Hyderabad

Work from Office

We are seeking a skilled and proactive DevOps Engineer with deep expertise in Google Cloud Platform (GCP), Google Kubernetes Engine (GKE), and on-premises Kubernetes platforms like OpenShift. The ideal candidate will have a strong foundation in Infrastructure as Code (IaC) using Terraform, and a solid understanding of cloud-native networking, service meshes (e.g., Istio), and CI/CD pipelines. Experience with DevSecOps practices and security tools is highly desirable.
Key Responsibilities: Design, implement, and manage scalable infrastructure on GCP (especially GKE, Google's managed Kubernetes environment) and on-prem Kubernetes (OpenShift). Develop and maintain Terraform modules for infrastructure provisioning and configuration. Troubleshoot and resolve complex issues related to networking, Istio, and Kubernetes clusters. Build and maintain CI/CD pipelines using tools such as Jenkins, Codefresh, or GitHub Actions. Integrate and manage DevSecOps tools such as Black Duck, Checkmarx, Twistlock, and Dependabot to ensure secure software delivery. Collaborate with development and security teams to enforce security best practices across the SDLC. Support and configure WAFs and on-prem load balancers as needed.
Required Skills & Qualifications: 5+ years of experience in a DevOps or Site Reliability Engineering role. Proficiency in GCP and GKE, with hands-on experience in OpenShift or similar on-prem Kubernetes platforms. Strong experience with Terraform and managing cloud infrastructure as code. Solid understanding of Kubernetes networking, Istio, and service mesh architectures. Experience with at least one CI/CD tool: Jenkins, Codefresh, or GitHub Actions. Familiarity with DevSecOps tools such as Black Duck, Checkmarx, Twistlock, and Dependabot. Strong Linux administration and scripting skills.
Nice to Have: Experience with WAFs and on-prem load balancers. Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack, Dynatrace, Splunk). Knowledge of container security and vulnerability scanning best practices. Familiarity with GenAI and Google Vertex AI on Google Cloud.
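As an illustrative sketch (not tied to any specific scanner named above), the kind of CI gate a DevSecOps pipeline might add after a vulnerability scan; the JSON report format here is hypothetical:

```python
"""CI gate sketch: fail the build when a scan report contains blocking
findings. The report format (a list of {"id", "severity"} objects) is
hypothetical, not the output of any specific scanner."""
import json
import sys

BLOCKING = {"CRITICAL", "HIGH"}


def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)
    blockers = [f for f in findings if f.get("severity", "").upper() in BLOCKING]
    for finding in blockers:
        print(f"{finding.get('id', '?')}: {finding['severity']}")
    return 1 if blockers else 0  # non-zero exit fails the pipeline step


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```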

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP S/4HANA for Product Compliance. Good-to-have skills: NA. Minimum 12 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As an Application Lead, you will lead end-to-end SAP EHS Global Label Management (GLM) implementations within S/4HANA Product Compliance projects. You'll be responsible for project delivery, stakeholder engagement, and ensuring regulatory alignment.
Roles & Responsibilities: Manage full-cycle implementation of SAP GLM. Define labelling strategies and oversee WWI template delivery. Coordinate cross-functional teams across product safety, compliance, and regulatory domains. Collaborate with clients and business users to gather requirements and translate them into effective EHS solutions. Configure and maintain the SAP EHS Product Safety module, including specifications, phrase management, and data architecture. Design and validate WWI report templates and guide ABAP developers with symbol logic, layout, and enhancements. Implement and support SAP GLM (Global Label Management), including label determination, print control, and output conditions.
Professional & Technical Skills: Must-have: proficiency in SAP EH&S GLM with end-to-end implementation experience. Deep expertise in SAP GLM, label determination logic, and print control setup. Strong knowledge of S/4HANA Product Compliance architecture. Excellent communication and team management skills. 8+ years in SAP EHS with 2+ full-cycle GLM implementations.
Additional Information: This position is based at our Hyderabad office. A 15 years full time education is required.
Qualification: 15 years full time education

Posted 1 month ago

Apply

7.0 - 9.0 years

9 - 13 Lacs

Hyderabad, Pune

Work from Office

Key Responsibilities:
1. Cloud Infrastructure Management: Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP). Implement best practices for GCP IAM, VPCs, Cloud Storage, onboarding of tools such as ClickHouse and Apache Superset, and other GCP services.
2. Kubernetes and Containerization: Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications. Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines: Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD. Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance: Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption. Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support: Work closely with development teams to containerize applications and ensure smooth deployment on GCP. Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization: Monitor and optimize GCP resource usage to ensure cost efficiency. Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications: Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise: Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools: Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build. Experience with containerization tools like Docker.
4. Kubernetes Expertise: In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets. Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting: Strong scripting skills in Python, Bash, or Go. Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging: Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking: Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills: Strong problem-solving and troubleshooting skills. Excellent communication and collaboration abilities. Ability to work in an agile, fast-paced environment.
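Purely as an illustration of the cost-optimization arithmetic mentioned in item 6 above, a back-of-envelope sketch; the node pools and hourly rates are placeholders, not actual GCP pricing:

```python
"""Back-of-envelope node-pool cost estimate for rightsizing discussions.
The machine types, node counts, and hourly rates are placeholders, not
actual GCP pricing."""
HOURS_PER_MONTH = 730

node_pools = [  # (machine type, node count, assumed $/hour per node)
    ("e2-standard-4", 6, 0.134),
    ("e2-standard-8", 2, 0.268),
]

total = 0.0
for machine_type, count, hourly in node_pools:
    monthly = count * hourly * HOURS_PER_MONTH
    total += monthly
    print(f"{machine_type:15s} x{count}: ~${monthly:,.0f}/month")
print(f"{'total':15s}    ~${total:,.0f}/month")
```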

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Key Responsibilities:
1. Cloud Infrastructure Management: Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP). Implement best practices for GCP IAM, VPCs, Cloud Storage, onboarding of tools such as ClickHouse and Apache Superset, and other GCP services.
2. Kubernetes and Containerization: Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications. Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines: Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD. Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance: Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption. Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support: Work closely with development teams to containerize applications and ensure smooth deployment on GCP. Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization: Monitor and optimize GCP resource usage to ensure cost efficiency. Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications: Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise: Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools: Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build. Experience with containerization tools like Docker.
4. Kubernetes Expertise: In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets. Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting: Strong scripting skills in Python, Bash, or Go. Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging: Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking: Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills: Strong problem-solving and troubleshooting skills. Excellent communication and collaboration abilities. Ability to work in an agile, fast-paced environment.

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 11 Lacs

Noida, Bengaluru

Work from Office

Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.
About the Role: As a Senior DevOps Engineer, you will be responsible for driving continuous integration and deployment (CI/CD), managing cloud-based infrastructure, and ensuring high availability, scalability, and security of our applications. You will work with automation tools, observability solutions, and AI-driven enhancements to optimize DevOps workflows and improve operational efficiency. This role requires deep expertise in cloud cost optimization, infrastructure as code (IaC), monitoring, and zero-downtime deployments.
Expectations: 3-5 years of overall experience, with at least 3 years in DevOps. Cloud expertise: AWS (preferred) / Azure / Google Cloud. Infrastructure as Code (IaC): hands-on experience with Terraform for cloud resource provisioning. CI/CD & deployment pipelines: experience with Jenkins, Bamboo, GitLab CI/CD; expertise in zero-downtime deployment strategies; strong knowledge of GitOps practices using ArgoCD. Containerization & orchestration: experience with Docker and Kubernetes (EKS/GKE/AKS). Monitoring & observability: strong experience with the ELK Stack (Elasticsearch, Logstash, Kibana) for logging and observability; experience with Grafana and Prometheus for system monitoring. Messaging & caching systems: experience managing Kafka (production-level clusters); hands-on expertise in Redis (production-level clusters). Scripting & automation: Python scripting for automation (preferred). AI & automation tools: experience leveraging AI tools (ChatGPT, GitHub Copilot, etc.) to automate routine DevOps tasks and improve productivity; ability to integrate AI-driven solutions into DevOps workflows. Cloud cost optimization: strong understanding of cost-saving best practices for cloud infrastructure. Ability to mentor junior engineers and collaborate with cross-functional teams.
Superpowers/Skills That Will Help You Succeed in This Role: High level of drive, initiative, and self-motivation. Strong problem-solving skills with a growth mindset. Excellent communication and stakeholder management. Passion for automation and AI-driven efficiencies. Willingness to experiment, innovate, and continuously improve.
Why Join Us: A collaborative, output-driven environment that fosters innovation. Opportunities to work on large-scale, high-impact projects. A culture that values technical excellence, automation, and efficiency. Respect that is earned from peers and leadership based on contributions and impact.
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
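For illustration, a minimal sketch of a production Redis check of the sort this role mentions, assuming the standard redis-py client; the host, port, and alert threshold are placeholders:

```python
"""Cache hit-rate check for a Redis node using the redis-py client.
Host, port, and the alert threshold are placeholders."""
import redis


def cache_hit_rate(host: str = "localhost", port: int = 6379) -> float:
    client = redis.Redis(host=host, port=port)
    stats = client.info(section="stats")
    hits = stats.get("keyspace_hits", 0)
    misses = stats.get("keyspace_misses", 0)
    return hits / (hits + misses) if (hits + misses) else 0.0


if __name__ == "__main__":
    rate = cache_hit_rate()
    print(f"cache hit rate: {rate:.1%}")
    if rate < 0.8:  # assumed alert threshold
        print("hit rate below 80% - consider reviewing keys/TTLs or sizing")
```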

Posted 1 month ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Pune

Work from Office

Company Overview: With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200+ needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.
UKG is a leader in the HCM space and is at the forefront of artificial intelligence innovation, dedicated to developing cutting-edge generative AI solutions that transform the HR / HCM industry and enhance user experiences. We are seeking talented and motivated AI engineers to join our dynamic team and contribute to the development of next-generation AI/GenAI based products and solutions. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.
Responsibilities:
Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
GenAI Product Development: Participate in the entire AI development lifecycle, including data collection, preprocessing, model training, evaluation, and deployment. Assist in researching and experimenting with state-of-the-art generative AI techniques to improve model performance and capabilities.
Design and Architecture: Participate in design reviews with peers and stakeholders.
Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices like coding guidelines.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Triage defects or customer-reported issues, and debug and resolve them in a timely and efficient manner.
Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
DevOps Model: Understand working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.
Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. 2+ years of professional software development experience. Proficiency as a developer using Python, FastAPI, PyTest, Celery, and other Python frameworks. Experience with software development practices and design patterns. Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA. Basic understanding of cloud technologies and DevOps principles. Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.
Preferred Qualifications: Experience with object-oriented programming, concurrency, design patterns, and REST APIs. Experience with CI/CD tooling such as Terraform and GitHub Actions. High-level familiarity with AI/ML, GenAI, and MLOps concepts. Familiarity with frameworks like LangChain and LangGraph. Experience with SQL and NoSQL databases such as MongoDB, MSSQL, or Postgres. Experience with testing tools such as PyTest, PyMock, xUnit, mocking frameworks, etc. Experience with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, Dataflow, and Kubeflow. Experience with Docker and Kubernetes. Experience with Java and Scala a plus.
Where we're going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!
Disability Accommodation: UKGCareers@ukg.com
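For illustration, a tiny PyTest sketch of the unit-test layer of the test automation pyramid the posting references; the leave-accrual function under test is hypothetical, not a UKG API:

```python
"""Unit-test layer of the test automation pyramid, written with PyTest.
The leave-accrual function under test is a hypothetical example."""
import pytest


def accrue_leave(days_worked: int, rate_per_day: float = 0.0575) -> float:
    if days_worked < 0:
        raise ValueError("days_worked must be non-negative")
    return round(days_worked * rate_per_day, 2)


def test_accrual_is_proportional():
    assert accrue_leave(20) == pytest.approx(1.15)


def test_negative_days_rejected():
    with pytest.raises(ValueError):
        accrue_leave(-1)
```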

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Pune

Work from Office

Company Overview: With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200+ needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.
UKG is a leader in the HCM space and is at the forefront of artificial intelligence innovation, dedicated to developing cutting-edge generative AI solutions that transform the HR / HCM industry and enhance user experiences. We are seeking talented and motivated AI engineers to join our dynamic team and contribute to the development of next-generation AI/GenAI based products and solutions. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Lead Software Engineer, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.
Responsibilities:
Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
GenAI Product Development: Participate in the entire AI development lifecycle, including data collection, preprocessing, model training, evaluation, and deployment. Assist in researching and experimenting with state-of-the-art generative AI techniques to improve model performance and capabilities.
Design and Architecture: Participate in design reviews with peers and stakeholders.
Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices like coding guidelines.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Triage defects or customer-reported issues, and debug and resolve them in a timely and efficient manner.
Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
DevOps Model: Understand working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.
Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. 4+ years of professional software development experience. Proficiency as a developer using Python, FastAPI, PyTest, Celery, and other Python frameworks. Experience with software development practices and design patterns. Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA. Basic understanding of cloud technologies and DevOps principles. Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.
Preferred Qualifications: Experience with object-oriented programming, concurrency, design patterns, and REST APIs. Experience with CI/CD tooling such as Terraform and GitHub Actions. High-level familiarity with AI/ML, GenAI, and MLOps concepts. Familiarity with frameworks like LangChain and LangGraph. Experience with SQL and NoSQL databases such as MongoDB, MSSQL, or Postgres. Experience with testing tools such as PyTest, PyMock, xUnit, mocking frameworks, etc. Experience with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, Dataflow, and Kubeflow. Experience with Docker and Kubernetes. Experience with Java and Scala a plus.
Where we're going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!
Disability Accommodation: UKGCareers@ukg.com

Posted 1 month ago

Apply

3.0 - 8.0 years

11 - 15 Lacs

Chennai

Work from Office

The System Integration Specialist is responsible for the integration and customisation of Nokia solutions and 3rd party platforms to meet customer requirements. Creates, deploys, and integrates solutions; enables rating, charging, and provisioning of new services based on the operator's processes on service integration and business management systems.
You have: 3+ years of relevant industry experience. Proficiency in Python, JavaScript, Java, and FTL. Strong Linux environment knowledge. Familiarity with CI/CD, Agile, and SCRUM methodologies. Expertise in cloud technologies (AWS, OCP, GKE, RedHat, Azure, Rancher). Solid understanding of Kubernetes and Docker/containerd. GPON domain knowledge and familiarity with Nokia FN products like SDAN and Home Wi-Fi.
It would be nice if you also have: Strong knowledge of Network Management Systems and Element Management Systems.
In this role, the specialist develops implementation plans, technical infrastructure documents, and test strategies, including detailed test cases for project execution. Works across multiple technology domains with intermediate to advanced expertise, or specializes deeply in a single area. Applies Systems Integration (SI) delivery processes effectively and contributes to end-to-end requirement analysis and feasibility assessments. Supports customer requirement gathering and feature specification, ensuring technical alignment and solution viability. Contributes to migration planning and execution, while documenting learnings and best practices in platforms like SharePoint, ShareNet, and internal forums. Operates independently, leveraging strong business knowledge and best practices to drive service and product improvements. Uses advanced analytical and problem-solving skills to address complex challenges and introduce innovative solutions. Provides professional guidance and mentoring, and may lead small-scale projects or teams, managing resources, task allocation, and daily operations.

Posted 1 month ago

Apply

6.0 - 9.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Project description: Developing cloud-based, state-of-the-art compliance archive products to archive and retain real-time communications data in line with internal and external regulatory requirements. The product is developed in-house in an agile development setup based on business requirements from multiple stakeholder parties. By employing continuous development and deployment principles, the team is aiming to transition from project to product management to support the bank with robust compliance archive solutions for the next decade. For a large global investment bank, we are looking for GCP-qualified cloud engineers to help with the FIC (fixed income and currencies) cloud migrations under Project Cirrus. Knowledge of Financial Services/FIC would be great, but the primary skills we need are in building, migrating, and deploying applications to GCP, Terraform module coding, Google infrastructure, cloud-native services such as GCE, GKE, CloudSQL/Postgres, logging and monitoring, etc., plus good written and spoken English, as we would like these engineers to help with knowledge transfer to our existing development and support teams. We would like to place people alongside the engineers they'll be working with in the bank. You should have extensive experience with Google Cloud Platform (GCP), Kubernetes, and Docker. The role involves working closely with our development and operations teams to ensure seamless integration and deployment of applications.
Responsibilities: Design, implement, and manage CI/CD pipelines on GCP. Automate infrastructure provisioning, configuration, and deployment using tools like Terraform and Ansible. Manage and optimize Kubernetes clusters for high availability and scalability. Containerize applications using Docker and manage container orchestration. Monitor system performance, troubleshoot issues, and ensure system reliability and security. Collaborate with development teams to ensure smooth and reliable operation of software and systems. Implement and manage logging, monitoring, and alerting solutions. Stay updated with the latest industry trends and best practices in DevOps and cloud technologies.
Skills (must have): 6 to 9 years of experience as a DevOps Engineer, with a minimum of 4 years of relevant experience in GCP. Bachelor's degree in Computer Science, Engineering, or a related field. Strong expertise in Kubernetes and Docker. Experience with infrastructure as code (IaC) tools such as Terraform and Ansible. Proficiency in scripting languages like Python, Bash, or Go. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Knowledge of networking, security, and database management. Excellent problem-solving skills and attention to detail.
Nice to have: Strong communication and collaboration skills.
Other Languages: English (C2 Proficient). Seniority: Senior.

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 19 Lacs

Pune

Work from Office

Job Title: Technical Specialist, GCP Developer
Location: Pune, India
Role Description: This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background, have good working experience in Spark and GCP technology, be hands-on and able to work independently with minimal technical/tool guidance, and be able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance.
Your key responsibilities: Design and discuss your own solutions for addressing user stories and tasks. Develop, unit-test, integrate, deploy, maintain, and improve software. Perform peer code review. Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meetings, sprint planning, and retrospectives. Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management). Collaborate with other team members to achieve the sprint objectives. Report progress and update Agile team management tools (JIRA/Confluence). Manage individual task priorities and deliverables. Take responsibility for the quality of the solutions you provide. Contribute to planning and continuous improvement activities and support the PO, ITAO, developers, and Scrum Master.
Your skills and experience: Engineer with good development experience on Google Cloud Platform for at least 4 years. Hands-on experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions. Experience in set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps. Create and maintain fully automated CI build processes and write build and deployment scripts. Experience with development platforms (OpenShift / Kubernetes / Docker) configuration and deployment using DevOps tools, e.g., Git, TeamCity, Maven, SONAR. Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow. Knowledge of working with APIs and microservices, integrating external and internal web services including SOAP, XML, REST, and JSON. Strong analytical skills. Proficient communication skills. Fluent in English (written/verbal). Ability to work in virtual teams and in matrixed organizations. Excellent team player. Open-minded and willing to learn business and technology. Keeps pace with technical innovation. Understands the relevant business area. Ability to share information and transfer knowledge and expertise to team members.
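For illustration, a minimal sketch of a BigQuery read using the google-cloud-bigquery client library this role works with; the project, dataset, table, and column names are placeholders, and application default credentials are assumed to be configured:

```python
"""Minimal BigQuery read using the google-cloud-bigquery client library.
Project, dataset, table, and column names are placeholders; application
default credentials are assumed to be configured."""
from google.cloud import bigquery


def daily_row_counts(table: str = "my_project.my_dataset.events") -> None:
    client = bigquery.Client()
    sql = f"""
        SELECT DATE(event_ts) AS day, COUNT(*) AS n
        FROM `{table}`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    for row in client.query(sql).result():
        print(f"{row.day}: {row.n} rows")


if __name__ == "__main__":
    daily_row_counts()
```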

Posted 1 month ago

Apply

3.0 - 7.0 years

8 - 13 Lacs

Pune

Work from Office

Job Title: Senior Full Stack Engineer
Corporate Title: Assistant Vice President
Location: Pune, India
Role Description: The Enterprise SRE Team in CB is responsible for making production better by boosting observability and strengthening reliability across Corporate Banking. The team actively works on building common platforms, reference architectures, and tools for production engineering teams to standardize processes across CB. We work in an agile environment with a focus on customer centricity and an outstanding user experience, with high reliability and flexibility of technical solutions in mind. With our platform we want to be an enabler for the highest quality cloud-based software solutions and processes at Deutsche Bank. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance.
Your key responsibilities: Work on the SLO Dashboard, an application owned by the CB SRE team, ensuring its design (a highly scalable and performant solution), development, and maintenance. Participate in requirement workshops, analyze requirements, perform technical design, and take ownership of the development process. Identify and implement appropriate tools to support engineering automation, including test automation and CI/CD pipelines. Understand technical needs, prioritize requirements, and manage technical debt based on stakeholder urgency. Collaborate with the UI/UX designer while being mindful of backend changes and their impact on architecture or endpoint modifications during discussions. Produce detailed design documents and guide junior developers to align with the priorities and deliverables of the SLO Dashboard.
Your skills and experience: Several years of relevant experience in software architecture, design, development, and engineering, ideally in the banking/financial services industry. Strong engineering, solution, and domain architecture background and up-to-date knowledge of software engineering topics such as microservices, streaming architectures, high performance, horizontal scaling, API design, GraphQL, REST services, database systems, UI frameworks, distributed caching (e.g., Apache Ignite, Hazelcast, Redis), enterprise integration patterns, and modern SDLC practices. Good experience working in GCP (cloud-based technologies) using GKE, CloudSQL (Postgres), Cloud Run, and Terraform. Good experience in DevOps using GitHub Actions for builds and Liquibase pipelines. Fluent in an application development stack such as Java/Spring Boot (3.0+), ReactJS, Python, JavaScript/TypeScript/NodeJS, and SQL (Postgres). Ability to work in a fast-paced environment with competing and alternating priorities and a constant focus on delivery, with strong interpersonal skills to manage relationships with a variety of partners and stakeholders, as well as facilitate group sessions.
AI Integration and Implementation (nice to have): Leverage AI tools like GitHub Copilot, Google Gemini, Llama, and other language models to optimize engineering analytics and workflows. Design and implement AI-driven dashboards and reporting tools for stakeholders. Apply AI tools to automate repetitive tasks, analyze complex engineering datasets, and derive trends and patterns.
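For illustration of the kind of arithmetic an SLO dashboard surfaces, a tiny sketch of error-budget tracking; the SLO target and request counts are sample values only:

```python
"""Error-budget arithmetic of the kind an SLO dashboard surfaces.
The SLO target and request counts are sample values."""


def error_budget_remaining(slo_target: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget still unspent for the current window."""
    allowed_failures = total_requests * (1 - slo_target)
    if allowed_failures == 0:
        return 0.0
    return 1.0 - (failed / allowed_failures)


# Example: 99.9% availability SLO over 1,000,000 requests with 400 failures.
remaining = error_budget_remaining(0.999, 1_000_000, 400)
print(f"error budget remaining: {remaining:.1%}")  # -> 60.0%
```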

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Pune

Work from Office

Job Title: GCP Data Engineer, AS
Location: Pune, India
Role Description: An Engineer is responsible for designing and developing entire engineering solutions to accomplish business goals. Key responsibilities of this role include ensuring that solutions are well architected, with maintainability and ease of testing built in from the outset, and that they can be integrated successfully into the end-to-end business process flow. They will have gained significant experience through multiple implementations and have begun to develop both depth and breadth in several engineering competencies. They have extensive knowledge of design and architectural patterns. They will provide engineering thought leadership within their teams and will play a role in mentoring and coaching less experienced engineers.
What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance.
Your key responsibilities: Design, develop, and maintain data pipelines using Python and SQL on GCP. Experience in Agile methodologies, ETL, ELT, data movement, and data processing. Work with Cloud Composer to manage and process batch data jobs efficiently. Develop and optimize complex SQL queries for data analysis, extraction, and transformation. Develop and deploy Google Cloud services using Terraform. Implement CI/CD pipelines using GitHub Actions. Consume and host REST APIs using Python. Monitor and troubleshoot data pipelines, resolving any issues in a timely manner. Ensure team collaboration using Jira, Confluence, and other tools. Ability to quickly learn new and existing technologies. Strong problem-solving skills. Write advanced SQL and Python scripts. Certification as a Google Cloud Professional Data Engineer is an added advantage.
Your skills and experience: 6+ years of IT experience as a hands-on technologist. Proficient in Python for data engineering. Proficient in SQL. Hands-on experience with GCP Cloud Composer, Dataflow, BigQuery, Cloud Functions, and Cloud Run; GKE is good to have. Hands-on experience in REST API hosting and consumption. Proficient in Terraform (HashiCorp). Experienced in GitHub and GitHub Actions. Experienced in CI/CD. Experience in automating ETL testing using Python and SQL. Good to have: Apigee. Good to have: Bitbucket.
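For illustration, a hedged sketch of the consume-a-REST-API step of such a pipeline: pull JSON from an endpoint and reshape it into rows ready for loading; the URL and field names are assumptions, not from the posting:

```python
"""Consume-a-REST-API step of a data pipeline: pull JSON from an endpoint and
reshape it into rows ready for loading. The URL and field names are assumed."""
import requests


def fetch_rows(url: str = "https://api.example.com/v1/trades") -> list[dict]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    return [
        {"trade_id": item["id"], "amount": float(item["amount"]), "currency": item["ccy"]}
        for item in payload.get("items", [])
    ]


if __name__ == "__main__":
    for row in fetch_rows():
        print(row)  # in a real pipeline these rows would be written to a warehouse
```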

Posted 1 month ago

Apply

7.0 - 9.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Strong hands-on experience in GCP. Relevant experience in Terraform. Should have experience in Kubernetes and DevOps. Total experience: 7-9 years. Location: Flexible. Rates: 170 K/M - 180 K/M. Design, implement, and maintain CI/CD pipelines using Jenkins, Codefresh, or GitHub Actions. Collaborate with cross-functional teams to identify areas for process improvement and implement changes using Agile methodologies. Troubleshoot complex issues related to infrastructure deployment and management on GCP and AWS platforms. Configure and manage networking components: VPCs, subnets, firewalls, load balancers, DNS, and VPNs. Manage IAM roles and policies in AWS and GCP to ensure secure and appropriate access to cloud resources. Handle user, group, and service account permissions, including provisioning, auditing, and troubleshooting access-related issues. Develop and maintain Terraform-based infrastructure and reusable modules in cloud environments. Develop and maintain automation scripts using Bash, Python or other relevant scripting languages. Create and maintain documented standard operating procedures (SOPs) and support guides. Required Skills: Strong understanding of subnetting, VPCs, firewalls and network fundamentals. Hands-on experience with at least one major provider (AWS, GCP). Proficiency in infrastructure-as-code (IaC) tools (Terraform preferred). Experience with Git-based workflows, version control and pull requests. Familiarity with CI/CD pipelines and tools (Codefresh preferred). Experience in a scripting language such as Bash, Python, or Groovy. Experience with containerization (Docker) and orchestration (Kubernetes - EKS/GKE). Experience with collaboration and ITSM tools like Jira, ServiceNow, and Confluence. Strong troubleshooting and communication skills. IAM Management: Ability to manage IAM roles, policies, and permissions for users, groups, and service accounts in AWS and GCP, including troubleshooting access issues and enforcing least-privilege principles. Education: UG - B.Tech/BE, BCA, or B.Sc in any specialization; PG - any postgraduate degree is acceptable. Experience: 3-8 years of relevant experience in DevOps, cloud infrastructure, and automation.
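To make the Terraform/CI-CD duties above concrete, below is a small, hedged Python sketch of the kind of pipeline step that runs terraform init, plan and apply non-interactively. The module directory is a hypothetical placeholder; a real pipeline would also handle remote state, workspaces, plan review and secrets.

```python
# Minimal CI-style wrapper around the Terraform CLI (illustrative only).
# The module path is a hypothetical placeholder; real pipelines add remote
# state, workspaces, plan approval gates and secret handling.
import subprocess


def run(cmd, cwd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)  # raise if Terraform exits non-zero


def deploy(module_dir="infra/gcp/network"):
    run(["terraform", "init", "-input=false"], cwd=module_dir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=module_dir)
    run(["terraform", "apply", "-input=false", "tfplan"], cwd=module_dir)


if __name__ == "__main__":
    deploy()
```

The same three commands are what Jenkins, Codefresh or GitHub Actions stages typically wrap, whatever the orchestrator.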

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Pune

Work from Office

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What you'll do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What experience you need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and GitHub) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Primary Location: IND-Pune-Equifax Analytics-PEC Function: Function - Tech Dev and Client Services Schedule: Full time
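Since the posting calls out Dataflow/Apache Beam experience as a differentiator, here is a minimal, hypothetical Beam pipeline using the Python SDK (the role itself centres on the Java stack) that counts events by type; the in-memory input and keys are purely illustrative.

```python
# Minimal Apache Beam pipeline (Python SDK) counting events by type.
# Input data and keys are illustrative; on Dataflow you would switch the runner
# and read from a real source such as Pub/Sub or GCS instead of in-memory data.
import apache_beam as beam

events = ["login", "login", "purchase", "login", "refund"]

with beam.Pipeline() as pipeline:   # DirectRunner by default
    (
        pipeline
        | "CreateEvents" >> beam.Create(events)
        | "PairWithOne" >> beam.Map(lambda event: (event, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```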

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 10 Lacs

Thiruvananthapuram

Work from Office

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What you'll do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What experience you need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and GitHub) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Primary Location: IND-Trivandrum-Equifax Analytics-PEC Function: Function - Tech Dev and Client Services Schedule: Full time
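BigQuery also appears in the "what could set you apart" list; as a rough illustration (project, dataset and table names are hypothetical, and credentials are assumed to come from the environment), a query through the official Python client looks like this.

```python
# Minimal BigQuery query with the official google-cloud-bigquery client.
# Project, dataset and table names are hypothetical; credentials are assumed
# to come from the environment (e.g. Application Default Credentials).
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT status, COUNT(*) AS request_count
    FROM `my_project.analytics.requests`
    GROUP BY status
    ORDER BY request_count DESC
"""

for row in client.query(query).result():
    print(row.status, row.request_count)
```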

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 10 Lacs

Thiruvananthapuram

Work from Office

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What you'll do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What experience you need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS, REST API 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and GitHub) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Primary Location: IND-Trivandrum-Equifax Analytics-PEC Function: Function - Tech Dev and Client Services Schedule: Full time
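This variant of the role adds REST API experience to the required skills. Purely as an illustration (the actual services here are Java/SpringBoot microservices), a minimal REST-style health endpoint can be sketched in Python with Flask; the route and payload fields are hypothetical.

```python
# Minimal REST endpoint sketch using Flask (illustrative only; the services
# in this role are Java/SpringBoot microservices).
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health", methods=["GET"])
def health():
    # A typical readiness-style payload; the fields are hypothetical.
    return jsonify(status="UP", service="example-service"), 200


if __name__ == "__main__":
    app.run(port=8080)
```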

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 10 Lacs

Thiruvananthapuram

Work from Office

What you'll do Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage sole project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What experience you need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and GitHub) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI Cloud Certification strongly preferred Primary Location: IND-Trivandrum-Equifax Analytics-PEC Function: Function - Tech Dev and Client Services Schedule: Full time
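Pub/Sub is another of the listed differentiators; as a hedged sketch (project and subscription IDs are placeholders), a streaming-pull subscriber with the official Python client looks roughly like this.

```python
# Minimal Pub/Sub streaming-pull subscriber (google-cloud-pubsub).
# Project and subscription IDs are hypothetical placeholders.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

project_id = "my-project"
subscription_id = "orders-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)


def callback(message):
    print("Received:", message.data)
    message.ack()  # acknowledge so the message is not redelivered


streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=30)  # listen for a short demo window
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # block until the shutdown completes
```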

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer Project Role Description: Act as software detectives, provide a dynamic service identifying and solving issues within multiple components of critical business systems. Must have skills: Google Kubernetes Engine Good to have skills: Kubernetes, Google Cloud Compute Services Minimum 3 year(s) of experience is required Educational Qualification: 15 years full time education Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE). Responsibilities: Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations. Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes. Knowledge of or experience with Kubernetes tools like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus. Proven track record in supporting and deploying various public cloud services. Experience in building or managing self-service platforms to boost developer productivity. Proficiency in using Infrastructure as Code (IaC) tools like Terraform. Skilled in diagnosing and resolving complex issues in automation and cloud environments. Advanced experience in architecting and managing highly available and high-performance multi-zonal or multi-regional systems. Strong understanding of infrastructure CI/CD pipelines and associated tools. Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions. Experience working in GKE and Edge/GDCE environments. Assist development teams in building and deploying microservices-based applications in public cloud environments. Technical Skillset: Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions. At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies. Over 3 years of experience with GCP IaC, particularly with Terraform, including writing and maintaining Terraform configurations and modules. Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE). Familiarity with CI/CD tools (e.g., GitHub) and processes. Certifications: GCP ACE certification is mandatory. CKA certification is highly desirable. HashiCorp Terraform certification is a significant plus. Qualification: 15 years full time education
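As a small illustration of day-to-day GKE support work (assuming the official kubernetes Python client and a kubeconfig already pointed at the cluster, e.g. via gcloud container clusters get-credentials), a script that lists pods that are not healthy might look like this.

```python
# List pods that are not in the Running/Succeeded phase across all namespaces.
# Assumes the official `kubernetes` Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()            # use in-cluster config instead when run inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```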

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Noida

Work from Office

About Aeris: The Internet of Things (IoT) will unlock trillions of dollars in value over the next 10 years as 50 billion devices are brought online. Aeris is at the forefront of this industry, building networks and applications that enable Fortune 500 clients like Chrysler, Honda and Bosch to fundamentally improve their businesses. Headquartered in Silicon Valley with offices in Bucharest, Chicago, London, Delhi, Bangalore, Helsinki, and Tokyo as well as other markets, we rank among the top ten cellular providers for the IoT globally, powering critical projects across energy, transportation, retail, healthcare and more. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot. We're profitable, and both our bottom line and our global reach are growing rapidly. We're playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace. A few things to know about us: We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and our individual selves last. We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition along the way. We walk the walk on diversity. We're a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more, and that's by design. We see diverse perspectives as a core competitive advantage. Integrity is essential. We believe in doing things well, and doing them right. Integrity is a core value here: you'll see it embodied in our staff, our management approach and our growing social impact work (we have a VP devoted to it). You'll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the company. We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative. Aeris is looking for DevOps engineers who are eager to learn new skills and help develop services using the latest technology. You should be passionate about building scalable and highly available infrastructure and services. You will work closely with Aeris development teams to deliver a leading connected vehicle platform to automotive OEMs. Job Description Develop automation tools and frameworks for CI/CD. Develop critical infrastructure components or systems and follow them through to production. Build and support public cloud based SaaS and PaaS services. Identify, build and improve tooling, processes, security and infrastructure that support Aeris cloud environments.
Identify, design, and develop automation solutions to create, manage and improve cloud infrastructure, builds and deployments. Lead from proof of concept to implementation for critical infrastructure components and new DevSecOps tools and solutions. Represent DevOps in design reviews and work cross-functionally with Engineering and Operations teams for operational readiness. Dive deep to resolve problems at their root, looking for failure patterns and driving resolution. Qualifications and Experience A Bachelor's degree in Engineering and around 10+ years of professional technology experience. Experience deploying and running enterprise-grade public cloud infrastructure, preferably with GCP. Hands-on automation with Terraform and Groovy, and experience with CI/CD. Hands-on experience in Linux/Unix environments and scripting languages (e.g. Shell, Perl, Python, JavaScript, Golang). Hands-on experience in two or more of the following areas: Databases (NoSQL/SQL): Hadoop, Cassandra, MySQL. Messaging system configuration and maintenance (Kafka+Zookeeper, MQTT, RabbitMQ). WAF, Cloud Armor, NGINX. Apache/Tomcat/JBoss based web applications and services (REST). Observability stacks (e.g. ELK, Grafana Labs). Hands-on experience with Kubernetes (GKE, AKS). Hands-on experience with Jenkins. GitOps experience is a plus. Experience working with large enterprise-grade SaaS products. Proven capability for critical thinking, problem solving and the patience to see hard problems through to the end. Qualities: Passionate about building highly available and reliable public cloud infrastructure. Take ownership, make commitments, and deliver on your commitments. Good communicator. Team player who collaborates across different teams including DevSecOps, software development, and security. Continuous improvement mindset. What is in it for you? You get to build the next leading-edge connected vehicle platform. The ability to collaborate with our highly skilled groups who work with cutting-edge technologies. High visibility as you support the systems that drive our public-facing services. Career growth opportunities. Aeris walks the walk on diversity. We're a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences, and that's by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
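Given the Kafka and observability stack mentioned above, here is a hedged Python sketch of a basic consumer using the kafka-python library; the topic name, consumer group and broker address are hypothetical placeholders.

```python
# Minimal Kafka consumer using the kafka-python library (illustrative only).
# Topic name, consumer group and broker address are hypothetical placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "device-telemetry",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="telemetry-monitor",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

for message in consumer:
    # In a real service this would feed metrics/alerting rather than print.
    print(f"{message.topic}[{message.partition}]@{message.offset}: {message.value}")
```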

Posted 1 month ago

Apply