2.0 - 6.0 years
0 Lacs
andhra pradesh
On-site
You are a talented Full Stack Developer with a solid background in Laravel, AWS, and DevOps. Your role involves designing, developing, deploying, and maintaining cutting-edge web applications with a focus on performance, scalability, and reliability, spanning Laravel development, AWS management, and DevOps tasks to ensure seamless CI/CD operations.

Laravel Development:
- Design, develop, and maintain web applications using Laravel.
- Optimize applications for speed and scalability, and integrate back-end services.
- Troubleshoot and debug existing applications.
- Collaborate with front-end developers for seamless integration.

AWS Management:
- Manage and deploy web applications on AWS infrastructure, utilizing various AWS services.
- Implement backup, recovery, and security policies, and optimize services for cost and performance.
- Work with Infrastructure as Code tools such as AWS CloudFormation or Terraform.

DevOps:
- Design and implement CI/CD pipelines, and maintain infrastructure automation.
- Monitor server and application performance, and implement logging and monitoring solutions.
- Develop configuration management tooling, and collaborate with development and QA teams on code deployments and releases.

Requirements for this role include a Bachelor's degree in Computer Science or a related field, 4+ years of Laravel experience, 2+ years of AWS experience, 4+ years of DevOps experience, proficiency in version control systems, strong knowledge of database systems, experience with containerization tools, familiarity with agile methodologies, problem-solving skills, detail orientation, and the ability to work in a fast-paced environment. Preferred qualifications include AWS certifications, experience with serverless architecture and microservices, knowledge of front-end technologies, familiarity with monitoring tools, and an understanding of security best practices.

Soft skills such as communication, collaboration, independence, teamwork, and analytical and problem-solving abilities are also essential. This position offers a competitive salary, opportunities to work with the latest technologies, professional development opportunities, health insurance, paid time off, and other benefits in a collaborative and innovative work culture.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Java Microservices Lead (4+ years of experience, based in Pune, immediate joiners preferred), you will play a crucial role in the end-to-end architecture, development, and deployment of enterprise Java microservices-based applications. Your primary responsibilities will include collaborating with cross-functional teams to architect, design, and develop solutions using core Java, Spring Boot, Spring Cloud, and AWS API Gateway. You will also lead and mentor a team of developers, participate in the entire software development life cycle, and drive the adoption of microservices patterns and API design.

Your expertise in Java, Spring Boot, AWS API Gateway, and microservices architecture will be essential in delivering high-quality code that follows best practices and coding standards. Hands-on experience with containerization technologies like Docker, orchestration platforms such as Kubernetes, and deployment on cloud services like AWS, Azure, or Google Cloud will be highly valuable. Additionally, familiarity with relational and NoSQL databases, Agile methodologies, version control systems, and software engineering best practices will contribute to the success of the projects.

Strong problem-solving and analytical skills, attention to detail, and the ability to work both independently and collaboratively in a fast-paced environment will be key assets in troubleshooting, debugging, and resolving issues across distributed systems. Excellent communication and interpersonal skills will help you foster a culture of collaboration, continuous improvement, and technical excellence within the team, and you will be encouraged to stay up to date with industry trends and introduce innovative solutions to improve application development.

In summary, as a Java Microservices Lead, you will be at the forefront of designing and developing scalable, cloud-native solutions, optimizing application performance and scalability, and establishing CI/CD pipelines. Your technical skills in Java, Spring Boot, microservices architecture, cloud platforms, databases, CI/CD, DevOps, and monitoring tools will be crucial to the success of the projects and the team.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
Qualcomm India Private Limited is looking for a highly skilled and experienced MLOps Engineer to join their team and contribute to the development and maintenance of their ML platform, both on premises and on the AWS Cloud. As an MLOps Engineer, you will architect, deploy, and optimize the ML and data platform that supports training of machine learning models on NVIDIA DGX clusters and the Kubernetes platform. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial for ensuring the smooth operation and scalability of the ML infrastructure, and you will collaborate with cross-functional teams including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps and DevOps, along with your knowledge of GPU clusters, will be vital in enabling efficient training and deployment of ML models.

Your responsibilities will include:
- Architecting, developing, and maintaining the ML platform, and designing and implementing scalable infrastructure solutions for NVIDIA clusters on premises and on the AWS Cloud.
- Collaborating with data scientists and software engineers to define requirements.
- Optimizing platform performance and scalability, and monitoring system performance.
- Implementing CI/CD pipelines, and maintaining a monitoring stack using Prometheus and Grafana.
- Managing AWS services, and implementing logging and monitoring solutions.
- Staying updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proposing enhancements to the ML platform.

Qualcomm is looking for candidates with a Bachelor's or Master's degree in Computer Science, Engineering, or a related field; proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters; strong expertise in configuring and optimizing NVIDIA DGX clusters; proficiency with the Kubernetes platform and related technologies; solid programming skills in languages such as Python and Go; experience with relevant ML frameworks; an in-depth understanding of distributed computing and GPU acceleration techniques; familiarity with containerization technologies and orchestration tools; experience with CI/CD pipelines and automation tools for ML workflows; experience with AWS services and monitoring tools; strong problem-solving skills; and excellent communication and collaboration skills.

Qualcomm is an equal opportunity employer and is committed to providing reasonable accommodations to support individuals with disabilities during the hiring process. If you are interested in this role or require more information, please contact Qualcomm Careers.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
This role centers on supporting and addressing stakeholder needs within tight timelines, with a particular focus on modernizing and migrating to OCI. The Operations team is crucial in reducing engineering overhead by managing incoming issues, allowing engineers to concentrate on roadmap priorities. Handling over 2,000 tickets quarterly, the team ensures a seamless experience for Dev Tools consumers and prevents engineering teams from being overwhelmed by ticket overload. With team members shifting to support key CP initiatives, and with recent departures, there is increased pressure on the team; to uphold service quality and achieve FY2026 goals, the addition of a full-time, permanent team member is recommended. This hire is vital to providing timely support and ensuring the team can efficiently meet stakeholder demands.

Key responsibilities for this position include designing, building, and maintaining scalable infrastructure; developing automation tools for operational efficiency; monitoring system performance; collaborating with other teams on process improvement; participating in on-call rotations; and ensuring system security and compliance. The ideal candidate holds a Bachelor's degree in Computer Science, Engineering, or a related field; is proficient in a range of development tools; has hands-on experience with cloud platforms and container orchestration tools; understands networking and Linux administration; is familiar with CI/CD pipelines and monitoring tools; and exhibits strong problem-solving skills.

As a world leader in cloud solutions, Oracle is committed to leveraging technology to address current challenges and to fostering innovation through an inclusive and diverse workforce. The company offers competitive benefits and flexible medical and retirement options, and encourages community involvement through volunteer programs. If you require accessibility assistance or accommodation for a disability during the employment process, please reach out via email at accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
We are looking for an experienced DevOps Architect to spearhead the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate has in-depth expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across various cloud environments. This role requires strong leadership skills and the ability to mentor team members effectively.

Your responsibilities will include leading and overseeing the DevOps team to ensure the reliability of infrastructure and automated deployment processes. You will design, implement, and maintain highly available, scalable, and secure cloud infrastructure on platforms such as AWS, Azure, and GCP. Developing and optimizing CI/CD pipelines for multiple applications and environments will be a key focus, along with driving Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Monitoring, logging, and alerting solutions will fall under your purview to ensure system health and performance. Collaboration with Development, QA, and Security teams to integrate DevOps best practices throughout the SDLC is essential. You will also lead incident management and root cause analysis for production issues, ensuring robust security practices for infrastructure and pipelines. Guiding and mentoring team members to foster a culture of continuous improvement and technical excellence will be crucial, and you will evaluate and recommend new tools, technologies, and processes to enhance operational efficiency.

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field; Master's degree preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA).
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience in implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Conditions:
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: A light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.

Values: Our values at AOT guide how we work, collaborate, and grow as a team. Every role is expected to embody and promote values such as innovation, integrity, ownership, agility, collaboration, and empowerment.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You have 5+ years of overall experience in Cloud Operations, with a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and at least 3 years of experience in Kubernetes administration. A GCP Certified Professional certification is mandatory.

In this role, you will be responsible for managing and monitoring GCP infrastructure resources to ensure optimal performance, availability, and security. You will administer Kubernetes clusters, handling deployment, scaling, upgrades, patching, and troubleshooting, and you will implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar. Your key responsibilities will include responding to incidents, performing root cause analysis, and resolving issues within SLAs. Configuring logging, monitoring, and alerting solutions across GCP and Kubernetes environments will also be part of your duties, as will supporting CI/CD pipelines, integrating Kubernetes deployments with DevOps processes, and maintaining detailed documentation of processes, configurations, and runbooks. Collaboration with Development, Security, and Architecture teams to ensure compliance and best practices is essential, and you will participate in an on-call rotation and respond promptly to critical alerts.

Required skills and qualifications:
- GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent) with a strong working knowledge of GCP services such as Compute Engine, GKE, Cloud Storage, IAM, VPC, and Cloud Monitoring.
- Solid experience in Kubernetes cluster administration, and proficiency with Infrastructure as Code tools like Terraform.
- Knowledge of containerization concepts and tools like Docker.
- Experience in monitoring and observability with tools like Prometheus, Grafana, and Stackdriver.
- Familiarity with incident management and ITIL processes.
- Ability to work in 24x7 operations with rotating shifts, and strong troubleshooting and problem-solving skills.

Preferred (nice-to-have) skills include experience supporting multi-cloud environments; scripting skills in Python, Bash, or Go; exposure to other cloud platforms like AWS and Azure; and familiarity with security controls and compliance frameworks.
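The alerting duties described above boil down to evaluating a signal against a threshold over a time window. A minimal sketch, similar in spirit to a Prometheus-style error-rate alerting rule but with entirely illustrative names and thresholds:

```python
from collections import deque

# Minimal sketch of an error-rate alert check: fire when the rolling
# error rate over the last N scrape intervals exceeds a threshold.
# Class name, window, and threshold are illustrative assumptions.
class ErrorRateAlert:
    def __init__(self, window: int = 5, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # each sample: (errors, total)
        self.threshold = threshold

    def observe(self, errors: int, total: int) -> bool:
        """Record one interval; return True if the alert should fire."""
        self.samples.append((errors, total))
        errs = sum(e for e, _ in self.samples)
        reqs = sum(t for _, t in self.samples)
        return reqs > 0 and errs / reqs > self.threshold

alert = ErrorRateAlert(window=3, threshold=0.05)
print(alert.observe(1, 100))   # 1% error rate -> False
print(alert.observe(20, 100))  # (1+20)/200 = 10.5% -> True
```

In practice this evaluation would live in the monitoring stack (e.g. a Prometheus alerting rule) rather than application code; the sketch only illustrates the windowed-threshold idea.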
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You will be taking on the role of Release Manager in a hybrid position based in Pune. As a Release Manager, your primary responsibilities will revolve around planning and scheduling releases, including developing and managing release schedules, automating deployments, and implementing release monitoring processes.

Another key aspect of your role will involve managing application infrastructure. You will be responsible for planning and setting up infrastructure on platforms such as on-premises, Azure, or AWS, and you will monitor application infrastructure using tools like Grafana, Dynatrace, or similar, specifically for technologies like OpenShift, Kubernetes, and containers.

Deployment and support tasks will also fall within your purview: you will oversee and validate deployments, address any post-release issues, and coordinate hotfixes as needed. Furthermore, you will play a crucial role in risk and quality management to ensure smooth and efficient releases.

To excel in this role, you should possess at least 6+ years of relevant experience. Proficiency in Azure DevOps and CI/CD pipelines is essential, along with expertise in Docker, Kubernetes, and OpenShift. Familiarity with monitoring tools such as Dynatrace and Grafana will also be advantageous. If you have a proactive mindset, strong attention to detail, and a passion for ensuring high-quality releases, we encourage you to apply for this position.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
Join a dynamic team shaping the tech backbone of our operations, where your expertise fuels seamless system functionality and innovation. As a Technology Support II team member in Consumer and Community Banking, you will play a vital role in ensuring the operational stability, availability, and performance of our production application flows. Your efforts in troubleshooting, maintaining, identifying, escalating, and resolving production service interruptions for all internally and externally developed systems support a seamless user experience and a culture of continuous improvement.

You will be responsible for analyzing and troubleshooting production application flows to ensure end-to-end application or infrastructure service delivery supporting the business operations of the firm. Your role will involve improving operational stability and availability through participation in problem management, monitoring production environments for anomalies, and addressing issues using standard observability tools. Additionally, you will assist in escalating and communicating issues and solutions to business and technology stakeholders, identify trends, and help manage incidents, problems, and changes in support of full-stack technology systems, applications, and infrastructure.

The CC Rewards batch involves Loyalty Processing Engine (LPE) batches, comprising an overnight Control-M driven batch responsible for loading data related to loyalty processing and reporting for Chase Card Services. The Chase Loyalty Services batch team primarily supports the Loyalty Rewards Redemption batch and collaborates with other application teams to support JPMC's Rewards Ecosystem. CC Rewards also includes the Relationship Management System (RelMS), an overnight Control-M driven batch managing the transmission and tracking of partner information to and from partners. It involves sending and receiving files, allowing centralized transmission, tracking, and reconciliation of file exchanges between Chase and its partners, with file-level and record-level reconciliation.

Required qualifications, capabilities, and skills:
- 2+ years of experience or equivalent expertise troubleshooting, resolving, and maintaining information technology services
- Knowledge of applications or infrastructure in a large-scale technology environment, on premises or in the public cloud
- Exposure to observability and monitoring tools and techniques
- Familiarity with processes in scope of the Information Technology Infrastructure Library (ITIL) framework
- Proficiency in one or more general-purpose programming languages: UNIX/Perl/Python scripting
- Experience with the Control-M/Autosys/CA Unicenter schedulers
- Strong monitoring, analytical, and escalation skills, and strong debugging and troubleshooting skills, with hands-on experience in monitoring tools such as Splunk, Geneos, Grafana, and Dynatrace
- Understanding of cloud architecture (AWS Solution Architect / DevOps Architect)
- Expertise in troubleshooting and resolving database performance bottlenecks
- Strong Sybase/Oracle PL/SQL coding and debugging skills
- Experience with automation tools
- Working proficiency in production management triaging and analysis tools

Preferred qualifications, capabilities, and skills:
- Knowledge of one or more general-purpose programming languages or automation scripting
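The record-level reconciliation mentioned above can be sketched as a keyed comparison between the file sent to a partner and the file received back. The record layout (an id mapped to an amount) and all names below are illustrative assumptions, not the actual RelMS formats:

```python
# Sketch of record-level reconciliation between a sent file and the
# corresponding file received from a partner. Records are keyed by id;
# every discrepancy is bucketed. Field names are hypothetical.
def reconcile(sent: dict[str, int], received: dict[str, int]) -> dict[str, list[str]]:
    """Compare records keyed by id and report discrepancies."""
    return {
        "missing": sorted(k for k in sent if k not in received),        # sent but never acknowledged
        "unexpected": sorted(k for k in received if k not in sent),     # received but never sent
        "mismatched": sorted(
            k for k in sent if k in received and sent[k] != received[k] # values disagree
        ),
    }

sent = {"r1": 100, "r2": 250, "r3": 75}
received = {"r1": 100, "r2": 999, "r4": 10}
print(reconcile(sent, received))
# {'missing': ['r3'], 'unexpected': ['r4'], 'mismatched': ['r2']}
```

File-level reconciliation would sit one layer above this, comparing counts and checksums per file before drilling into individual records.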
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
andhra pradesh
On-site
The role of Technical Architect for the IoT Platform requires a highly skilled individual with over 10 years of experience, expertise in Java Spring Boot, React.js, and IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will be responsible for designing scalable, secure, and high-performance IoT solutions. Your role will involve leading full-stack teams and collaborating with product, infrastructure, and data teams to ensure the successful implementation of IoT projects.

Architecture and design: implementing scalable and secure IoT platform architecture, defining and maintaining architecture blueprints and technical documentation, leading technical decision-making, and ensuring adherence to best practices and coding standards. You will architect microservices-based solutions using Spring Boot, integrate them with React-based front ends, and define data flows, event-processing pipelines, and device communication protocols.

IoT domain expertise: architecting solutions for real-time sensor data ingestion, processing, and storage; working closely with hardware and firmware teams on device-to-cloud communication; supporting multi-tenant, multi-protocol device integration; and guiding the design of edge computing, telemetry, alerting, and digital-twin models.

DevOps and infrastructure: defining CI/CD pipelines, managing containerization and orchestration, driving infrastructure automation, ensuring platform monitoring, logging, and observability, and enabling auto-scaling, load balancing, and zero-downtime deployments.

Leadership: collaborating with product managers and business stakeholders, mentoring and leading a team of developers and engineers, conducting code and architecture reviews, setting goals and targets, organizing features and sprints, and providing coaching and professional development to team members.

Your technical skills and experience should include backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is also required.

Other must-have qualifications include hands-on IoT project experience, designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team-leadership skills. Experience with edge-computing frameworks, AI/ML model integration, industrial protocols, and digital-twin concepts, as well as relevant certifications in AWS/GCP, Kubernetes, or Spring, would be beneficial. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is also required.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You will be working on low-latency trading middleware for Solana, focusing on a high-performance transaction-broadcasting engine used by top trading firms and validators. The current system handles 2M+ transactions per day, but only 0.01% of these transactions are successful, a challenge that you will be tasked with solving.

Your main mission will be to understand and optimize the transaction pipeline: investigating why transactions fail or are slow to process, identifying patterns in success or failure across geographies and clients, and exploring ways to push infrastructure latency down to the microsecond level. You will work with ClickHouse for raw request logs, Postgres for transaction outcomes and metadata, Grafana and server logs for validator and router performance, and our geo-distributed validator and router stack.

Your responsibilities will include analyzing and improving user-flow landing rates, optimizing performance across endpoints, network paths, and validators, reducing infrastructure costs through better orchestration and utilization, reverse-engineering competitors and building benchmarks, and suggesting innovative strategies to monetize traffic, bundles, and client insights.
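The landing-rate analysis described above starts as a simple aggregation over transaction outcomes. A minimal sketch, where the (region, landed) tuples are hypothetical stand-ins for rows queried from Postgres or ClickHouse:

```python
from collections import defaultdict

# Sketch of a landing-rate breakdown by region. The (region, landed)
# tuples stand in for rows of transaction outcomes; the field names are
# illustrative, not the actual schema.
def landing_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    landed = defaultdict(int)
    total = defaultdict(int)
    for region, ok in outcomes:
        total[region] += 1
        landed[region] += ok  # bool counts as 0/1
    return {r: landed[r] / total[r] for r in total}

outcomes = [
    ("eu-west", True), ("eu-west", False),
    ("us-east", True), ("us-east", True), ("us-east", False),
]
print(landing_rates(outcomes))
# {'eu-west': 0.5, 'us-east': 0.6666666666666666}
```

The same grouping extended to client, endpoint, or validator dimensions is what surfaces the success/failure patterns the role is asked to find.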
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining Introlligent, a client-centric global software development company that specializes in delivering top-tier software development, web applications, IT professional services, database management, business intelligence, and infrastructure solutions. With a significant presence across North America, Singapore, the UAE, China, and India, Introlligent offers unmatched scalability, project efficiencies, and cost benefits through its proven Global Delivery Model. Established in 2001, Introlligent has earned a strong reputation for excellence in providing offshore software development services across industries including manufacturing, media and entertainment, banking and finance, healthcare, insurance, and real estate. Introlligent's client base includes Fortune 500 companies, and the company specializes in solutions such as offshore and onsite software development, web application development, product development, recruitment solutions, web-enablement design, .NET development, PHP development, and more.

Your role at Introlligent will involve the following key responsibilities:

DevOps:
- Managing Linux-based systems, including installation, configuration, and maintenance.
- Monitoring, troubleshooting, and resolving system issues, and implementing updates, patches, and configuration changes.
- Coordinating cross-functional teams to solve system issues.
- Containerizing applications using Docker and managing their lifecycle.
- Setting up monitoring and observability tools like Prometheus and Grafana.
- Troubleshooting and resolving issues related to application infrastructure.
- Basic Python coding experience, and knowledge of Ansible automation and configuration tools.
- Familiarity with various cloud platforms (AWS, Google Cloud).

Networking:
- A good understanding of IP addressing, subnetting, DNS, DHCP, routing, and switching, and troubleshooting common network connectivity issues.
- Network troubleshooting techniques and creative problem-solving.
- Expertise in LAN, WAN, MPLS, and Internet technologies; experience in a data center environment is preferred.
- Exposure to Nginx.
- Setting up and troubleshooting site-to-site and client VPNs for secure remote access.
- Documenting network configurations, changes, and procedures.
- Excellent verbal and written communication skills.

If you have 2-5 years of relevant experience and are looking for a full-time opportunity in Bengaluru with a hybrid work model, this position at Introlligent could be the perfect fit for you. Join us in our commitment to delivering excellence in software development and IT solutions to our global clients.
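The IP-addressing and subnetting fundamentals the networking section asks for can be illustrated with Python's standard `ipaddress` module (the example network is arbitrary):

```python
import ipaddress

# Basic subnetting with the stdlib ipaddress module.
# 192.168.0.0/24 is an arbitrary example network.
net = ipaddress.ip_network("192.168.0.0/24")
print(net.num_addresses)  # 256 addresses in a /24
print(net.netmask)        # 255.255.255.0

# Split the /24 into four /26 subnets of 64 addresses each.
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['192.168.0.0/26', '192.168.0.64/26', '192.168.0.128/26', '192.168.0.192/26']

# Membership check: which subnet holds a given host?
host = ipaddress.ip_address("192.168.0.130")
print([str(s) for s in subnets if host in s])  # ['192.168.0.128/26']
```

The same module handles supernetting, address arithmetic, and IPv6, which makes it handy for quick checks before touching router or DHCP configuration.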
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Lead Engineer, DevOps at Toyota Connected India, you will work in a collaborative and fast-paced environment focused on creating infotainment solutions on embedded and cloud platforms. You will be part of a team that values continual improvement, innovation, and delivering exceptional value to customers.

Your role will be hands-on with cloud platforms like AWS and Google Cloud Platform, using containerization and Kubernetes for container orchestration, and working with infrastructure automation and configuration management tools such as Terraform, CloudFormation, and Ansible. You will be expected to have strong proficiency in scripting languages like Python, Bash, or Go; experience with monitoring and logging solutions including Prometheus, Grafana, the ELK Stack, or Datadog; and knowledge of networking concepts, security best practices, and infrastructure monitoring. Additionally, your responsibilities will include working with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.

At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a flexible dress code policy. You will have the opportunity to contribute to the development of products that enhance the safety and convenience of millions of customers, all while working in a cool new office space with other great benefits.

Toyota Connected's core values are EPIC: Empathetic, Passionate, Innovative, and Collaborative. You will be encouraged to make decisions empathetically, strive to build something great, experiment with innovative ideas, and work collaboratively with your teammates to achieve success. Join us at Toyota Connected to be part of a team that is reimagining mobility for today and the future!
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Engineer, IT Data at American Airlines, you will be part of a diverse and high-performing team dedicated to technical excellence. Your main focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will be working in refers to the area within Information Technology that focuses on managing and leveraging data as a strategic asset. This includes data management, storage, integration, and governance, leaning into Machine Learning, AI, Data Science, and Business Intelligence. In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc., as well as traditional data warehouse tools. Your responsibilities will involve multiple aspects of the development lifecycle including design, cloud engineering, ingestion, preparation, data modeling, testing, CICD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. You will provide technical leadership, collaborate within a team environment, and work independently. Additionally, you will be part of a DevOps team that completely owns and supports the product, implementing batch and streaming data pipelines using cloud technologies. As an essential member of the team, you will lead the development of coding standards, best practices, privacy, and security guidelines. You will also mentor others on technical and domain skills to create multi-functional teams. 
Your success in this role will require a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. To excel in this position, you should have at least 3 years of software solution development experience using agile and DevOps, operating in a product model, as well as 3+ years of data analytics experience using SQL and cloud development and data lake experience, preferably with Microsoft Azure. Preferred qualifications include 5+ years of software solution development experience, 5+ years of data analytics experience using SQL, 3+ years of full-stack development experience, and familiarity with Azure technologies. Additionally, skills, licenses, and certifications that support success in this role include expertise with the Azure technology stack, practical direction within Azure native cloud services, Azure Development Track certification, Spark certification, and a combination of development, administration, and support experience with tools/platforms such as scripting (Python, Spark, Unix, SQL), data platforms (Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, Snowflake), Azure cloud technologies, CI/CD tools (GitHub, Jenkins, Azure DevOps, Terraform), the BI analytics tool stack (Cognos, Tableau, Power BI, Alteryx, Denodo, Grafana), and data governance and privacy tools (Alation, Monte Carlo, Informatica, BigID). Join us at American Airlines, where you can explore a world of possibilities, travel the world, grow your expertise, and become the best version of yourself while contributing to the transformation of technology delivery for our customers and team members worldwide.
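The batch side of a pipeline like the one this role describes boils down to extract-transform-load stages. As an illustrative sketch only (the function names, table, and sample data are invented here, and SQLite stands in for the real serving store; an actual implementation would use Azure Data Factory or Databricks), the shape looks like:

```python
import csv
import io
import sqlite3

def extract(raw_csv):
    """Parse raw CSV rows (stand-in for pulling files from a landing zone)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Basic cleansing: drop rows missing the key, cast types."""
    cleaned = []
    for row in rows:
        if not row.get("flight_id"):
            continue  # a real pipeline would quarantine bad rows instead
        cleaned.append((row["flight_id"], float(row["delay_minutes"])))
    return cleaned

def load(conn, rows):
    """Write cleansed rows to the serving store (in-memory SQLite for the demo)."""
    conn.execute("CREATE TABLE IF NOT EXISTS delays (flight_id TEXT, delay_minutes REAL)")
    conn.executemany("INSERT INTO delays VALUES (?, ?)", rows)
    conn.commit()

raw = "flight_id,delay_minutes\nAA100,12.5\n,3.0\nAA200,0\n"
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
count, total_delay = conn.execute(
    "SELECT COUNT(*), SUM(delay_minutes) FROM delays").fetchone()
print(count, total_delay)  # 2 12.5 (the row with no flight_id is dropped)
```

The same extract/transform/load separation carries over to streaming pipelines; only the triggering model changes.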
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
Salesforce is the global leader in customer relationship management (CRM) software, pioneering the shift to cloud computing. Today, Salesforce delivers the next generation of social, mobile, and cloud technologies to help companies revolutionize the way they sell, service, market, and innovate, enabling them to become customer-centric organizations. As the fastest-growing enterprise software company in the top 10, Salesforce has been recognized as the World's Most Innovative Company by Forbes and as one of Fortune's 100 Best Companies to Work For. The CRM Database Sustaining Engineering Team at Salesforce is responsible for deploying and managing some of the largest and most trusted databases globally. Customers rely on this team to ensure the safety and high availability of their data. As a Database Cloud Engineer at Salesforce, you will have a mission-critical role in ensuring the reliability, scalability, and performance of Salesforce's extensive cloud database infrastructure. You will contribute to powering one of the largest Software as a Service (SaaS) platforms globally. We are seeking engineers with a DevOps mindset and deep expertise in databases to architect and operate secure, resilient, and high-performance database environments across public cloud platforms such as AWS and GCP. Collaboration across various domains including systems, storage, networking, and applications is essential to deliver cloud-native reliability solutions at a massive scale. The CRM Database Sustaining Engineering team is a dynamic and fast-paced global team that delivers and supports databases and cloud infrastructure to meet the evolving needs of the business. In this role, you will collaborate with other engineering teams to deliver innovative solutions in an agile and dynamic environment. As part of the Global Team, you will engage in 24/7 support responsibilities within Europe, requiring occasional flexibility in working hours to align globally. 
You will be responsible for the reliability of Salesforce's cloud database, running on cutting-edge cloud technology.

**Job Requirements:**
- Bachelor's in Computer Science or Engineering, or equivalent experience.
- Minimum of 8+ years of experience as a Database Engineer or in a similar role.
- Expertise in database and SQL performance tuning in at least one relational database.
- Knowledge and hands-on experience with the Postgres database is advantageous.
- Broad and deep knowledge of at least two relational databases among Oracle, PostgreSQL, and MySQL.
- Working knowledge of cloud platforms such as AWS or GCP is highly desirable.
- Experience with cloud technologies like Docker, Spinnaker, Terraform, Helm, Jenkins, Git, etc. Exposure to Zookeeper fundamentals and Kubernetes is highly desirable.
- Proficiency in SQL and at least one procedural language such as Python, Go, or Java, with a basic understanding of C.
- Excellent problem-solving skills and experience with production incident management and root cause analysis.
- Experience with mission-critical distributed systems, including supporting database production infrastructure with 24x7x365 support responsibilities.
- Exposure to a fast-paced environment with a large-scale cloud infrastructure setup.
- Strong speaking, listening, and writing skills, attention to detail, and a proactive self-starter attitude.

**Preferred Qualifications:**
- Hands-on DevOps experience including CI/CD pipelines and container orchestration (Kubernetes, EKS/GKE).
- Cloud-native DevOps experience (CI/CD, EKS/GKE, cloud deployments).
- Familiarity with distributed coordination systems like Apache Zookeeper.
- Deep understanding of distributed systems, availability design patterns, and database internals.
- Monitoring and alerting expertise using tools like Grafana, Argus, or similar.
- Automation experience with tools like Spinnaker, Helm, and Infrastructure as Code frameworks.
- Ability to drive technical projects from idea to execution with minimal supervision.
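SQL performance tuning of the kind this role calls for typically starts with reading the query plan before and after adding an index. A small, self-contained illustration using Python's bundled SQLite (the table and index names are made up for the demo; the same before/after workflow applies to Oracle, PostgreSQL, or MySQL via their own EXPLAIN facilities, and the exact plan wording varies by engine version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, account_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders (account_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

def plan(sql):
    """Return SQLite's query plan as one string (the detail text is column 3)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE account_id = 42"
before = plan(query)  # e.g. "SCAN orders" -- a full table scan
conn.execute("CREATE INDEX idx_orders_account ON orders(account_id)")
after = plan(query)   # e.g. "SEARCH orders USING INDEX idx_orders_account ..."
print(before)
print(after)
```

The payoff is visible in the plan text: the filtered query switches from scanning every row to a direct index search.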
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Team Lead in DevOps with 6+ years of experience, you will be responsible for managing, mentoring, and developing a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI. Additionally, you will handle environment-specific configurations for staging, production, and QA. Your responsibilities will include containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm. You will implement and maintain Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation. Monitoring application health and infrastructure using tools such as Prometheus, Grafana, ELK, Datadog, and ensuring systems are secure, resilient, and compliant with industry standards will also be part of your role. Optimizing cloud costs and infrastructure performance, collaborating with development, QA, and IT support teams, and troubleshooting performance, deployment, or scaling issues across tech stacks are essential tasks. To excel in this role, you must have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with hands-on experience. You should have a minimum of 2 years of experience managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential. In addition, you should have experience with Git, SSH, reverse proxies (Nginx), and load balancers. 
Good communication skills and exposure to managing clients are crucial. Preferred certifications that are highly valued include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer; experience with Magento Cloud DevOps or Odoo deployment is considered a bonus. Nice-to-have skills include experience with multi-region failover, HA clusters, and RPO/RTO-based design; familiarity with MySQL/PostgreSQL optimization; and knowledge of Redis, RabbitMQ, or Celery. Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower, as well as knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices, is also advantageous for this role.
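A recurring building block in deployment pipelines like the ones this role owns is a post-deploy health gate: poll the service, back off exponentially between probes, and fail the release if it never comes up. A minimal sketch (the probe is stubbed here; in practice it would hit an HTTP health endpoint behind Nginx, and the retry parameters are illustrative):

```python
import time

def wait_until_healthy(check, retries=5, base_delay=0.01):
    """Poll a health probe with exponential backoff; return True once it
    reports healthy, False after exhausting retries (then you'd roll back)."""
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    return False

# Stubbed probe: the service becomes healthy on the third poll.
state = {"probes": 0}
def fake_check():
    state["probes"] += 1
    return state["probes"] >= 3

healthy = wait_until_healthy(fake_check)
print(healthy, state["probes"])  # True 3
```

The same gate pattern slots into a Jenkins or GitHub Actions stage between deploy and traffic cutover.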
Posted 2 weeks ago
2.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Manager, you will be responsible for leading our DevOps efforts across a suite of modern and legacy applications, including Odoo (Python), Magento (PHP), Node.js, and other web-based platforms. Your main duties will include managing, mentoring, and growing a team of DevOps engineers, overseeing the deployment and maintenance of various applications, designing and managing CI/CD pipelines, handling environment-specific configurations, containerizing applications, implementing and maintaining Infrastructure as Code, monitoring application health and infrastructure, ensuring system security and compliance, optimizing cloud cost and performance, collaborating with cross-functional teams, and troubleshooting technical issues. To be successful in this role, you should have at least 8 years of experience in DevOps/Cloud/System Engineering roles with real hands-on experience, including 2+ years of experience managing or leading DevOps teams. You should have experience supporting and deploying applications like Odoo, Magento, and Node.js, along with strong scripting skills in Bash, Python, PHP CLI, or Node CLI. Additionally, you should have a deep understanding of Linux system administration, networking fundamentals, AWS/Azure/GCP infrastructure, Git, SSH, reverse proxies, and load balancers. Good communication skills and client management exposure are also essential for this position. Preferred certifications for this role include AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills that would be beneficial for this position include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, VAPT 2.0, WCAG compliance, and infrastructure security best practices. 
In summary, as a DevOps Manager, you will play a crucial role in leading our DevOps efforts and ensuring the smooth deployment, maintenance, and optimization of various applications while collaborating with different teams and implementing best practices in infrastructure management and security.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a valuable addition to our technology team. Your primary responsibility will be to design, implement, and maintain scalable and secure cloud infrastructure that supports our mobile and web applications. Your role is crucial in ensuring system reliability, performance, and cost efficiency across different environments. You will work with Google Cloud Platform (GCP) to design, configure, and manage cloud infrastructure. Your tasks will include implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. Additionally, you will be developing and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Real-time monitoring, crash alerting, logging systems, and health dashboards will be set up by you using industry-leading tools. Managing and optimizing Redis, job queues, caching layers, and backend request loads will also be part of your responsibilities. You will automate data backups, enforce secure access protocols, and implement disaster recovery systems. Collaborating with Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load is crucial. Infrastructure security audits will be conducted by you to recommend best practices for preventing downtime and security breaches. Monitoring and optimizing cloud usage and billing to ensure a cost-effective and scalable architecture will also fall under your purview. You should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Proficiency with Docker, Kubernetes, NGINX, and load balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI is required. Familiarity with monitoring tools such as Grafana, Prometheus, NewRelic, or Datadog is expected. 
A deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms, is necessary. Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure is beneficial, and working knowledge of Redis, Socket.IO, and message queuing systems like RabbitMQ or Kafka will be advantageous. Preferred qualifications include a Google Cloud Professional certification or equivalent; experience optimizing systems for high-concurrency, low-latency environments and familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible are considered a plus.
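The rate limiting mentioned above is commonly implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate up to a burst capacity. A deterministic sketch with time injected as a parameter (the rate and capacity values are illustrative; production setups usually delegate this to Redis or the API gateway rather than in-process state):

```python
class TokenBucket:
    """Token-bucket rate limiter: `rate` tokens per second, burst capacity
    `capacity`. The caller passes the current time, keeping it testable."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)  # 1 request/second, burst of 2
results = [bucket.allow(t) for t in (0.0, 0.0, 0.0, 1.0)]
print(results)  # [True, True, False, True]
```

The burst of two passes immediately, the third simultaneous request is rejected, and one second later a refilled token admits the next request.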
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Engineer, you will define and implement DevOps strategies that are closely aligned with the business goals. Your primary responsibility will be to lead cross-functional teams in order to enhance collaboration among development, QA, and operations teams. This involves designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate build, test, and deployment processes, thereby accelerating release cycles. Furthermore, you will be tasked with implementing and managing Infrastructure as Code using tools such as Terraform, CloudFormation, and Ansible. Your expertise will be crucial in managing cloud platforms like AWS, Azure, or Google Cloud. It will also be your responsibility to monitor and mitigate security risks in CI/CD pipelines and infrastructure, as well as setting up observability tools like Prometheus, Grafana, Splunk, and Datadog. In addition, you will play a key role in implementing proactive alerting and incident response processes. This will involve leading incident response efforts and conducting root cause analysis (RCA) when necessary. Documenting DevOps processes, best practices, and system architectures will also be part of your routine tasks. As a DevOps Engineer, you will continuously evaluate and implement new DevOps tools and technologies to enhance efficiency and productivity. Moreover, you will be expected to foster a culture of learning and knowledge sharing within the organization, promoting collaborative growth and development among team members.
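Proactive alerting of the sort described here usually means firing only when a metric breaches a threshold for a sustained window, not on a single spike. A simplified stand-in for a Prometheus-style `for:` clause (the threshold, window, and sample data below are invented for illustration):

```python
def alert_fires(samples, threshold, for_points):
    """Fire only when the metric stays above `threshold` for `for_points`
    consecutive samples -- a simplified Prometheus-style `for:` clause."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= for_points:
            return True
    return False

cpu_utilisation = [0.62, 0.91, 0.95, 0.97, 0.40]  # one sample per scrape
spike_tolerant = alert_fires(cpu_utilisation, threshold=0.9, for_points=3)
stricter_window = alert_fires(cpu_utilisation, threshold=0.9, for_points=4)
print(spike_tolerant, stricter_window)  # True False
```

Tuning `for_points` trades alert latency against noise: a longer window suppresses transient spikes at the cost of slower detection.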
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Delivery Manager in Software Development at Cancard Inc and Advaa Health, you will play a crucial role in overseeing the end-to-end project ownership and ensuring timely and high-quality delivery of multiple products in the AI & Digital Health domain. With your expert-level understanding of the software development lifecycle, you will lead cross-functional agile teams and provide strategic guidance to developers working in Java, Angular, and MERN stack. Your responsibilities will include client & stakeholder management, technical leadership, team management & development, agile project management, quality assurance, delivery governance & reporting, risk identification & mitigation, resource planning, DevOps and Release Management, process optimization, documentation & knowledge management, innovation & technology road-mapping. You will be the primary point of contact for clients and business stakeholders, translating business needs into technical deliverables and maintaining strong, trust-based relationships. Your technical leadership will involve reviewing architecture, design patterns, and providing hands-on support to ensure high code quality and reliable delivery. By championing Agile best practices and driving continuous improvement initiatives, you will contribute to the successful delivery of complex software solutions in an Agile or hybrid environment. Your role will also involve staying up to date with industry trends, evaluating new tools and frameworks, and contributing to the technical roadmap and architecture decisions. To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field, with a strong development background in Java, Angular, and the MERN stack. You should possess a deep understanding of architectural patterns and design principles, along with a proven track record of successfully managing end-to-end delivery of software solutions. 
Relevant Agile and technical certifications, along with strong communication skills, are also required. Experience managing distributed/remote teams and proficiency with CI/CD tools and modern version control systems will be advantageous. In return, we offer a competitive salary and benefits package, flexible working hours, remote work options, a dynamic and supportive work environment, and opportunities for professional growth and development. You will have the chance to work on meaningful projects that have a real impact on healthcare. If you are passionate about solving critical healthcare challenges and delivering innovative products, we encourage you to submit your resume, cover letter, and relevant work samples or project portfolios to pooja@cancard.com. In your cover letter, please explain why you are interested in this role and how your background and experience align with our team's requirements.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Linux & Cloud Administrator at SAP, you will play a key role in supporting the seamless 24/7 operations of our cloud platform across Azure, AWS, Google Cloud, and SAP data centers. Your primary responsibilities will involve ensuring the smooth operation of business-critical SAP systems in the cloud, leveraging technologies such as Prometheus, Grafana, Kubernetes, Ansible, ArgoCD, AWS, GitHub Actions, and more. Your tasks will include network troubleshooting, architecture design, cluster setup and configuration, and the development of automation solutions to deliver top-notch cloud services for SAP applications to enterprise customers globally. You will be part of the ECS Delivery XDU team, responsible for the operation of SAP Enterprise Cloud Services (ECS) Delivery, a managed services provider offering SAP applications through the HANA Enterprise Cloud. At SAP, we are committed to building a workplace culture that values collaboration, embraces diversity, and is focused on creating a better world. Our company ethos revolves around a shared passion for helping the world run better, with a strong emphasis on learning and development, recognizing individual contributions, and offering a range of benefit options for our employees. SAP is a global leader in enterprise application software, with a mission to help customers worldwide work more efficiently and leverage business insights effectively. With a cloud-based approach and a dedication to innovation, SAP serves millions of users across various industries, driving solutions for ERP, database, analytics, intelligent technologies, and experience management. As a purpose-driven and future-focused organization, SAP fosters a highly collaborative team environment and prioritizes personal development, ensuring that every challenge is met with the right solution. 
At SAP, we believe in the power of inclusion and diversity, supporting the well-being of our employees and offering flexible working models to enable everyone to perform at their best. We recognize the unique strengths that each individual brings to our company, investing in our workforce to unleash their full potential and create a more equitable world. As an equal opportunity workplace, SAP is committed to providing accessibility accommodations to applicants with disabilities and promoting a culture of equality and empowerment. If you are interested in joining our team at SAP and require accommodation during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com. We are dedicated to fostering an environment where all talents are valued and every individual has the opportunity to thrive.
Posted 2 weeks ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a proactive and detail-oriented L1 DataOps Monitoring Engineer to support our data pipeline operations. This role involves monitoring, identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and improve reliability.

Key Responsibilities:
- Monitor data pipelines, jobs, and workflows using tools like Airflow, Control-M, or custom monitoring dashboards.
- Acknowledge and investigate alerts from monitoring tools (Datadog, Prometheus, Grafana, etc.).
- Perform first-level triage for job failures, delays, and anomalies.
- Log incidents and escalate to L2/L3 teams as per SOP.
- Maintain shift handover logs and daily operational reports.
- Perform routine system checks and health monitoring of data environments.
- Follow predefined runbooks to troubleshoot known issues.
- Coordinate with application, infrastructure, and support teams for timely resolution.
- Participate in shift rotations including nights/weekends/public holidays.

Skills and Qualifications:
- Bachelor's degree in Computer Science, IT, or related field (or equivalent experience).
- 0–2 years of experience in IT support, monitoring, or NOC environments.
- Basic understanding of data pipelines and ETL/ELT processes.
- Familiarity with monitoring tools (Datadog, Grafana, CloudWatch, etc.).
- Exposure to job schedulers (Airflow, Control-M, Autosys) is a plus.
- Good verbal and written communication skills.
- Ability to remain calm and effective under pressure.
- Willingness to work in a 24x7 rotational shift model.

Good to Have (Optional):
- Knowledge of cloud platforms (AWS/GCP/Azure)
- Basic SQL or scripting knowledge (Shell/Python)
- ITIL awareness or ticketing systems experience (e.g., ServiceNow, JIRA)
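First-level triage against a runbook can be summarized as a small decision table: retry known transient failures, escalate unknown ones to L2, and flag jobs breaching their SLA. A hypothetical sketch (the statuses, error codes, and thresholds are illustrative, not taken from any specific runbook):

```python
def triage(event):
    """Runbook-style L1 triage: retry known transient failures, escalate
    unknown ones to L2, and flag long-running jobs past their SLA."""
    transient = {"TIMEOUT", "CONNECTION_RESET"}  # illustrative error codes
    if event["status"] == "FAILED" and event["error"] in transient:
        return "RETRY"
    if event["status"] == "FAILED":
        return "ESCALATE_L2"
    if event["status"] == "RUNNING" and event["runtime_min"] > event["sla_min"]:
        return "ALERT_DELAY"
    return "OK"

events = [
    {"status": "FAILED",  "error": "TIMEOUT",      "runtime_min": 5,  "sla_min": 30},
    {"status": "FAILED",  "error": "SCHEMA_DRIFT", "runtime_min": 5,  "sla_min": 30},
    {"status": "RUNNING", "error": None,           "runtime_min": 45, "sla_min": 30},
    {"status": "SUCCESS", "error": None,           "runtime_min": 12, "sla_min": 30},
]
decisions = [triage(e) for e in events]
print(decisions)  # ['RETRY', 'ESCALATE_L2', 'ALERT_DELAY', 'OK']
```

Encoding the runbook this way keeps shift handovers consistent: every on-call engineer applies the same decision for the same event.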
Posted 2 weeks ago
3.0 - 7.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Job Title: SDE-2/3
Location: Hyderabad
Experience range: 0-1 Yr
Notice Period: Immediate joiner

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Java and Spring Boot.
- Collaborate with product managers, architects, and QA teams to deliver robust banking solutions.
- Build microservices for transaction processing, customer onboarding, and risk management.
- Integrate with internal and third-party APIs for payments, KYC, and credit scoring.
- Ensure code quality through unit testing, code reviews, and adherence to secure coding practices.
- Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in Java, Spring Boot, and RESTful API development.
- Solid understanding of data structures, algorithms, and object-oriented programming.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).

Preferred Skills:
- Exposure to financial systems, banking APIs, or fintech platforms.
- Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS).
- Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
- Strong problem-solving skills and ability to work in a fast-paced environment.

Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study.

Good to have certifications: Java Certified Developer; AWS Developer or Solutions Architect.
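The JWT knowledge listed under security standards comes down to verifying an HMAC signature over the token's header and body. A stripped-down HS256-style sketch using only the standard library (the secret and claims are invented for the demo, and there is no expiry or claim validation; a real service would use a vetted JWT library rather than this):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url-encode without padding, per the JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload, secret):
    """Build an HS256-style token: base64url(header).base64url(body).signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify(token, secret):
    """Recompute the signature over header.body and compare in constant time."""
    head, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, head + b"." + body, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"demo-secret"  # illustrative; real keys come from a secret manager
token = sign({"sub": "cust-123", "scope": "payments"}, secret)
forged = sign({"sub": "cust-123", "scope": "payments"}, b"wrong-secret")
print(verify(token, secret), verify(forged, secret))  # True False
```

The constant-time comparison (`hmac.compare_digest`) matters in banking contexts: a naive `==` would leak timing information about how much of a forged signature matches.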
Posted 2 weeks ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Title: Associate III-Software Engineering
Location: Hyderabad
Experience range: 0-1 Yr
Notice Period: Immediate joiner

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Java and Spring Boot.
- Collaborate with product managers, architects, and QA teams to deliver robust banking solutions.
- Build microservices for transaction processing, customer onboarding, and risk management.
- Integrate with internal and third-party APIs for payments, KYC, and credit scoring.
- Ensure code quality through unit testing, code reviews, and adherence to secure coding practices.
- Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in Java, Spring Boot, and RESTful API development.
- Solid understanding of data structures, algorithms, and object-oriented programming.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).

Preferred Skills:
- Exposure to financial systems, banking APIs, or fintech platforms.
- Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS).
- Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
- Strong problem-solving skills and ability to work in a fast-paced environment.

Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study.

Good to have certifications: Java Certified Developer; AWS Developer or Solutions Architect.
Posted 2 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Mumbai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: DevOps, Docker (Software), Kubernetes, Microsoft SQL Server
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Application deployment and level 1 & 2 support during SIT, UAT and production implementation for the core banking system of the bank (for Asia, Europe, the Middle East and Africa)
- Follow UAT issues and enquiries from business users until their appropriate closure (incident communication, root cause analysis and permanent resolution)
- Engagement with development teams on a regular basis
- Appropriate escalation of critical queries
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Coordinate with stakeholders to gather requirements
- Ensure timely delivery of projects

Professional & Technical Skills:
- Must Have Skills: Proficiency in Java, SQL, UNIX, shell scripting, Control-M
- Hands-on experience with Jetty, DevOps, Docker (Software), Microsoft SQL Server, Kubernetes
- Hands-on experience with RDBMS Oracle 12/19 and PL/SQL
- Hands-on experience with Docker EE, Kubernetes, Argo CD
- Exposure to DevOps tools like Jenkins, Master Deploy etc.
- Exposure to Elasticsearch (Kibana) and Grafana
- Exposure to ETL tools is an advantage
- Exposure to SRE
- Good knowledge of all phases of the system development and implementation life cycle (SDLC)
- Strong understanding of CI/CD pipelines
- Experience with cloud platforms such as AWS or Azure
- Knowledge of infrastructure as code tools like Terraform

Additional Information:
- The candidate should have a minimum of 8 years of experience in DevOps
- This position is based at our Mumbai office
- A 15 years full-time education is required

Qualification: 15 years full time education
Posted 2 weeks ago
5.0 - 10.0 years
9 - 13 Lacs
Mumbai
Work from Office
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity.
Must have skills: DevOps, Docker (Software), Kubernetes, Microsoft SQL Server
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Application deployment and level 1 & 2 support during SIT, UAT and production implementation for the core banking system of the bank (for Asia, Europe, the Middle East and Africa)
- Follow UAT issues and enquiries from business users until their appropriate closure (incident communication, root cause analysis and permanent resolution)
- Engagement with development teams on a regular basis
- Appropriate escalation of critical queries
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Coordinate with stakeholders to gather requirements
- Ensure timely delivery of projects

Professional & Technical Skills:
- Must Have Skills: Proficiency in Java, SQL, UNIX, shell scripting, Control-M
- Hands-on experience with Jetty, DevOps, Docker (Software), Microsoft SQL Server, Kubernetes
- Hands-on experience with RDBMS Oracle 12/19 and PL/SQL
- Hands-on experience with Docker EE, Kubernetes, Argo CD
- Exposure to DevOps tools like Jenkins, Master Deploy etc.
- Exposure to Elasticsearch (Kibana) and Grafana
- Exposure to ETL tools is an advantage
- Exposure to SRE
- Good knowledge of all phases of the system development and implementation life cycle (SDLC)
- Strong understanding of CI/CD pipelines
- Experience with cloud platforms such as AWS or Azure
- Knowledge of infrastructure as code tools like Terraform

Additional Information:
- The candidate should have a minimum of 5 years of experience in DevOps
- This position is based at our Mumbai office
- A 15 years full-time education is required

Qualification: 15 years full time education
Posted 2 weeks ago