5.0 - 10.0 years
20 - 30 Lacs
Pune, Delhi / NCR, Bengaluru
Work from Office
Description:

Job Summary: We are seeking a passionate and experienced Fullstack Developer with strong hands-on skills in React.js, Node.js, GraphQL, and AWS Cloud. You will play a key role in designing, developing, and maintaining scalable applications for global enterprise-grade platforms.

Required Skills & Experience:
- Minimum 6 years of hands-on experience in fullstack development.
- Strong proficiency in React.js (Hooks, Redux, functional components).
- Expertise in Node.js (Express.js or similar frameworks).
- Solid experience in GraphQL API design and integration.
- Deep understanding of and working experience with AWS Cloud services (Lambda, EC2, S3, API Gateway, CloudFormation, etc.).
- Familiarity with CI/CD, Git, and modern DevOps practices.
- Strong problem-solving skills and the ability to work independently in a fast-paced environment.

Key Responsibilities:
- Design, develop, and maintain robust and scalable web applications using React.js and Node.js.
- Develop and integrate GraphQL APIs for efficient data querying and communication.
- Implement cloud-native solutions on AWS, ensuring high availability, scalability, and security.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Ensure high-quality code through unit testing, code reviews, and CI/CD pipelines.
- Troubleshoot and debug issues across the full stack (frontend to backend to cloud).
- Optimize application performance and implement best practices for scalability.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
Posted 1 month ago
4.0 - 7.0 years
5 - 10 Lacs
Bengaluru
Work from Office
1. Have a good understanding of AWS services, specifically RDS, S3, EC2, VPC, KMS, ECS, Lambda, AWS Organizations, and IAM policy setup, with Python as a main skill.
2. Architect, design, and code database infrastructure deployment using Terraform; should be able to write Terraform modules that deploy database services in AWS.
3. Provide automation solutions using Python Lambdas for repetitive tasks such as running quarterly audits and daily health checks on RDS across multiple accounts.
4. Have a fair understanding of Ansible to automate Postgres infrastructure deployment and repetitive tasks for on-prem servers.
5. Knowledge of Postgres and PL/pgSQL functions.
6. Hands-on experience with Ansible and Terraform and the ability to contribute to ongoing projects with minimal coaching.
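Point 3 above could be sketched as follows. This is a minimal, illustrative example: the function names, the set of statuses treated as unhealthy, and the single-region scope are assumptions, and a real multi-account audit would additionally assume cross-account IAM roles.

```python
UNHEALTHY = {"failed", "storage-full", "incompatible-parameters"}  # assumed status set

def classify_instances(instances):
    """Split RDS instance descriptions into healthy/unhealthy buckets."""
    report = {"healthy": [], "unhealthy": []}
    for inst in instances:
        bucket = "unhealthy" if inst["DBInstanceStatus"] in UNHEALTHY else "healthy"
        report[bucket].append(inst["DBInstanceIdentifier"])
    return report

def run_health_check(region="us-east-1"):
    """Call the RDS API (requires boto3 and AWS credentials) and classify results."""
    import boto3  # imported lazily so the pure logic above stays testable offline
    rds = boto3.client("rds", region_name=region)
    instances = rds.describe_db_instances()["DBInstances"]
    return classify_instances(instances)
```

Keeping the classification logic separate from the AWS call is what makes such a Lambda easy to unit test without credentials.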
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
The application engineer is a member of the CRM organization. This person contributes to bookings growth and customer success through participation in CRM business teams as an application developer. This person builds business processes, framework features, and supporting functions based upon identified business requirements and use cases. The person functions in Scrum teams with other professionals focused on building, maintaining, and supporting solutions frameworks for the CRM industry. Pega is changing the way the world builds software, and our goal is to be the no. 1 CRM SaaS company in the world. In this role, you'll help us design, develop, and implement new enhancements for the applications.

What You'll Do At Pega:
- Develop the world's best CRM applications.
- Adhere to Pega development best practices.
- Work as part of a collaborative Agile team in a Scrum model, surrounded by fun-loving, talented engineers.
- Technologies you will work on: AWS, React, Node.js, REST services, DynamoDB, S3, CloudWatch.
- Take ownership of components/tasks and make sure they are delivered with great quality.
- Exhibit thought leadership and be ready to suggest product and process improvements.
- Resolve customer issues either by providing technical guidance or by issuing formal fixes.

Who You Are: You are an experienced professional with a strong commitment to customer success without compromising integrity. You are a problem-solver who thrives in a collaborative team environment and wants to focus on building next-generation solutions. You are skilled in both front-end technologies and AWS cloud services.

What You've Accomplished:
- 4+ years of software development and design experience in AWS and UI technologies.
- Prominent development experience in JavaScript/TypeScript, Node.js, React, and REST APIs; GraphQL experience (optional) is highly desired.
- Deep understanding of and hands-on experience with AWS: Amplify, API Gateway, DynamoDB, S3, CloudWatch, Lambda, CodePipeline, SQS.
- AWS IaC framework: CDK.
- Experience with CI/CD, Git, and debugging.
- Bachelor's degree in engineering or a similar field.
- Very good presentation and communication skills.
- Excellent problem-solving skills.
- Passionate about learning new technologies, with a constant desire for innovation.
- Should be able to take ownership of assigned deliverables and deliver with no or minimal guidance.
- Partner with internal clients, like Product Managers, to deliver world-class software.
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
Bhubaneswar
On-site
The role of an Intern/Fresher in this Java internship is ideal for individuals with 0-1 year of experience, particularly B.Tech students seeking a 6-month internship in their final year. Located in Bhubaneswar, this position offers a valuable opportunity for aspiring Java interns to engage in practical projects and enhance their skills with contemporary technologies, all under the mentorship of seasoned developers.

As a Java intern, your primary responsibilities will include assisting in the development and upkeep of Java applications, writing and troubleshooting basic Java code, collaborating with team members to acquire best practices, engaging in testing and documentation tasks, and researching and contributing to the resolution of technical challenges.

To excel in this role, you should be pursuing or have recently completed a Bachelor's degree in Computer Science, IT, or a related field. You should also have a fundamental understanding of Core Java, a strong grasp of object-oriented programming concepts, familiarity with the Java collection classes (e.g., List, Set, Map), basic knowledge of exception handling in Java, awareness of Java streams and lambda expressions, a keen interest in exploring new technologies, adept problem-solving and analytical skills, and the ability to work collaboratively in a team environment. Furthermore, it would be beneficial to have exposure to Spring or Spring Boot, basic knowledge of SQL, and experience with version control tools like Git.

Please note that this internship is a paid position, offering you the chance to refine your skills in Core Java, the Java collection framework (List, Set, Map), exception handling, Java streams and lambda expressions, Spring and Spring Boot, SQL, Git, object-oriented programming, analytical thinking, problem-solving, teamwork, testing, and documentation.
Posted 1 month ago
4.0 - 7.0 years
6 - 9 Lacs
Noida, India
Work from Office
1. Design and manage cloud-based systems on AWS.
2. Develop and maintain backend services and APIs using Java.
3. Basic knowledge of SQL and the ability to write SQL queries.
4. Good hands-on knowledge of Dockerfiles and multi-stage Docker builds.
5. Implement containerization using Docker and orchestration with ECS/Kubernetes.
6. Monitor and troubleshoot cloud infrastructure and application performance.
7. Collaborate with cross-functional teams to integrate systems seamlessly.
8. Document system architecture, configurations, and operational procedures.

Need Strong Hands-on Knowledge:
* ECS, ECR, NLB, ALB, ACM, IAM, S3, Lambda, RDS, KMS, API Gateway, Cognito, CloudFormation.

Good to Have:
* Experience with AWS CDK for infrastructure as code.
* AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer).
* Python

Mandatory Competencies: Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Database - Other Databases - PostgreSQL; Beh - Communication; DevOps/Configuration Mgmt - Cloud Platforms - AWS
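As a sketch of item 4, a multi-stage Dockerfile for a Java service might look like the following. The base images, the Maven build tool, and the jar path are illustrative assumptions, not a prescribed setup; the point is that build-time tooling stays in the first stage and only the artifact reaches the slim runtime image.

```dockerfile
# build stage: compile the application with the full JDK + Maven toolchain
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

# runtime stage: copy only the built jar into a slim JRE-only image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```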
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Noida, India
Work from Office
Full-stack developer with 5-8 years of experience in designing and developing robust, scalable, and maintainable applications applying object-oriented design principles.
- Strong experience in Spring frameworks (Spring Boot, Spring Batch, Spring Data, etc.) and Hibernate/JPA.
- Strong experience in microservices architecture and implementation.
- Strong knowledge of HTML, CSS, JavaScript, and React.
- Experience with SOAP web services, REST web services, and the Java Messaging Service (JMS) API.
- Familiarity with designing, developing, and deploying web applications using Amazon Web Services (AWS).
- Good experience with AWS services: S3, Lambda, SQS, SNS, DynamoDB, IAM, API Gateway.
- Hands-on experience in SQL and PL/SQL; should be able to write complex queries.
- Hands-on experience with REST APIs.
- Experience with version control systems (e.g., Git).
- Knowledge of web standards and accessibility guidelines.
- Knowledge of CI/CD pipelines and experience with tools such as JIRA, Splunk, SONAR, etc.
- Strong analytical and problem-solving abilities.
- Good experience in JUnit testing and mocking techniques.
- Experience in SDLC processes (Waterfall/Agile), Docker, Git, SonarQube.
- Excellent communication and interpersonal skills; ability to work independently and as part of a team.

Mandatory Competencies: Programming Language - Java - Core Java (Java 8+); Programming Language - Java Full Stack - HTML/CSS; Fundamental Technical Skills - Spring Framework/Hibernate/JUnit etc.; Programming Language - Java - Spring Framework; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; DevOps/Configuration Mgmt - Git; Programming Language - Java Full Stack - JavaScript; DevOps/Configuration Mgmt - Docker; Beh - Communication and collaboration; Cloud - AWS - AWS S3, S3 Glacier, AWS EBS; Database - Oracle - PL/SQL Packages; Development Tools and Management - CI/CD; User Interface - Other User Interfaces - React; Programming Language - Java Full Stack - Spring Framework; Middleware - Java Middleware - Spring Boot; Middleware - API Middleware - Microservices; Middleware - API Middleware - Web Services (REST, SOAP); Agile - SCRUM; Database - SQL Server - SQL Packages
Posted 1 month ago
4.0 - 7.0 years
6 - 9 Lacs
Noida
Work from Office
Key Responsibilities:
- Develop and maintain responsive web applications using the Angular framework.
- Integrate front-end applications with AWS backend services.
- Collaborate with UX/UI designers and backend developers in Agile teams.
- Create engaging and interactive web interfaces using HTML, CSS, and JavaScript.
- Optimize web performance and ensure cross-browser compatibility.
- Integrate APIs and backend systems to enable seamless data flow.

Required Skills:
- Strong proficiency in Angular and TypeScript.
- Experience with RESTful APIs and integration with AWS services.
- Knowledge of HTML, CSS, and JavaScript.
- Knowledge of version control systems like Git.
- Background in financial applications is a plus.

Mandatory Competencies: User Interface - Other User Interfaces - JavaScript; DevOps/Configuration Mgmt - Git; Beh - Communication and collaboration; User Interface - Angular - Angular Components and Design Patterns; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; UX - Adobe XD; Agile - SCRUM; User Interface - HTML - HTML/CSS; User Interface - Other User Interfaces - TypeScript
Posted 1 month ago
7.0 - 11.0 years
13 - 18 Lacs
Noida
Work from Office
Must-Have Skills: Expertise in AWS CDK, AWS services (Lambda, ECS, S3), and PostgreSQL database management. Strong understanding of serverless architecture and event-driven design (SNS, SQS).

Nice to Have: Knowledge of multi-account AWS setups and security best practices (IAM, VPC, etc.); experience with cost-optimization strategies in AWS.

Mandatory Competencies: Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Database - Other Databases - PostgreSQL
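The event-driven design called out above (a Lambda consuming from SQS) can be sketched as follows. The message schema, the `type` field, and the routing rule are hypothetical, but the event shape matches what Lambda receives from an SQS trigger.

```python
import json

def handler(event, context=None):
    """Sketch of an SQS-triggered Lambda: each SQS record carries a JSON body
    routed by an assumed 'type' field."""
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        if message.get("type") == "order.created":
            processed.append(("fulfil", message["orderId"]))
        else:
            processed.append(("ignore", message.get("type")))
    return processed

# a fake SQS event of the shape Lambda receives from an SQS trigger
event = {"Records": [{"body": json.dumps({"type": "order.created", "orderId": 42})}]}
# handler(event) -> [("fulfil", 42)]
```

Because the handler takes a plain dict, it can be unit tested with synthetic events, with no AWS resources involved.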
Posted 1 month ago
4.0 - 6.0 years
6 - 10 Lacs
Coimbatore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!

Your Role and Responsibilities: Designs, develops, and supports application solutions with a focus on the HANA version of Advanced Business Application Programming (ABAP). This specialty may design, develop, and/or re-engineer highly complex application components and integrate software packages, programs, and reusable objects residing on multiple platforms. This specialty may additionally have working knowledge of SAP HANA Technical Concept and Architecture, Data Modelling using HANA Studio, ABAP Development Tools (ADT), Code Performance Rules and Guidelines for SAP HANA, ADBC, Native SQL, ABAP Core Data Services, Database Procedures, Text Search, ALV on HANA, and HANA Live models consumption.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 4-6 years of experience required. The ABAP on HANA application developers would possess knowledge of the following topics and apply them to bring value and innovation to client engagements: SAP HANA Technical Concept and Architecture, Data Modelling using HANA Studio, ABAP Development Tools (ADT), Code Performance Rules and Guidelines for SAP HANA, ADBC, Native SQL, ABAP Core Data Services, Database Procedures, Text Search, ALV on HANA, and HANA Live models consumption.
Designing and developing data dictionary objects (data elements, domains, structures, views, lock objects, search helps) and formatting the output of SAP documents with multiple options. Modifying standard layout sets in SAP Scripts, Smartforms, and Adobe Forms. Development experience in RICEF (Reports, Interfaces, Conversions, Enhancements, Forms).

Preferred technical and professional experience: Experience working on implementation, upgrade, maintenance, and post-production support projects would be an advantage. Understanding of SAP functional requirements and their conversion into technical design and development using the ABAP language for reports, interfaces, conversions, enhancements, and forms in implementation or support projects.
Posted 1 month ago
3.0 - 5.0 years
3 - 8 Lacs
Noida
Work from Office
Roles & Responsibilities:
- Proficient in ReactJS, including the framework, GitHub, and Git commands.
- Develop code based on functional specifications through an understanding of project code.
- Test code to verify it meets the technical specifications and works as intended before submitting it for code review.
- Experience writing tests in ReactJS using React Testing Library or the Jest framework.
- Experience writing tests in Python using pytest.
- Follow prescribed standards and processes as applicable to the software development methodology, including planning, work estimation, solution demos, and reviews.
- Read and understand basic software requirements.
- Assist with the implementation of a delivery pipeline, including test automation, security, and performance.

Mandatory Skills:
- ReactJS: React core concepts, JavaScript
- Python: Flask, SQLAlchemy (ORM structure is mandatory), pytest
- Unit testing: React Testing Library, Jest
- Data: Redux store or Apollo
- Database: PostgreSQL, MySQL, or any relational database
- Expertise in object-oriented design and multi-threaded programming

Good to have: Knowledge of cloud services such as AWS Lambda, S3, and DynamoDB.

Total Experience Expected: 4-6 years
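A minimal illustration of the pytest requirement above. The `paginate` helper is a hypothetical function, not part of any named project; pytest discovers functions named `test_*` and runs their bare asserts, and the same functions also run under plain Python if called directly.

```python
# hypothetical helper under test
def paginate(items, page, per_page):
    """Return the 1-indexed page of items."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

# pytest-style unit tests: plain functions named test_*, using bare asserts
def test_first_page():
    assert paginate(list(range(10)), 1, 3) == [0, 1, 2]

def test_page_past_end_is_empty():
    assert paginate(list(range(10)), 5, 3) == []
```

Running `pytest` in the containing directory would collect and execute both tests.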
Posted 1 month ago
5.0 - 9.0 years
8 - 12 Lacs
Noida
Work from Office
Must-Have Skills: Expertise in AWS CDK, AWS services (Lambda, ECS, S3), and PostgreSQL database management. Strong understanding of serverless architecture and event-driven design (SNS, SQS).

Nice to Have: Knowledge of multi-account AWS setups and security best practices (IAM, VPC, etc.); experience with cost-optimization strategies in AWS.

Mandatory Competencies: Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Database - Other Databases - PostgreSQL; Beh - Communication and collaboration; Cloud - AWS - AWS S3, S3 Glacier, AWS EBS; Development Tools and Management - CI/CD; Cloud - AWS - ECS
Posted 1 month ago
5.0 - 9.0 years
14 - 19 Lacs
Hyderabad
Work from Office
DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The Impact you will have in this role: The Development family is responsible for crafting, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth domain expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions.

The Software Engineering role specializes in planning, detailing technical requirements, designing, developing, and testing all software systems and applications for the firm. It works closely with architects, product managers, project management, and end users in the development and improvement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.
Your Primary Responsibilities:
- Act as a technical expert on one or more applications utilized by DTCC.
- Work with the Business System Analyst to ensure designs satisfy functional requirements.
- Partner with Infrastructure to identify and deploy optimal hosting environments.
- Tune application performance to eliminate and reduce issues.
- Research and evaluate technical solutions consistent with DTCC technology standards.
- Align risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalate appropriately.
- Apply different software development methodologies depending on project needs.
- Contribute expertise to the design of components or individual programs, and participate in construction and functional testing.
- Support development teams with testing, troubleshooting, and production support.
- Build applications and construct unit test cases that ensure compliance with functional and non-functional requirements.
- Work with peers to mature ways of working, continuous integration, and continuous delivery.

Qualifications:
- Minimum of 7+ years of related experience
- Bachelor's degree preferred or equivalent experience

Talents Needed for Success:
- Expert in Java/JEE and coding standard methodologies.
- Expert knowledge of development concepts.
- Good design and coding skills in web services, Spring/Spring Boot, SOAP/REST APIs, and JavaScript frameworks for modern web applications.
- Builds collaborative teams across the organization; communicates openly, keeping everyone across the organization informed.
- Solid understanding of HTML, CSS, and modern JavaScript.
- Experience with Angular v15+ and/or React.
- Experience integrating with database technologies such as Oracle, PostgreSQL, etc.
- Ability to write quality, self-validating code using unit tests and following TDD.
- Experience with Agile methodology and the ability to collaborate with other team members.
- Bachelor's degree in a technical field or equivalent experience.
- Fosters a culture where integrity and clarity are expected.
- Stays ahead of changes in their own specialist area and seeks out learning opportunities to ensure knowledge is up to date.

Nice to Have:
- Experience developing with containers and the AWS cloud stack (S3, SQS, Redshift, Lambda, etc.) is a big plus.
- Ability to demonstrate DevOps techniques and practices like continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable the rapid delivery of working code, utilizing tools like Jenkins, CloudBees, Git, etc.
Posted 1 month ago
6.0 - 11.0 years
15 - 20 Lacs
Hyderabad
Work from Office
DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The impact you will have in this role: As a member of the IT Architecture and Enterprise Services team, you will be responsible for ensuring that all applications and systems meet defined quality standards. You will develop, conduct, and evaluate testing processes, working closely with developers to remediate identified system defects. You will have in-depth knowledge of automated testing tools and of quality control and assurance approaches, including the creation of a reusable foundational test automation framework for the entire organization. You are also responsible for the validation of non-functional requirements for performance testing.

Your Primary Responsibilities:
- Develop a proven grasp of the DTCC Software Delivery process, the Performance Test Engineering framework, and the CoE (Center of Excellence) engagement process, as well as an understanding of the tech stack (tools and technologies), to perform the day-to-day job.
- Prepare, maintain, and implement performance test scenarios based on non-functional requirements, following standard DTCC testing guidelines.
- Automate performance test scenarios using current automated functional and performance test scripts, in compliance with the non-functional framework.
- Maintain traceability across non-functional requirements, performance test scenarios, and defects.
- Review performance test scenarios from both a technical and business perspective with collaborators, such as development teams and the business.
- Track defects to closure, report test results, continuously supervise execution achievements, and escalate as required.
- Contribute to the technical aspects of Delivery Pipeline adoption for performance testing and improve adoption.
- Identify environmental and data requirements; collaborate with Development and Testing teams to manage and maintain environments and data.
- Provide mentorship to team members
related to performance test coverage, performance test scenarios, and non-functional requirements.
- Develop a basic understanding of the product being delivered, including architecture considerations and the technical design of the supported applications in relation to performance and scalability.
- Work with Development and Architecture teams to find opportunities to improve test coverage; share suggestions for performance improvements.
- Align risk and control processes into day-to-day responsibilities to supervise and mitigate risk; escalate appropriately.

Note: Responsibilities of this role are not limited to the details above.

Qualifications:
- Minimum of 6 years of related experience
- Bachelor's degree and/or equivalent experience

Talents Needed for Success:
- Expertise in Linux/Unix and shell scripting.
- Experience using JMS/IBM MQ messaging systems and administering MQ systems.
- Experience with multi-technology, end-to-end testing (distributed and mainframe systems).
- Experience in performance engineering (analysis, testing, and tuning).
- Experience developing n-tier, J2EE software applications.
- Experience working in Unix or Linux environments.
- Expertise in performance analysis of distributed platforms, including Linux, Windows, AWS, containers, and VMware (tools: Dynatrace, AppDynamics, Splunk, CloudWatch, TeamQuest).
- Extensive knowledge of the functionality and performance aspects of the above computing platforms.
- Experience in sophisticated statistical and analytical modeling.
- Excellent analytical skills, including: data exploration, analysis, and presentation applying descriptive statistics and graphical techniques; key performance and volume metrics relationship modeling.
- Understanding of queuing-network and simulation modeling concepts, and experience with one of the industry-standard analytic modeling tools: TeamQuest, Metron-Athene, HyPerformix, or BMC.
- Understanding of RESTful web services, JSON, and XML.
- Experience in relational databases, preferably
Oracle.
- Experience with AWS services (Kinesis, Elastic Beanstalk, CloudWatch, Lambda, etc.).
- Experience with CI/CD pipeline implementations, including testing, using Jenkins or a similar tool.
- Expert MS Office skills, including effective use of Excel statistical functions and sophisticated PowerPoint presentation skills.
- Experience working with Agile teams (preferably Scrum).
- Experience in the financial services industry is good to have.
Posted 1 month ago
7.0 - 12.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Paid time off and personal/family care leave, and other leaves of absence when needed, support your physical, financial, and emotional well-being. DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (Tuesdays, Wednesdays, and a day unique to each team or employee).

The Impact you will have in this role: The Development family is responsible for creating, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth domain expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions.

The Software Engineering role specializes in planning, detailing technical requirements, designing, developing, and testing all software systems and applications for the firm. It works closely with architects, product managers, project management, and end users in the development and improvement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.

What You'll Do:
- Lead needed technical processes and designs considering reliability, data integrity, maintainability, reuse, extensibility, usability, and scalability.
- Collaborate with Infrastructure partners to identify and deploy optimal hosting environments.
- Define scalability and performance criteria for assigned applications.
- Ensure applications meet performance, privacy, and security requirements.
- Tune application performance to eliminate and reduce issues.
- Verify test plans to ensure compliance with performance and security requirements.
- Support business and technical presentations in relation to technology platforms and business solutions.
- Help develop solutions that balance cost and delivery while meeting business requirements.
- Implement technology-specific best practices that are consistent with corporate standards.
- Partner with multi-functional teams to ensure the success of product strategy and project work.
- Manage the software development process.
- Drive new technical and business process improvements.
- Estimate total costs of modules/projects covering both hours and expenses.
- Research and evaluate specific technologies and applications, and contribute to the solution design.
- Construct application architecture encompassing end-to-end designs.
- Mitigate risk by following established procedures and monitoring controls, spotting key errors, and demonstrating strong ethical behavior.
- Troubleshoot and debug system components to resolve technical issues during critical production incidents.
- Partner with application support engineers to resolve critical production issues by participating in major incident calls and driving root cause analysis.
- Participate in Disaster Recovery / Loss of Region events (planned and unplanned), performing tasks and collecting evidence.

Qualification: Bachelor's degree preferred or equivalent work experience.

Talents Needed for Success:
- 10+ years of active development experience/expertise in Java/J2EE-based applications, with proven ability in Hibernate, Spring, and Spring MVC.
- Solid experience with Angular front ends.
- Experience using Node.js and NPM.
- Experience in web-based UI development and SPA development.
- Experience with CSS, HTML, JavaScript, and similar UI frameworks (jQuery, React, Angular).
- Exposure to XML/XSD, JSON, and similar data presentation components.
- Familiarity with microservices-based architecture and distributed systems.
- Ability to develop and work with REST APIs using the Spring Boot framework.
- Experience with CI/CD technologies like Git, Jenkins, JaCoCo, and Maven.
- Strong database and PL/SQL skills (Postgres preferred).
- Familiarity with Agile development methodology.
- Monitoring and data tools experience (Splunk, Dynatrace).
- Cloud technologies: AWS services (S3, EC2, Lambda, SQS, IAM roles), Azure, OpenShift, RDS Aurora, Postgres.
- Good understanding of software design patterns and of event-driven architectures.
- Good understanding of testing methodologies and strategies.
- Good understanding of testing frameworks such as JUnit and TestNG, and automation frameworks such as Selenium.
- Strong problem-solving skills with the ability to think creatively.

Nice to have:
- Serves as a trusted coach or mentor within the organization.
- Communicates openly, keeping everyone across the organization informed.
Posted 1 month ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role: Data Engineer - 1 (Experience: 0-2 years)

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team: DEX is a central data org for Kotak Bank which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering, and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions.
The primary skills this team should encompass are Software development skills preferably Python for platform building on AWS; Data engineering Spark (pyspark, sparksql, scala) for ETL development, Advanced SQL and Data modelling for Analytics.The org size is expected to be around 100+ member team primarily based out of Bangalore comprising of ~10 sub teams independently driving their charter.As a member of this team, you get opportunity to learn fintech space which is most sought-after domain in current world, be a early member in digital transformation journey of Kotak, learn and leverage technology to build complex data data platform solutions including, real time, micro batch, batch and analytics solutions in a programmatic way and also be futuristic to build systems which can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals: Data Platform This Vertical is responsible for building data platform which includes optimized storage for entire bank and building centralized data lake, managed compute and orchestrations framework including concepts of serverless data solutions, managing central data warehouse for extremely high concurrency use cases, building connectors for different sources, building customer feature repository, build cost optimization solutions like EMR optimizers, perform automations and build observability capabilities for Kotak"s data platform. The team will also be center for Data Engineering excellence driving trainings and knowledge sharing sessions with large data consumer base within Kotak. Data Engineering This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumptions for 30+ data analytics products. The team will learn and built data models in a config based and programmatic and think big to build one of the most leveraged data model for financial orgs. 
This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data build by this team will be consumed by 20K + branch consumers, RMs, Branch Managers and all analytics usecases. Data Governance The team will be central data governance team for Kotak bank managing Metadata platforms, Data Privacy, Data Security, Data Stewardship and Data Quality platform.If you"ve right data skills and are ready for building data lake solutions from scratch for high concurrency systems involving multiple systems then this is the team for you. You day to day role will include Drive business decisions with technical input and lead the team. Design, implement, and support an data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. 
BASIC QUALIFICATIONS for Data Engineer/ SDE in Data Bachelor's degree in Computer Science, Engineering, or a related field Experience in data engineering Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR Experience with data pipeline tools such as Airflow and Spark Experience with data modeling and data quality best practices Excellent problem-solving and analytical skills Strong communication and teamwork skills Experience in at least one modern scripting or programming language, such as Python, Java, or Scala Strong advanced SQL skills PREFERRED QUALIFICATIONS AWS cloud technologiesRedshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow Prior experience in Indian Banking segment and/or Fintech is desired. Experience with Non-relational databases and data stores Building and operating highly available, distributed data processing systems for large datasets Professional software engineering and best practices for the full software development life cycle Designing, developing, and implementing different types of data warehousing layers Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions Building scalable data infrastructure and understanding distributed systems concepts SQL, ETL, and data modelling Ensuring the accuracy and availability of data to customers Proficient in at least one scripting or programming language for handling large volume data processing Strong presentation and communications skills.
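The "config-based and programmatic" data-modelling approach this posting describes can be sketched in a few lines of plain Python: each dataset is driven by a declarative config rather than hand-written per-table code. This is only an illustration; the dataset name, rename rules, and field names below are invented, not an actual bank schema.

```python
# Minimal sketch of a config-driven ETL transform. Each dataset is described
# by a declarative config (rename map + required columns) instead of
# bespoke per-table code. All names here are hypothetical examples.

RENAMES_CONFIG = {
    "accounts": {
        "rename": {"acct_no": "account_id", "bal": "balance"},
        "required": ["account_id", "balance"],
    }
}

def transform(dataset: str, rows: list, config: dict = RENAMES_CONFIG) -> list:
    """Apply the dataset's rename rules and drop rows missing required fields."""
    spec = config[dataset]
    out = []
    for row in rows:
        # Rename columns according to the config; unknown columns pass through.
        mapped = {spec["rename"].get(k, k): v for k, v in row.items()}
        # Keep only rows where every required field is present and non-null.
        if all(mapped.get(col) is not None for col in spec["required"]):
            out.append(mapped)
    return out

raw = [{"acct_no": "A1", "bal": 100}, {"acct_no": "A2", "bal": None}]
clean = transform("accounts", raw)
```

Adding a new dataset then means adding a config entry, not new code, which is what makes this style scale to thousands of pipelines.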
Posted 1 month ago
14.0 - 20.0 years
15 - 20 Lacs
Pune
Hybrid
So, what’s the role all about?
We are looking for a highly skilled and motivated Site Reliability Engineering (SRE) Manager to lead a team of SREs in designing, building, and maintaining scalable, reliable, and secure infrastructure and services. You will work closely with engineering, product, and security teams to improve system performance, availability, and developer productivity through automation and best practices.

How will you make an impact?
Build server-side software using Java. Lead and mentor a team of SREs; support their career growth and ensure strong team performance. Drive initiatives to improve the availability, reliability, observability, and performance of applications and infrastructure. Establish SLOs/SLAs and implement monitoring systems, dashboards, and alerting to measure and uphold system health. Develop strategies for incident management, root cause analysis, and postmortem reporting. Build scalable automation solutions for infrastructure provisioning, deployments, and system maintenance. Collaborate with cross-functional teams to design fault-tolerant and cost-effective architectures. Promote a culture of continuous improvement and reliability-first engineering. Participate in capacity planning and infrastructure scaling. Manage on-call rotations and ensure incident-response processes are effective and well documented. Work in a fast-paced, fluid landscape while managing and prioritizing multiple responsibilities.

Have you got what it takes?
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 10+ years of overall experience in SRE/DevOps roles, with at least 2 years managing technical teams. Proficiency in at least one programming language (e.g., Python, Go, Java, C#) and experience with scripting languages (e.g., Bash, PowerShell). Deep understanding of cloud computing platforms (e.g., AWS) and the workings and reliability constraints of prominent services (e.g., EC2, ECS, Lambda, DynamoDB). Experience with infrastructure-as-code tools such as CloudFormation and Terraform. Deep understanding of CI/CD concepts and experience with CI/CD tools such as Jenkins, GitLab CI/CD, or CircleCI. Strong knowledge of containerization technologies (e.g., Docker, Kubernetes) and microservices architecture. Experience with monitoring and observability tools (e.g., Prometheus, Grafana, ELK). Working experience with the Grafana observability suite (Loki, Mimir, Tempo). Experience implementing the OpenTelemetry protocol in a microservice environment. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Experience with incident management and blameless postmortems, including driving incident-response efforts during outages and other critical incidents, resolution, and communication in a cross-functional team setup.

Good to have skills:
Hands-on experience working with large Kubernetes clusters; certification is an added plus. Administration and/or development experience with standard monitoring and automation tools such as Splunk, Datadog, PagerDuty, and Rundeck. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Certifications such as AWS Certified DevOps Engineer, Google Cloud Professional DevOps Engineer, or equivalent.
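The SLO work this role involves ultimately reduces to simple arithmetic: an availability target over a window implies a fixed downtime allowance (the error budget). A minimal sketch; the 99.9% target and 30-day window below are example values, not a prescribed policy:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means blown budget)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# Example: a 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
allowance = error_budget_minutes(0.999)
```

Teams typically alert on the burn rate of this budget rather than on raw error counts, so that a fast-burning incident pages immediately while slow burn is handled in business hours.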
Posted 1 month ago
9.0 - 14.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About our team
DEX is the central data org for Kotak Bank and manages the bank's entire data experience. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution.

The primary skills this team needs are software development, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member of Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and look ahead to building systems that can be operated by machines using AI technologies.

The data platform org is divided into three key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank, a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost-optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data-consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, and branch managers, and by all analytics use cases.

Data Governance: This will be the central data governance team for Kotak Bank, managing the metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include: driving business decisions with technical input and leading the team; designing, implementing, and supporting a data infrastructure from scratch; managing AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA; extracting, transforming, and loading data from various sources using SQL and AWS big data technologies; exploring and learning the latest AWS technologies to enhance capabilities and efficiency; collaborating with data scientists and BI engineers to adopt best practices in reporting and analysis; improving ongoing reporting and analysis processes, automating or simplifying self-service support for customers; and building data platforms, data pipelines, and data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Chennai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Key Responsibilities:
1. Data Pipeline Design & Development: Design and develop scalable, resilient, and secure ETL/ELT data pipelines using AWS services. Build and optimize data workflows leveraging AWS Glue, EMR, Lambda, and Step Functions. Implement batch and real-time data ingestion using Kafka, Kinesis, or AWS Data Streams. Ensure efficient data movement across S3, Redshift, DynamoDB, RDS, and Snowflake.
2. Cloud Data Engineering & Storage: Architect and manage data lakes and data warehouses using Amazon S3, Redshift, and Athena. Optimize data storage and retrieval using Parquet, ORC, Avro, and columnar storage formats. Implement data partitioning, indexing, and query performance tuning. Work with NoSQL databases (DynamoDB, MongoDB) and relational databases (PostgreSQL, MySQL, Aurora).
3. Infrastructure as Code (IaC) & Automation: Deploy and manage AWS data infrastructure using Terraform, AWS CloudFormation, or AWS CDK. Implement CI/CD pipelines for automated data pipeline deployments using GitHub Actions, Jenkins, or AWS CodePipeline. Automate data workflows and job orchestration using Apache Airflow, AWS Step Functions, or MWAA.
4. Performance Optimization & Monitoring: Optimize Spark, Hive, and Presto queries for performance and cost efficiency. Implement auto-scaling strategies for AWS EMR clusters. Set up monitoring, logging, and alerting with AWS CloudWatch, CloudTrail, and Prometheus/Grafana.
5. Security, Compliance & Governance: Implement IAM policies, encryption (AWS KMS), and role-based access controls. Ensure compliance with GDPR, HIPAA, and industry data governance standards. Monitor data pipelines for security vulnerabilities and unauthorized access.
6. Collaboration & Stakeholder Engagement: Work closely with data analysts, data scientists, and business teams to understand data needs. Document data pipeline designs, architecture decisions, and best practices. Mentor and guide junior data engineers on AWS best practices and optimization techniques.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset: a true data alchemist. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms.
Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Skills and Experience
7+ years of experience in data engineering with a focus on AWS cloud technologies. Expertise in AWS Glue, Lambda, EMR, Redshift, Kinesis, and Step Functions. Proficiency in SQL, Python, Java, and PySpark for data transformations. Strong understanding of ETL/ELT best practices and data warehousing concepts. Experience with Apache Airflow or Step Functions for orchestration. Familiarity with Kafka, Kinesis, or other streaming platforms. Knowledge of Terraform, CloudFormation, and DevOps for AWS. Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes. Experience with data pipeline development and tooling, such as Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously while maintaining attention to detail. Communication skills: the ability to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems.

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
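The data-partitioning practice this role calls for is commonly implemented as Hive-style key prefixes in S3, which engines such as Athena and Glue can prune so queries only scan the partitions they need. A rough sketch of generating such a layout; the bucket and table names are placeholders:

```python
from datetime import date

def partition_key(table: str, run_date: date, fmt: str = "parquet") -> str:
    """Build a Hive-style partitioned S3 key for a daily dataset drop.

    Date-based year=/month=/day= prefixes let query engines prune
    unneeded partitions, cutting both latency and scan cost.
    The bucket name 'example-data-lake' is a placeholder.
    """
    return (
        f"s3://example-data-lake/{table}/"
        f"year={run_date.year}/month={run_date.month:02d}/day={run_date.day:02d}/"
        f"data.{fmt}"
    )

key = partition_key("orders", date(2024, 3, 7))
```

In practice the same layout is registered as partition columns in the Glue Data Catalog so that `WHERE year = 2024 AND month = 3` in Athena only touches the matching prefixes.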
Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 month ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role: Data Engineer-1 (Experience 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by taking a technology-first approach in everything we do, with the aim of enhancing customer experience through superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank and manages the bank's entire data experience. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution.

The primary skills this team needs are software development, preferably Python, for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to grow to a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member of Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and look ahead to building systems that can be operated by machines using AI technologies.

The data platform org is divided into three key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank, a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), a central data warehouse for extremely high-concurrency use cases, connectors for different sources, a customer feature repository, cost-optimization solutions such as EMR optimizers, automation, and observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data-consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based, programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, and branch managers, and by all analytics use cases.

Data Governance: This will be the central data governance team for Kotak Bank, managing the metadata platforms and the Data Privacy, Data Security, Data Stewardship, and Data Quality platforms.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include: driving business decisions with technical input and leading the team; designing, implementing, and supporting a data infrastructure from scratch; managing AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA; extracting, transforming, and loading data from various sources using SQL and AWS big data technologies; exploring and learning the latest AWS technologies to enhance capabilities and efficiency; collaborating with data scientists and BI engineers to adopt best practices in reporting and analysis; improving ongoing reporting and analysis processes, automating or simplifying self-service support for customers; and building data platforms, data pipelines, and data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
Posted 1 month ago
7.0 - 12.0 years
12 - 17 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Project description
We're seeking a Senior React Developer with strong experience in TypeScript to build and maintain high-quality, performant user interfaces. The ideal candidate is passionate about clean code, UI/UX best practices, and collaborating in a modern, agile development environment. Experience with Node.js is a plus.

Responsibilities
Develop Scalable UIs: Build responsive, accessible, and maintainable web interfaces using React and TypeScript.
Component Architecture: Design and implement reusable, modular components that follow best practices.
State Management: Manage complex application state with tools like Redux, MobX, or Context API.
API Integration: Collaborate with backend teams to consume RESTful and/or GraphQL APIs.
Performance Optimization: Profile and tune components to ensure optimal performance across devices and browsers.
Testing & Quality: Write and maintain unit/integration tests using Jest, React Testing Library, or similar tools.
Cross-functional Collaboration: Work closely with designers, product managers, and fellow developers in an agile environment.
Version Control: Use Git effectively in collaborative workflows (e.g., GitHub Flow).
AI Tools (Optional): Leverage AI-assisted development tools like GitHub Copilot to improve productivity and code quality.

Skills
Must have
7+ years of professional experience.
React.js Expertise: Deep understanding of React's core concepts (hooks, lifecycle, reconciliation).
TypeScript & JavaScript: Proficiency in modern JavaScript (ES6+) and strong TypeScript typing practices.
HTML/CSS Mastery: Ability to craft responsive, semantic, and accessible front-end code.
State Libraries: Experience with Redux, MobX, Zustand, or similar state management tools.
Version Control: Strong command of Git, branching strategies, and pull request best practices.
Testing: Experience with frontend testing tools such as Jest, Enzyme, or React Testing Library.
Build Tools: Familiarity with Webpack, Vite, Babel, or other front-end tooling systems.
UI/UX Awareness: Understanding of usability principles and pixel-perfect implementation of designs.
Problem-Solving: Strong debugging skills and the ability to propose practical solutions.

Nice to have
Node.js: Experience building or integrating with Node.js APIs or services.
AWS: Familiarity with AWS services (e.g., S3, EC2, ECS, R53, Lambda, CloudFront).
CI/CD Pipelines: Exposure to modern deployment practices and automation tools.
GraphQL: Familiarity with GraphQL clients (e.g., Apollo Client).
Design Systems: Experience working with component libraries or design systems (e.g., MUI, Chakra UI, Storybook).
Posted 1 month ago
8.0 - 10.0 years
13 - 18 Lacs
Chandigarh
Work from Office
Job Description: Full-stack Architect (8-10 years of experience)
Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools. Lead microservice architecture design, ensuring system scalability, reliability, and performance. Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions. Provide technical leadership to engineering teams across all layers (frontend, backend, database). Guide and review code, perform performance optimization, and define coding standards. Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch). Translate business needs into technical solutions and communicate with cross-functional stakeholders.
Posted 1 month ago
5.0 - 10.0 years
18 - 22 Lacs
Chennai
Work from Office
Project description
We have an ambitious goal to migrate a legacy system written in HLASM (High-Level Assembler) from the mainframe to a cloud-based Java environment for one of the largest banks in the USA.

Responsibilities
We are looking for an experienced Senior DevOps engineer who can: design, implement, and maintain scalable cloud infrastructure on AWS, leveraging services like EKS, Lambda, and S3; develop and manage infrastructure as code using Terraform for efficient environment management; build and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions; manage Kubernetes clusters, including configuration, deployment, and monitoring (ArgoCD and Airflow expertise preferred); support and optimize Java-based applications (vanilla Java, Spring Boot) throughout the build, package, and deployment lifecycle; ensure database reliability and performance through PostgreSQL management and optimization; and collaborate with cross-functional teams to troubleshoot and resolve infrastructure and deployment challenges. Working from the DXC office 5 days per week is mandatory.
Skills
Must have:
- 5+ years of experience in Cloud Engineering or related roles.
- 3+ years of hands-on production experience with AWS, including services like EKS, Lambda, S3, block storage, and networking (mandatory).
- Strong proficiency in Terraform for managing cloud environments and infrastructure.
- Proven experience designing and implementing CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, or similar technologies.
- Solid understanding of Kubernetes, including experience with ArgoCD and Airflow.
- Proficiency in Java, including vanilla Java and Spring Boot (ability to build, package, deploy, and debug).
- Experience with PostgreSQL, including database management and optimization.
- Strong team player with a problem-solving mindset, adept at cross-functional collaboration.
Nice to have:
- Familiarity with developing AI-powered tools to automate and optimize DevOps tasks.
- Experience in cloud migration and modernization projects.
- Experience implementing platform engineering practices.
Posted 1 month ago
7.0 - 12.0 years
19 - 22 Lacs
Bengaluru
Work from Office
Project description
Luxoft is one of the leading service providers for banking and capital markets customers. Luxoft has been engaged by a large Australian bank to provide L1/L2 application monitoring and production support services for business-critical applications and interfaces, on a 24/5 managed-outcome basis, in the Global Markets business area. We are looking for motivated individuals who have the relevant skills and experience and are willing to work in shifts.

Responsibilities
- Develop and maintain Unix shell scripts for automation tasks.
- Write and optimize Python scripts for process automation and data handling.
- Design, implement, and maintain scalable cloud infrastructure using AWS services (EC2, S3, Lambda, etc.).
- Monitor and troubleshoot cloud environments for optimal performance.
- Monitor and optimize system resources; automate routine administrative and BAU tasks.
- Monitor the production environment and resolve issues.
- Control SLAs and notify management or the client in case of unexpected behavior.
- Support end-to-end data flows and health and sanity checks of systems and applications.
- Escalate environment and application health issues internally to the group lead/PM.
- Review logs and perform data discovery in database tables to investigate workflow failures.
- Investigate and supply analysis to fix application/configuration issues in the production environment.
- Contact and follow up with responsible support, upstream, downstream, and cross-functional teams, requesting root cause analysis for issues preventing end-to-end flows from working as designed.
- Provide regular updates on issue status until resolution, notifying the client of status changes and the expected time to resolve.
- Participate in ad-hoc and regular status calls with the client on application health to discuss critical defects and health-check status.
- Work on business users' service requests, including investigation of business logic and application behavior.
- Work with different data format transformation processes (XML, pipeline).
- Work with source control tools (Git/SVN) to investigate configuration or data-transformation issues.
- Work with middleware and schedulers on data flow and batch process control.
- Focus on continuous proactive service improvement and continuous learning.
- Ensure customer service excellence and guaranteed response within SLA timelines by actively monitoring support emails/tickets and working on them until the issue is fully remediated.
- Ensure all incident tickets are resolved in a timely and comprehensive manner.
- Track and identify frequently occurring, high-impact support issues as candidates for permanent resolution.

Education
Bachelor's degree from a reputed university with good passing scores.

Skills
Must have:
- 7 to 12 years in L2/L3 production support or site reliability engineering, with strong knowledge of Unix shell scripting.
- Experience developing and maintaining Unix shell scripts for automation tasks.
- Ability to write and optimize Python or shell scripts for process automation and data handling; good knowledge of any scripting language is acceptable.
- Basic knowledge of AWS services (EC2, S3, etc.).
- Experience monitoring and optimizing system resources and automating routine administrative and BAU tasks.
- Good understanding of the incident/change/problem management process.

Required skills:
- Strong experience with Unix shell scripting.
- Proficiency in Python scripting for automation, with hands-on automation experience in any scripting language.
- Strong knowledge of databases.
- Basic understanding of AWS services and the cloud, with experience supporting cloud applications.
- Ability to troubleshoot and resolve technical issues in a production environment.

Nice to have:
- Experience with containers (Docker, Kubernetes).
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Knowledge of infrastructure-as-code tools like Terraform.
- Strong problem-solving and communication skills.
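The log review and workflow-failure investigation described in this role is often automated with small scripts. Below is a hedged Python sketch of such a helper; the log format, component names, and sample lines are illustrative assumptions, not taken from any real system.

```python
import re
from collections import Counter

# Matches common severity tokens in application log lines.
ERROR_RE = re.compile(r"\b(ERROR|FATAL)\b")

def summarize_errors(lines):
    """Count ERROR/FATAL occurrences per component.

    Assumes lines shaped like '2024-01-01 09:00:05 ERROR TradeFeed - msg',
    where the component name is the first token after the severity level.
    """
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            rest = line[m.end():].split()
            component = rest[0] if rest else "unknown"
            counts[component] += 1
    return counts

# Hypothetical sample log lines for a quick smoke test.
sample = [
    "2024-01-01 09:00:01 INFO  TradeFeed - started",
    "2024-01-01 09:00:05 ERROR TradeFeed - connection reset",
    "2024-01-01 09:00:09 ERROR TradeFeed - connection reset",
    "2024-01-01 09:00:12 FATAL RiskCalc - out of memory",
]
print(summarize_errors(sample))
```

A summary like this makes it easy to spot frequently occurring, high-impact issues — the same candidates for permanent resolution the posting mentions.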
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Chennai
Work from Office
About the Team:
We are a motivated team in central R&D at CVS helping to change the game through product digitalization and vehicle intelligence. Our focus is on building solutions for truck, bus, and trailer OEMs, considering both onboard and offboard (SaaS & PaaS) needs and requirements.

Purpose:
- Connect the vehicle
- (Cyber)secure the vehicle
- Master the vehicle architecture
- Diagnose the vehicle
- Gain intelligence from the vehicle

What you can look forward to as Fullstack Developer:
- Design, develop, and deploy scalable applications using AWS serverless (Lambda, API Gateway, DynamoDB, etc.) and container technologies (ECS, EKS, Fargate).
- Build and maintain RESTful APIs and microservices architectures in .NET Core (Entity Framework).
- Write clean, maintainable code in Node.js, JavaScript, C#, React JS, or React Native.
- Work with both SQL and NoSQL databases to design efficient data models.
- Apply Object-Oriented Analysis (OOA) and Object-Oriented Design (OOD) principles in software development.
- Utilize multi-threading and messaging patterns to build robust distributed systems.
- Collaborate using Git and follow Agile methodologies and Lean principles.
- Participate in code reviews and architecture discussions, and contribute to continuous improvement.

Your profile as Tech Lead:
- Bachelor's or Master's degree in Computer Science or a related field.
- At least 6 years of hands-on software development experience.
- Strong understanding of AWS cloud hosting technologies and best practices.
- Proficiency in at least one of the following: Node.js, JavaScript, C#, React (JS/Native).
- Experience with REST APIs, microservices, and cloud-native application development.
- Familiarity with design patterns, messaging systems, and distributed architectures.
- Strong problem-solving skills and a passion for optimizing business solutions.

Why should you choose ZF Group in India?
- Innovation and Technology Leadership: ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth.
- Diverse and Inclusive Culture: ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support.
- Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs, career development opportunities, and a clear path for advancement.
- Global Presence: As part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide.
- Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards eco-friendly solutions and reducing its carbon footprint.
- Flexible work arrangements and a supportive work-life balance.
Posted 1 month ago
10.0 - 12.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Your Impact:
The Lead Site Reliability Engineer (SRE) will be responsible for ensuring the availability, reliability, and scalability of cloud infrastructure and services. This role focuses on automation, performance optimization, incident response, and CI/CD pipeline management to support highly available and resilient applications. The ideal candidate will bring deep expertise in AWS, Kubernetes, GitLab CI/CD, and Infrastructure as Code (IaC).

What The Role Offers:
- Architect, deploy, and maintain highly available and scalable cloud environments in AWS.
- Design and manage Kubernetes clusters (EKS) and containerized applications with Docker.
- Implement auto-scaling, load balancing, and fault tolerance for cloud services.
- Develop and optimize Infrastructure as Code (IaC) using Terraform, OpenTofu, or Ansible.
- Design, implement, and maintain CI/CD pipelines using GitLab CI/CD and ArgoCD.
- Automate deployment workflows, infrastructure provisioning, and release management.
- Ensure secure, compliant, and automated software delivery across multiple environments.
- Implement observability and monitoring using tools like CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Analyze system performance, detect anomalies, and optimize cloud resource utilization.
- Drive incident response and root cause analysis, ensuring fast recovery (low MTTR) and minimal downtime.
- Establish Service Level Objectives (SLOs) and error budgets to maintain system health.
- Implement security best practices, including IAM policies, encryption, network security, and vulnerability scanning.
- Automate patch management and security updates for cloud infrastructure.
- Ensure compliance with industry standards and regulations (SOC 2, ISO 27001, HIPAA, etc.).
- Work closely with DevOps, security, and development teams to drive reliability best practices.
- Lead blameless postmortems and continuously improve operational processes.
- Provide mentorship and training to junior engineers on SRE principles and cloud best practices.
- Participate in on-call rotations, ensuring 24/7 reliability of production services.

What You Need To Succeed:
- Bachelor's degree in Computer Science, Engineering, or equivalent experience.
- 10-12 years of experience in Site Reliability Engineering (SRE), DevOps, or Cloud Engineering.
- Expertise in AWS Cloud: hands-on experience with EC2, VPC, RDS, S3, IAM, Lambda, and EKS.
- Strong Kubernetes knowledge: hands-on experience with EKS, Helm charts, and cluster management.
- CI/CD experience: proficiency in GitLab CI/CD and ArgoCD for automated software deployments.
- Infrastructure as Code (IaC): experience with Terraform or OpenTofu.
- Monitoring & Logging: familiarity with CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Scripting & Automation: proficiency in Python, shell scripting, or Golang.
- Incident Management & Reliability Practices: experience with SLOs, SLIs, error budgets, and chaos engineering.
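The SLOs and error budgets this role mentions reduce to simple arithmetic. As a hedged sketch (assuming an availability SLO measured over a rolling 30-day window; function names are illustrative):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Downtime allowed by an availability SLO over the window.

    Example: a 99.9% SLO over 30 days permits (1 - 0.999) * 43200
    minutes, i.e. 43.2 minutes of downtime.
    """
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(error_budget_minutes(0.999))    # about 43.2 minutes
print(budget_remaining(0.999, 10.0))  # roughly 0.77 of the budget left
```

In practice teams gate risky releases on the remaining budget: when it approaches zero, the focus shifts from feature work to reliability.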
Posted 1 month ago