Jobs
Interviews

1875 SQS Jobs - Page 30

Set up a Job Alert
JobPe aggregates these listings for easy access; you apply directly on the original job portal.

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities
Develop roadmaps for system and product growth; ensure timely execution and quality delivery
Estimate engineering effort during multiple stages of the product life cycle
Collaborate with the engineering teams in accomplishing architecture, design, and implementation goals
Focus on the details of software development, design, implementation, and debugging
Have high technical competence and a strong technical background, with a track record of individual technical accomplishments
Ability to play the role of the architect for the team
Strong sense of ownership, a can-do attitude, and high attention to detail
Work with designers, business analysts, and product managers to estimate and plan projects in an Agile environment

Skills & Experience
Hands-on experience in developing, designing, and scaling complex systems
Backend: primarily Node.js
Databases: primarily MySQL, MongoDB, and Redis (cache)
Web: React
Strong experience in REST API based microservices development and integration, including long-running orchestration services
Preferred AWS services experience across S3, EC2, AWS Lambda, RDS, Route 53, SQS, CloudFront, CloudFormation, etc.
Experience with continuous integration and deployment automation tools such as Jenkins, Ansible, Travis CI, etc.
Experience with GraphQL frameworks
Writing API test cases and technical documentation
Experience creating public-facing APIs and integrating 3rd-party services
The ideal candidate should be willing to commit fully to the company and drive the company forward.

Skills: CloudFront, Node.js, React, GraphQL, CloudFormation, Route 53, REST API, Jenkins, Travis CI, MongoDB, Ansible, S3, RDS, Redis, microservices, AWS, EC2, SQS, MySQL, AWS Lambda

Posted 1 month ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Verto
At Verto, we're on a mission to democratise global finance and empower businesses in Emerging Markets to reach the world. Founded by British-Nigerian entrepreneurs Ola Oyetayo and Anthony Oduu, our roots in Africa provided a first-hand understanding of the significant challenges businesses face with cross-border payments, from illiquid currencies and high fees to slow transactions. This deep-rooted insight is why Africa remains a core focus, as we're committed to bridging the gap between emerging and developed markets and fostering global economic growth. What started as an FX solution for the Nigerian Naira has evolved into a market-leading platform, enabling thousands of businesses to seamlessly transfer billions of dollars annually. We believe that where you do business shouldn’t determine your success or ability to scale. We're creating equal access to the easy payment and liquidity solutions that are already a given in developed markets. We're not alone in realising this crucial need; we're backed by world-class investors including Y-Combinator, Quona, and MEVP. Our impact has been recognised with accolades such as 'Fintech Start-Up of the Year' and the Milken-Motsepe Prize, a testament to our role in powering payments for some of the world's most disruptive startups. Join us as we continue to grow and transform global finance.

Role Overview
This role significantly impacts Verto by driving active development, designing robust RESTful APIs, and building highly scalable web applications and cloud services. The work directly contributes to the company's system architecture and fosters innovation through prototyping new ideas.

About The Role
We are seeking a talented and motivated Fullstack Engineer to join our growing team.

What You’ll Be Doing
Engaging in active development assignments
Designing and implementing RESTful APIs
Collaborating with a team to develop and test highly scalable web applications and services
Contributing to the overall system architecture
Prototyping and implementing new ideas

What You Need
2+ years of professional experience in object-oriented languages
Significant experience with NodeJS
Knowledge of modern web application building libraries like Angular
Working knowledge of developing scalable distributed cloud applications on AWS or other cloud platforms
Strong skills in MySQL and relational databases
Clear communication and articulation skills

Best If You Have
Experience with the MEAN stack
Experience with the fintech industry
Worked in an early-stage or growing-stage start-up
Majority experience with product-based companies
Awareness of modern trends in distributed software application development
Experience with various development tools, including AWS CodeBuild, git, npm, Visual Studio Code, the Serverless Framework, Swagger specs, Angular, Flutter, AWS Lambda, MongoDB, Redis, SQS, and Kafka

Culture at Verto
We’re a community of folks who care about their craft, collaborate with purpose, and enjoy the journey together.

General Perks
Health & Life insurance, flexible work schedules, generous leave policy

Additional Perks
Gym membership, free lunch, car lease policy and a professional development budget

You’ll Fit Right In If You
Love asking “why?”
Value solving problems over just completing tasks
Understand sync vs. async communication practices
Thrive in ambiguity and change
Actively seek feedback
Prioritise impact over activity
Are fun to work with - we love good humour!
About The Interview Process
It will include (in no strict order) a chat with the talent team, an online assessment round, and two interview rounds (technical and culture).

Posted 1 month ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

About The Role:
We are seeking a highly skilled and experienced Senior Software Engineer (Java/AWS) to join our dynamic engineering team. This role demands a strong foundation in backend development with hands-on expertise in designing and deploying scalable, distributed systems on AWS. The ideal candidate is someone who thrives in a fast-paced environment, brings a deep understanding of microservices architecture, and is passionate about leveraging cloud-native technologies to build high-performance applications.

Key Responsibilities:
- Design, develop, and maintain robust and scalable Java-based backend services using Java 8/11/17 and Spring Boot.
- Architect and implement microservices adhering to best practices in fault tolerance, observability, and API design.
- Integrate and manage messaging systems such as Kafka or Apache Camel to support asynchronous communication across services.
- Develop and deploy cloud-native solutions on AWS, leveraging services such as EC2, ECS, S3, SQS, SNS, Lambda, DynamoDB, and CloudFormation.
- Optimize application performance and ensure high availability and fault tolerance in cloud environments.
- Collaborate with cross-functional teams including DevOps, QA, and Product Management to deliver high-quality software solutions.
- Ensure adherence to software engineering best practices including CI/CD, automated testing, and infrastructure as code.
- Provide technical mentorship to junior engineers and participate in code reviews and architectural discussions.

Required Technical Skill Set:
- Core Backend Development: Java (8, 11, or 17), Spring Boot, REST APIs
- Architecture: Microservices, Event-Driven Architecture
- Messaging Platforms: Apache Kafka or Apache Camel
- Cloud Experience: Minimum 2 years of hands-on experience with AWS services, specifically:
1. Compute & Containerization: EC2, ECS
2. Storage: S3
3. Messaging: SQS, SNS
4. Compute Event Handling: AWS Lambda
- NoSQL Database: DynamoDB
- Infrastructure as Code: CloudFormation

Preferred Attributes:
- Strong problem-solving and analytical skills with a solution-oriented mindset.
- Exposure to Agile/Scrum methodologies.
- Excellent communication skills and the ability to work effectively within a team.
- Prior experience in client-facing roles or enterprise-scale systems is an advantage.

Posted 1 month ago

Apply

9.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Client:
Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: IoT L2 Support
Key Skills: AWS Microservices, Mobile/web app support
Job Location: Noida
Experience: 4 - 9 Years
Budget: Based on your experience
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 days
Interview Mode: 2 rounds of technical interview, including a client round

Job Description:
1. IoT L2 Support profile (at least 2-3 years' experience):
a. Technical Troubleshooting: Provide advanced technical support for AWS IoT services, resolving complex issues related to device connectivity, data ingestion, security, and integration with other AWS services.
b. Customer Interaction: Interact indirectly with customers to understand their technical issues, provide timely updates, and ensure customer satisfaction through effective communication and resolution of problems via JSM (Jira Service Management).
c. Incident Management: Handle escalated cases from Level 1/Level 3/business support, taking ownership of issues and driving them to resolution while adhering to defined service-level agreements (SLAs).
d. Root Cause Analysis: Perform thorough analysis of incidents, identifying root causes and implementing preventive measures to minimize recurring issues and improve service reliability.
e. Documentation and Knowledge Sharing: Document troubleshooting steps (Confluence), resolutions, and best practices for the internal knowledge base and customer-facing documentation, contributing to the overall improvement of support processes and customer experience.
f. Experience with Jira, AWS services (Lambda, CloudWatch, Kinesis Streams, SQS, IoT Core), and New Relic will be an advantage.

2. Cloud Operations (CloudOps) profile (at least 4-5 years' experience):
a. Infrastructure Management: Design, deploy, and manage cloud infrastructure solutions (AWS), ensuring scalability, reliability, and efficiency.
b. Monitoring and Incident Response: Implement and maintain monitoring, alerting, and logging solutions to ensure proactive management of cloud resources. Respond to and resolve incidents in a timely manner to minimize downtime.
c. Automation and Scripting: Develop and maintain infrastructure as code (IaC) using tools such as Terraform, CloudFormation, or Ansible. Automate routine tasks and processes to streamline operations and improve efficiency. Knowledge of Python or Node is mandatory to automate manual operations tasks (see the sketch after this description).
d. Security and Compliance: Implement and enforce security best practices, including access controls, encryption, and compliance with industry standards (e.g., WAF, Device Defender, etc.). Conduct regular security audits and vulnerability assessments.
e. Performance Optimization: Identify opportunities to optimize AWS cloud resources for cost and performance. Implement cost management strategies and recommend architectural improvements based on monitoring and analysis.
f. Collaboration and Documentation: Work closely with cross-functional teams (e.g., developers, DevOps engineers, architects) to support application deployment and troubleshooting. Maintain documentation of infrastructure configurations, procedures, and troubleshooting guides.
g. Continuous Improvement: Stay current with industry trends, emerging technologies, and best practices in cloud operations. Drive initiatives for process improvement, automation, and scalability.

Interested candidates, please share your CV to jyothi.a@people-prime.com
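The CloudOps profile above calls for Python (or Node) to automate routine operations tasks. As a minimal, illustrative sketch only (not part of the posting), the following Python snippet uses boto3 to list CloudWatch alarms currently in the ALARM state, a typical check such a role might script; the default region is an assumption.

```python
# Minimal sketch (illustrative only): list CloudWatch alarms currently in the
# ALARM state, the kind of routine operations check this role might automate.
# Assumes AWS credentials are available via the usual environment or instance role.
import boto3


def alarms_in_alarm_state(region: str = "us-east-1") -> list[str]:
    """Return the names of CloudWatch alarms whose state is ALARM."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    names: list[str] = []
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    return names


if __name__ == "__main__":
    for name in alarms_in_alarm_state():
        print(f"ALARM: {name}")
```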

Posted 1 month ago

Apply

3.0 years

0 Lacs

Dehradun, Uttarakhand, India

Remote

Job description About Yogotribe Platform: Yogotribe is building a transformative digital platform dedicated to wellness, connecting seekers with a diverse range of yoga retreats, meditation centers, Ayurveda clinics, and holistic wellness experiences. Our strategic approach involves a robust initial deployment using Odoo as the core platform. The foundational Phase 1 is already established on a scalable and secure AmistacX Odoo and AWS backend infrastructure, fully integrated and stable on Amazon EC2. This setup provides a solid foundation for all Odoo functionalities, setting the stage for future evolution towards a microservices-driven architecture. We are seeking a talented and experienced External Odoo Developer to join us on a project basis. Your primary responsibility will be to rapidly develop professional and high-quality custom Odoo modules to complete all remaining functionalities within our existing, integrated AWS ecosystem. Role Summary: As an Odoo Developer for Yogotribe, you will be responsible for the design, development, and implementation of new custom Odoo modules and enhancements within our established Odoo 17.x environment. While the AWS backend integration is already in place and stable, you will focus on building the Odoo-side functionalities that utilize these existing integrations. This is a project-based assignment focused on delivering specific functionalities. Your ability to work independently, adhere to Odoo best practices, and effectively leverage the established AWS services through Odoo will be paramount to your success. Key Responsibilities: Custom Odoo Module Development: Design, develop, and implement new Odoo modules and features using Python, Odoo ORM, QWeb, XML, and JavaScript, aligned with project requirements to complete all envisioned functionalities. Leveraging Existing AWS Integrations: Develop Odoo functionalities that seamlessly interact with our already established AWS backend, utilizing existing integrations for services such as: Data storage (AWS S3 for attachments). Eventing and messaging (AWS SQS, AWS SNS). Email services (AWS SES). Interactions with AWS Lambda for AI/ML processing (e.g., Amazon Comprehend, Rekognition). Code Quality & Best Practices: Write clean, maintainable, well-documented, and efficient code, adhering to Odoo development guidelines and industry best practices. Testing & Debugging: Conduct thorough testing of developed modules, identify and resolve bugs, and ensure module stability and performance within the integrated Odoo-AWS environment. Documentation: Create clear and concise technical documentation for developed Odoo modules, including design specifications, API usage, and deployment notes. Collaboration: Work closely with the core team to understand project requirements, provide technical insights, and deliver solutions that meet business needs. Deployment Support: Assist in the deployment and configuration of developed Odoo modules within the AWS EC2 environment. Required Skills & Experience: Odoo Development Expertise (3+ years): Strong proficiency in Python development within the Odoo framework (ORM, API, XML, QWeb). Extensive experience in developing and customizing Odoo modules (e.g., sales, CRM, accounting, website, custom models). Familiarity with Odoo 17.0 development practices is highly desirable. Solid understanding of Odoo architecture and module structure. Understanding of Odoo on AWS: Proven understanding of how Odoo operates within an AWS EC2 environment. 
Familiarity with the use of existing AWS services integrated with Odoo, particularly S3, SQS/SNS, and SES. Knowledge of AWS IAM, VPC, Security Groups, and general cloud security concepts relevant to understanding the existing Odoo deployment.
Database Proficiency: Experience with PostgreSQL, including schema design and query optimization.
Version Control: Proficient with Git for source code management.
Problem-Solving: Excellent analytical and debugging skills to troubleshoot complex Odoo functionalities within an integrated system.
Communication: Strong verbal and written communication skills for effective collaboration in a remote, project-based setting.
Independent Work Ethic: Proven ability to manage project tasks, deliver on time, and work effectively with minimal supervision.

Desirable (Bonus) Skills:
Experience with front-end technologies for Odoo website customization (HTML, CSS/Tailwind CSS, JavaScript frameworks).
Knowledge of Odoo performance optimization techniques.
Familiarity with CI/CD pipelines (e.g., AWS CodePipeline, CodeBuild, CodeDeploy) from an Odoo module deployment perspective.
Understanding of microservices architecture concepts and patterns, especially in the context of a future migration from the Odoo monolith.
Prior experience with AWS AI/ML services (e.g., Comprehend, Rekognition, Personalize, SageMaker, Lex) is a plus, specifically in how Odoo might interact with them via existing integrations.

Assignment Type & Duration:
This is a project-based assignment with clearly defined deliverables and timelines for specific Odoo module development. The initial project scope will be discussed during the interview process. The feasibility of support extension or future project engagements will be decided based on the successful outcome and quality of deliverables for the current project.

To Apply:
Please submit your resume outlining your relevant Odoo development experience to hr@yogotribe.com, and fill out the Google Form: https://docs.google.com/forms/d/e/1FAIpQLSfSIHIYvr1Vlq7a98YdMXdf_XLoZfSTi88FkCYtbtE5HLTgOQ/viewform?usp=header
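To illustrate the kind of custom Odoo module development this listing describes, here is a minimal sketch of an Odoo 17-style model. The model name, fields, and constraint are hypothetical examples for illustration, not part of the actual Yogotribe codebase.

```python
# Minimal sketch of a custom Odoo model (hypothetical names and fields), showing
# the Odoo ORM development style described above. In a real module this would ship
# alongside __manifest__.py and a security/ir.model.access.csv entry.
from odoo import api, fields, models
from odoo.exceptions import ValidationError


class WellnessRetreat(models.Model):
    _name = "yogotribe.retreat"          # hypothetical technical name
    _description = "Wellness Retreat Listing"

    name = fields.Char(required=True)
    start_date = fields.Date()
    end_date = fields.Date()
    capacity = fields.Integer(default=10)
    is_published = fields.Boolean(default=False)

    @api.constrains("start_date", "end_date")
    def _check_dates(self):
        # Simple data-integrity rule enforced by the ORM on create/write.
        for record in self:
            if record.start_date and record.end_date and record.start_date > record.end_date:
                raise ValidationError("End date must be after start date.")
```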

Posted 1 month ago

Apply

5.0 years

0 Lacs

Jodhpur, Rajasthan, India

On-site

Job Type: Full-time

Backend Requirements
5+ years of experience with Python. Hands-on experience with one or more frameworks: Flask, Django, or FastAPI. Proficiency with AWS services, including Lambda, S3, SQS, and CloudFormation. Experience with relational databases such as PostgreSQL or MySQL. Familiarity with testing frameworks like Pytest or Nose. Expertise in REST API development and JWT authentication. Proficiency with version control tools such as Git.

Frontend Requirements
3+ years of experience with ReactJS. Thorough understanding of ReactJS and its core principles. Experience with state management tools like Redux Thunk, Redux Saga, or Context API. Familiarity with RESTful APIs and modern front-end build pipelines and tools. Proficient in HTML5, CSS3, and pre-processing platforms like SASS/LESS. Experience with modern authorization mechanisms, such as JSON Web Tokens (JWT). Familiarity with front-end testing libraries like Cypress, Jest, or React Testing Library. Experience in developing shared component libraries is a plus. (ref:hirist.tech)
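As a rough illustration of the "REST API development and JWT authentication" requirement above, the sketch below shows a Flask endpoint guarded by a PyJWT check. The secret key, route, and claim names are placeholders; a real service would issue tokens from a proper login flow and load secrets from configuration.

```python
# Minimal sketch: a Flask endpoint protected by a JWT check using PyJWT.
# SECRET_KEY and the route are placeholders, not values from the listing.
from functools import wraps

import jwt
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET_KEY = "change-me"  # placeholder; load from configuration in practice


def require_jwt(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        token = auth.removeprefix("Bearer ").strip()
        try:
            claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        except jwt.PyJWTError:
            return jsonify({"error": "invalid or missing token"}), 401
        return view(claims, *args, **kwargs)
    return wrapper


@app.get("/api/v1/profile")
@require_jwt
def profile(claims):
    # Return a claim from the verified token as a trivial demonstration.
    return jsonify({"user": claims.get("sub")})


if __name__ == "__main__":
    app.run(debug=True)
```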

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Overview CACTUS is a remote-first organization and we embrace an accelerate from anywhere culture. You may be required to travel to our Mumbai office based on business requirements or for company/team events. Join Cactus Labs, the R&D Cell of Cactus Communications, and play a pivotal role in shaping cutting-edge technological solutions. Cactus Labs is a high-impact, cross-functional team solving complex technical and business challenges that help us stay strategically competitive. We operate globally and work across domains such as AI/ML, with a focus on Generative AI (Text, Image, Audio), Language Understanding, Explainable AI, Big Data, and scalable MLOps/DevOps systems. As a core member of the team, you'll drive the solutioning and delivery of scalable systems. You’ll take ownership of critical projects/features and collaborate closely with product and research teams. If you thrive in ambiguity, enjoy solving high-impact problems, and are motivated by building systems that matter, this role is for you. Responsibilities Design and architect systems that integrate with a wide array of AWS cloud services, ensuring high availability, scalability, and fault tolerance. Build applications that incorporate Large Language Models (LLMs) and other generative AI systems, leveraging APIs or fine-tuned models as needed. Own end-to-end technical delivery across projects — from design and development to deployment and monitoring. Collaborate cross-functionally with Product Managers, Researchers, ML Engineers, and other stakeholders to define and deliver impactful solutions. Contribute to technical discussions, architecture decisions, and long-term technology planning. Stay up to date with emerging tools, technologies, and development practices, and proactively introduce improvements to elevate engineering quality and team productivity. Qualifications And Prerequisites 4+ years of hands-on software development experience with a strong command of Python. Demonstrated experience building applications powered by LLMs (e.g., OpenAI, Anthropic, Google, custom fine-tuned models). Practical experience with AWS services (e.g., Lambda, EC2, S3, DynamoDB, SQS, API Gateway, etc.). Strong understanding of RESTful API design, backend systems, and service integrations. Experience with Docker or Kubernetes in production environments. Solid grasp of microservices architecture and distributed systems. Comfortable working in fast-paced, ambiguous environments with shifting priorities. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills, with experience working in remote and cross-functional teams. Application Process Before applying, please ensure you meet the role requirements listed above and have legal authorization to work in the country where this role is advertised. Our selection process typically involves an initial screening by a recruiter, a technical assessment, and two to three interview rounds. For this role, please refer to the following: - Technical round with a panel of 2 interviewers for 1 hour (Virtual) Techno-functional round for 1 hour (Virtual) HR Business partner round for 30 minutes Equal Opportunity Our hiring practices reflect our commitment to providing equal opportunities and creating an environment where everyone can thrive, develop, and succeed. 
We celebrate the uniqueness of our team members and prohibit discrimination of any kind, based on race, color, religion, gender identity, sexual orientation, age, marital status, disability, or any other protected characteristic.

Accelerating from Anywhere
As a remote-first organization, these are essential attributes we look for in all our candidates: taking ownership of your work with minimal supervision, showing a strong ability to organize, prioritize, and deliver results independently; documenting work so that everyone stays on the same page; maturity to choose between synchronous and asynchronous collaboration; and effectively collaborating with colleagues across different time zones by setting dedicated hours for collaboration and keeping team members updated through your MS Teams status.

About CACTUS
Established in 2002, Cactus Communications (cactusglobal.com) is a leading technology company that specializes in expert services and AI-driven products which improve how research gets funded, published, communicated, and discovered. Its flagship brand Editage offers a comprehensive suite of researcher solutions, including expert services and cutting-edge AI products like Mind the Graph, Paperpal, and R Discovery. With offices in Princeton, London, Singapore, Beijing, Shanghai, Seoul, Tokyo, and Mumbai and a global workforce of over 3,000 experts, CACTUS is a pioneer in workplace best practices and has been consistently recognized as a great place to work. Together we Power research. Empower people.
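To give a concrete flavour of the LLM-backed application work this listing mentions, here is a minimal sketch using the OpenAI Python SDK (one of the providers named in the qualifications). The model name, prompt, and helper function are assumptions for illustration; the API key is expected in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of an LLM-backed helper using the OpenAI Python SDK.
# The model name and prompt are placeholders chosen for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize(text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a two-sentence summary of the given text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Cactus Labs builds generative AI systems on scalable AWS infrastructure."))
```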

Posted 1 month ago

Apply

5.0 years

3 - 10 Lacs

India

Remote

Experience: 5+ years
Location: India (Remote)

What’s the opportunity?
AiTrillion is looking for a Sr. Developer with experience in PHP (Zend Framework, CodeIgniter), NodeJS (Express, LoopBack), the Serverless Framework & CLI, AWS Lambda, and API Gateway, along with MySQL, MongoDB, and DynamoDB database experience.

What will you be doing?
Strong development/management background with experience in developing products with large-scale user transactions and of a critical business nature. Must be able to build the technical architecture/stack of the product from scratch, including server interactions, scripts, deployment stages, and features, in a cost-effective manner. Hands-on implementation of the critical interfaces and complex modules of the systems and features, covering solution architecture and design. Provide timely deliverables and estimates, and complete tasks in an Agile development environment. Review code for coding standards, accuracy, and functionality. Help the team solve complex coding problems and troubleshoot issues.

What skills do you need?
Minimum 5+ years of technical lead experience. Hands-on with any MVC PHP framework like Zend Framework or Laravel. Must have experience with NodeJS, Express, and LoopBack. Good to have hands-on experience with the Serverless Framework & CLI, AWS Lambda, API Gateway, SQS, SNS, and Step Functions. Must have experience working with JavaScript technologies like NodeJS, AngularJS, and React. Hands-on with relational and non-relational stores: MySQL, data lakes, Hive, Apache Spark, MongoDB, Apache Cassandra, streaming analytics, in-memory and NoSQL databases. Must have a good understanding of building and using REST APIs and different authentication protocols. Good to have experience with Amazon Web Services (EC2, RDS Aurora, Lambda, API Gateway, S3, CloudFront). Must have experience building microservices and customer-facing APIs. Must have a sound understanding of failure modes, resiliency patterns, and techniques to enable robust, self-healing architecture. Develop business domain-driven reusable microservices. Knowledge of version control systems like Git (mandatory). Experience with Google Cloud is a plus. Experience with automation and load testing is a plus.

At AiTrillion, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.

Skills: MySQL, Laravel, CodeIgniter, NodeJS (Node.js) and Express

Posted 1 month ago

Apply

6.0 years

12 - 30 Lacs

India

Remote

Job Role: Senior Developer - Full Stack with Front End Focus
Location: Remote (WFH)
Experience: 6+ years
Work Shift: UK Shift
Salary Budget: 15-30 Lakh (open for the right candidate)
Note: must have a strong command of both written and spoken English.

Job Description
We are looking for a number of strong developers - full stack, with strong front-end experience.

Company Profile
We are a full-service Digital Agency & Software Development Company based in the UK. We develop high-end bespoke software & applications, mobile apps and websites across all sectors, both commercial and not-for-profit.

We Expect You To Have
6+ years professional experience
Proficiency in building REST-based APIs, using Laravel (PHP 7+) for complex workflow systems.
Expertise building component-based web apps/SPAs in JavaScript / ES6 and TypeScript using Vue.js / Vue 2 & 3.
Excellent knowledge of CSS and experience in writing stylesheets in SASS
Proficient understanding of git, Node.js and npm
Strong troubleshooting, debugging, and problem-solving skills.
An ability to quickly get to grips with new systems, whether that be shiny new tech or existing/legacy developments.

Any Of The Following Bonus Skills
Zend Framework 1, 2 & 3; Docker; mobile app development; React Native; ElasticSearch; Redis; DevOps; GitLab pipelines; Swagger / OpenAPI; Serverless
Good understanding of core AWS components and how best to utilise them in projects, e.g. S3, SQS, SES, CloudFront etc.

What You’ll Be Doing
Working closely with your team to develop and deliver high-end solutions using Agile.
Coding and deploying new features.
Ensuring our web applications and components are accessible, responsive, performant, and bug-free for recent versions of web browsers across all popular platforms.
Ensuring all code is readable, well documented and testable.
Working with a fantastic group of individuals in a busy but relaxed working environment.
Working on some new developments, some significant enhancements to existing systems and some support activities.

Skills: JavaScript, Vue.js, PHP and Laravel

Posted 1 month ago

Apply

0 years

30 - 35 Lacs

India

On-site

Job Description

Key Responsibilities
Design, develop, and maintain serverless applications using AWS services such as Lambda, API Gateway, DynamoDB, and S3.
Collaborate with front-end developers to integrate user-facing elements with server-side logic.
Build and maintain RESTful APIs to support web and mobile applications.
Implement security best practices for AWS services and manage IAM roles and policies.
Optimize application performance, scalability, and reliability through monitoring and testing.
Write clean, maintainable, and efficient code following best practices and design patterns.
Participate in code reviews, providing constructive feedback to peers.
Troubleshoot and debug applications, identifying performance bottlenecks and areas for improvement.
Stay updated with emerging technologies and industry trends related to serverless architectures and Python development.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
Proven experience as a Python backend developer, with a strong portfolio of serverless applications.
Proficiency in AWS services, particularly in serverless architectures (Lambda, API Gateway, DynamoDB, etc.).
Solid understanding of RESTful API design principles and best practices.
Familiarity with CI/CD practices and tools (e.g., AWS CodePipeline, Jenkins).
Experience with containerization technologies (Docker, Kubernetes) is a plus.
Strong problem-solving skills and the ability to work independently and collaboratively.
Excellent communication skills, both verbal and written.

Preferred Skills
Experience with frontend technologies (JavaScript, React, Angular) is a plus.
Knowledge of data storage solutions (SQL and NoSQL databases).
AWS certifications (e.g., AWS Certified Developer Associate) are a plus.

Skills: Django, Flask, AWS Lambda, Amazon SQS, API Gateway, Amazon S3, AWS Simple Notification Service (SNS) and Microservices
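As a hedged illustration of the serverless stack named above (Lambda, API Gateway, DynamoDB), the sketch below shows a minimal Python Lambda handler behind an API Gateway proxy integration that writes an item to DynamoDB. The table name, environment variable, and payload fields are hypothetical.

```python
# Minimal sketch of an AWS Lambda handler (API Gateway proxy integration) that
# writes an item to DynamoDB. Table name and payload fields are hypothetical.
import json
import os
import uuid

import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "items")  # hypothetical table name
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    # API Gateway proxy events carry the request payload as a JSON string in "body".
    body = json.loads(event.get("body") or "{}")
    item = {"id": str(uuid.uuid4()), "name": body.get("name", "unnamed")}
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```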

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

On-site

We seek a Senior/Principal Backend Software Engineer with a leadership background. You will work closely with the product owner, area architect, and your team members to clarify business needs and technical requirements and define how to support them best.

Responsibilities
Have a strong commitment to maintaining a high standard of technical excellence by emphasizing best practices and industry trends
Actively code and contribute to ongoing features and issues
Provide team leadership, technical guidance, and direction for the integration platform
Work with all stakeholders and enterprise architects to come up with the road map
Collaborate with different stakeholders, run the scrum, manage the backlog
Support your teams as an agile driver & coach of the software delivery process
Actively participate in the recruitment and retention process, ensuring a healthy composition of the team
Monitor and optimize budget costs related to product expenses, such as AWS, licenses, etc.

Qualifications
Excellent English verbal and written communication skills
Around 10 years of experience in software development in JVM-related languages
Experience and exposure to Apache Camel, microservices, AWS, Lambda, Kibana, Elastic, etc.
A profound understanding of software engineering and design fundamentals, complemented by hands-on design and development expertise
A track record of successful technical leadership
Expertise in agile methodologies and practices
Experience working in an internationally distributed environment
Any experience with development of an integration platform will be a plus

Technologies we leverage:
• Java 11+, Spring framework (Boot, Hibernate)
• Apache Camel
• Oracle, PostgreSQL
• CI/CD with Jenkins pipeline
• InfluxDB, Grafana, Sensu, ELK stack
• Infrastructure as code, one-click deployment, C4 diagrams
• Mesos/Marathon, Docker, Kubernetes
• Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or AWS CLI
• Git, Scrum, Pair Programming, Peer Reviewing

Posted 1 month ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Software Engineer - IT
Location: Gurugram/Chennai, India
Experience: 2+ years

Job Profile:
Position Objective: The Engineer will be primarily responsible for ensuring timely execution of assigned client deliverables and successfully unit-testing requirements. They will work in conjunction with a senior team member to ensure a fully integrated product is delivered to the client. The Engineer progresses in specializing and deepening his or her technical skill set within Absence Management.

Job Description:
Major Opportunities and Decisions: (Describe the more difficult and/or complex challenges or opportunities and decisions faced in doing work, improving processes or meeting customer needs.)
Project Planning, Tracking, & Reporting: Contribute to the initial coding estimates. Support the team in project planning activities and in evaluating risks. Communicate regularly with the team about development changes, scheduling, and status.
Design: Understand the assigned detailed (LLD) design and do code development.
Development & Support: Work with the team to clarify and improve the design as required. Build the code of high-priority and complex systems according to the technical specifications, detailed design, maintainability, and coding and efficiency standards. Use code management processes and tools to avoid versioning problems. Ensure that the code does not affect the functioning of any external or internal systems.
Testing & Debugging: Write and execute the unit test cases and test each piece to verify the basic functionality before comprehensive testing. Debug and resolve any project, code, or interface-level problems. Fix function testing issues. Test high-priority and high-complexity functionality/issues with support as needed.
Documentation: Create documentation for the code as per defined standards and processes. Work on peer review feedback of the technical documentation for the code as per defined standards and processes.
Process Management: Adhere to the project and support processes. Adhere to best practices and comply with approved policies, procedures, and methodologies, such as the SDLC cycle for different project sizes. Participate in root cause analysis.

Skills and Knowledge: (Identify core competencies, key specialties, technical, and knowledge areas necessary to accomplish responsibilities and desired end results)
Competencies/Skills: Individual Contributor Competencies
Skills:
Proficient in at least one of the following: C# (ASP.NET Core, Web Forms, Web APIs, ASP.NET MVC), HTML/CSS/JavaScript/TypeScript, Angular, T-SQL
Strong understanding of OOPS concepts
Experience with various common JavaScript libraries
Responsive design
Creating and consuming web services, Web API, or WCF
Secure website design and development
Application architecture and design patterns
MS SQL Server: writing stored procedures, triggers, functions, designing DB schema
Proficiency with a code versioning tool like Git
Entity Framework
Creating interfaces for communication between different applications
Nice to have:
Experience with Visual Studio 2019/2022
Experience with SQL Server 2016/2019/2022
Experience with automated unit testing and integration testing
Experience with graceful degradation and/or progressive enhancement websites
Strong understanding of XML and JSON
Familiarity with Continuous Integration
Familiarity with AWS cloud services (SQS, S3, SNS, ECS, etc.)

Knowledge:
2+ years of experience in analyzing and understanding application storyboards and/or use cases and developing functional application modules
Come up with approaches for a given problem statement
Design, build and maintain efficient and reusable C# .NET Core code
Design, build and maintain Microsoft .NET web-based applications
Fix identified defects or observations that are potential impacts or risks for the functionality
Ensure best possible performance and quality of the application using project and standard best practices
Help maintain code quality using project quality standards (or using tools)
Design and develop web user interfaces (good to know frameworks such as Bootstrap)
Debug and troubleshoot problems in existing code
Develop unit test cases and perform unit testing
Work on creating database tables, stored procedures, functions, etc.
Coordinate with the Agile team
Maintain updates to JIRA with the latest changes and appropriate status.

Education and Experience: (Identify types and length of education and experience needed to acquire the necessary skills and knowledge to accomplish the desired end results)
Education: BE Computers/IT, MCA, MSc IT, Computer Science
Experience: 2+ years of experience in analyzing and understanding application storyboards and/or use cases and developing functional application modules.

We offer you a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.

DISCLAIMER: Nothing in this job description restricts management's right to assign or reassign duties and responsibilities of this job to other entities; including but not limited to subsidiaries, partners, or purchasers of Alight business units.

Posted 1 month ago

Apply

6.0 years

17 - 20 Lacs

Noida, Uttar Pradesh, India

On-site

6+ years of experience as a Data Engineer. Strong proficiency in SQL. Hands-on experience with modern cloud data warehousing solutions (Snowflake, Big Query, Redshift) Expertise in ETL/ELT processes, batch, and streaming data processing. Proven ability to troubleshoot data issues and propose effective solutions. Knowledge of AWS services (S3, DMS, Glue, Athena). Familiarity with DBT for data transformation and modeling. Must be fluent in English communication. Desired Experience 3 years of Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM). Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt. Proficiency in Python for data engineering tasks. Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions. Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS). Experience with CI/CD pipelines and automation for data processes. Skills: orchestration tools (dagster, airflow, aws step functions),batch and streaming data processing,ci/cd pipelines,python,cloud,sql,etl/elt processes,cloud data warehousing (snowflake, big query, redshift),aws services (s3, dms, glue, athena),infrastructure as code (terraform, terragrunt),dbt,s3,cd,aws,pub-sub and queuing frameworks (aws kinesis, kafka, sqs, sns),etl,ci
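For a flavour of the batch ETL work this role describes, here is an illustrative Python step that reads a CSV from S3, derives one column, and writes the result back. The bucket, keys, and column names are placeholders, not details from the listing.

```python
# Illustrative batch ETL step in Python: read a CSV from S3, apply a simple
# transform, and write the result back. Bucket, keys, and columns are placeholders.
import csv
import io

import boto3

s3 = boto3.client("s3")


def transform_orders(bucket: str, src_key: str, dst_key: str) -> int:
    """Copy order rows, adding a derived gross_amount column; return rows written."""
    raw = s3.get_object(Bucket=bucket, Key=src_key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(raw))

    out = io.StringIO()
    # Assumes the source file has "quantity" and "unit_price" columns (hypothetical).
    writer = csv.DictWriter(out, fieldnames=[*reader.fieldnames, "gross_amount"])
    writer.writeheader()
    rows = 0
    for row in reader:
        row["gross_amount"] = float(row["quantity"]) * float(row["unit_price"])
        writer.writerow(row)
        rows += 1

    s3.put_object(Bucket=bucket, Key=dst_key, Body=out.getvalue().encode("utf-8"))
    return rows


if __name__ == "__main__":
    print(transform_orders("example-bucket", "raw/orders.csv", "curated/orders.csv"))
```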

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Delta Tech Hub:
Delta Air Lines (NYSE: DAL) is the U.S. global airline leader in safety, innovation, reliability and customer experience. Powered by our employees around the world, Delta has for a decade led the airline industry in operational excellence while maintaining our reputation for award-winning customer service. With our mission of connecting the people and cultures of the globe, Delta strives to foster understanding across a diverse world and serve as a force for social good. Delta has fast emerged as a customer-oriented, innovation-led, technology-driven business. The Delta Technology Hub will contribute directly to these objectives. It will sustain our long-term aspirations of delivering niche, IP-intensive, high-value, and innovative solutions. It supports various teams and functions across Delta and is an integral part of our transformation agenda, working seamlessly with a global team to create memorable experiences for customers.

KEY RESPONSIBILITIES:
Collaborates with product team members (UX, architects, and product management) to create secure, reliable, scalable software solutions
Writes custom code or scripts to automate infrastructure, monitoring services, and test cases, and to do “destructive testing” to ensure adequate resiliency in production
Strong AWS experience with a background in API and microservices development
Collaborate cross-functionally with business and other IT teams across Delta
Collaborate with other IT teams or departments to execute tasks such as migrating web applications to AWS and/or other relevant tasks as assigned
Identifies unsecured code areas and implements fixes as they are discovered, with or without tooling
Identifies, implements, and shares technical solutions that can be used across the portfolio
Identifies product enhancements to create a better experience for the end users
Research and/or investigate technical issues impacting the organization and recommend solutions
Provides application support for software running in production
Proactively reviews the performance and capacity of all aspects of production: code, infrastructure, data, and message processing
Triages high-priority issues and outages as they arise
Participates in learning activities around agile software development and development core practices, and mentors other team members in these best practices

WHAT YOU NEED TO SUCCEED (MINIMUM QUALIFICATIONS):
Bachelor’s degree in Computer Science, Information Systems or a related field
Experienced in full stack cloud-native development, RESTful APIs, and stateless microservices architectures
Deep familiarity with building integrations between Salesforce and other platforms using REST APIs, microservices, or other integration ETL tools such as Glue and Airflow
Proficient with Python, Java or Node JS
2+ years of experience in cloud-native development, RESTful APIs, and stateless microservices architectures
2+ years of experience with Java 8/J2EE, the Spring framework, or Python or Node JS
2+ years of experience in AWS with a background in API and microservices development
2+ years of experience with AWS services like Lambda, S3, SQS, SNS, EC2, CodePipeline, Athena, DynamoDB, and RDS databases
Experience with the core AWS services like Lambda, S3, SQS, SNS, EC2, CodePipeline, Athena, DynamoDB, RDS
Strong understanding of core AWS services and ability to apply best practices regarding security and scalability
Strong understanding of networking fundamentals and virtual networks from a cloud point of view
Knowledge and/or experience working with the 12-factor methodology, understanding its benefits, and able to demonstrate appropriate patterns to other team members
Data modeling and query skills both for SQL (Oracle 11+ / PostgreSQL) and NoSQL (DynamoDB / Cassandra / MongoDB)
Experience deploying applications in OpenShift / ROSA (or another Docker / Kubernetes container)
Hands-on experience with programming concepts such as OOP in languages like Java
Candidates should have hands-on experience writing and maintaining UI and API automated tests in Java, JavaScript, C#, or Python using various open-source testing libraries like Selenium, Cypress, REST Assured, etc.
Hands-on experience building a test automation framework from the ground up using a modular framework and design patterns like the Page Object Model (POM)
Able to independently create and maintain automation test jobs and execute them as part of a CI/CD pipeline
Experience working in distributed agile teams using agile frameworks such as Scrum, SAFe, XP, etc.
Knowledge of CI/CD and DevOps practices, with tools such as Git / GitLab, Jira / VersionOne / Agility, Jenkins / Tekton, Gradle, Ansible
Knowledge and/or experience with messaging solutions such as ActiveMQ or Kafka
Ability to clearly communicate and coordinate with peers, product owners, and cross-functional teams and design a relevant and time-to-market solution
Must have the ability to listen to customers and colleagues; convey ideas effectively; prepare written documentation
Ability to quickly adapt to new tools and evolving technologies
Proactive in nature with customer satisfaction as a primary goal
Embraces diverse people, thinking and styles
Consistently makes safety and security, of self and others, the priority
Design Thinking
Ensure code quality and documentation for supporting applications post deployment

WHAT WILL GIVE YOU A COMPETITIVE EDGE (PREFERRED QUALIFICATIONS):
Experience with B2B Sales and Support, Contracting and Incentive, and Web Portal applications
Airline or transportation industry experience
Experience in development of Lightning Web Components, with 2 or more additional years of Visualforce and/or Apex experience
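As an illustrative sketch of the queue-based microservice integration work touched on in this listing (SQS is among the AWS services named), the snippet below shows a minimal boto3 long-polling SQS consumer in Python. The queue URL and the processing step are placeholders.

```python
# Minimal sketch of an SQS polling consumer with boto3. The queue URL is a
# placeholder; replace the print with real business logic.
import json

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder
sqs = boto3.client("sqs")


def poll_once(max_messages: int = 10) -> int:
    """Long-poll the queue once, process each message, and delete it."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=20,
    )
    messages = response.get("Messages", [])
    for message in messages:
        payload = json.loads(message["Body"])
        print("processing", payload)  # placeholder processing step
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    return len(messages)


if __name__ == "__main__":
    # Keep polling until an empty receive, just for demonstration purposes.
    while poll_once():
        pass
```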

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Delta Tech Hub:
Delta Air Lines (NYSE: DAL) is the U.S. global airline leader in safety, innovation, reliability and customer experience. Powered by our employees around the world, Delta has for a decade led the airline industry in operational excellence while maintaining our reputation for award-winning customer service. With our mission of connecting the people and cultures of the globe, Delta strives to foster understanding across a diverse world and serve as a force for social good. Delta has fast emerged as a customer-oriented, innovation-led, technology-driven business. The Delta Technology Hub will contribute directly to these objectives. It will sustain our long-term aspirations of delivering niche, IP-intensive, high-value, and innovative solutions. It supports various teams and functions across Delta and is an integral part of our transformation agenda, working seamlessly with a global team to create memorable experiences for customers.

Responsibilities:
Collaborates with product team members (UX, architects, and product management) to create secure, reliable, scalable software solutions
Writes custom code or scripts to automate infrastructure, monitoring services, and test cases, and to do “destructive testing” to ensure adequate resiliency in production
Collaborate cross-functionally with business and other IT teams across Delta
Champion development and integration standards, best practices, and their related deliverables
Aim to deliver processes and components that can be maintained by the business into the future using native features and functions whenever possible
Design, develop, and deploy Salesforce Apex classes, triggers, test methods, and Lightning Web Components to meet the business’s requirements
Optimize and improve existing Salesforce implementations so that every team is achieving the maximum benefit from their investment in Salesforce
Work to continually understand the new features added to Salesforce’s products and aid the business in regression testing as needed to avoid disruption from any Salesforce-side upgrade
Reporting & analytics application development in Salesforce Experience Cloud, CRMA, and AWS platforms, with a good understanding of API and microservices development
Identifies unsecured code areas and implements fixes as they are discovered, with or without tooling
Identifies, implements, and shares technical solutions that can be used across the portfolio
Identifies product enhancements to create a better experience for the end users
Research and/or investigate technical issues impacting the organization and recommend solutions
Provides application support for software running in production
Proactively reviews the performance and capacity of all aspects of production: code, infrastructure, data, and message processing
Triages high-priority issues and outages as they arise
Participates in learning activities around agile software development and development core practices, and mentors other team members in these best practices

WHAT YOU NEED TO SUCCEED (MINIMUM QUALIFICATIONS):
Bachelor's degree in Computer Science, Information Systems or a related field is preferred
Deep familiarity with building integrations between Salesforce and other platforms using Apex REST APIs or other integration ETL tools such as Glue and Airflow
2+ years of experience in development of Lightning Web Components, with 1 or more additional years of Visualforce and/or Apex experience
Experienced with Salesforce TCRM (Tableau CRMA) design, development, and integration with Experience Cloud implementations
Knowledge of cloud-native development, RESTful APIs, and stateless microservices architectures
Knowledge of JBoss, WebSphere, and AWS, with a background in API and microservices development
Knowledge of AWS services like Lambda, S3, SQS, SNS, EC2, CodePipeline, Athena, DynamoDB, and RDS databases
Knowledge of CI/CD and DevOps practices, with tools such as Git / GitLab, Jira / VersionOne / Agility, Jenkins / Tekton, Gradle, Ansible
Able to independently create and maintain automation test jobs and execute them as part of a CI/CD pipeline
Strong understanding of networking fundamentals and virtual networks from a cloud point of view
Experience debugging components and code built by other developers
Ability to clearly communicate and coordinate with peers, product owners, and cross-functional teams and design a relevant and time-to-market solution
Must have the ability to listen to customers and colleagues; convey ideas effectively; prepare written documentation
Ability to quickly adapt to new tools and evolving technologies
Proactive in nature with customer satisfaction as a primary goal
Embraces diverse people, thinking and styles
Consistently makes safety and security, of self and others, the priority
Design Thinking
Ensure code quality and documentation for supporting applications post deployment

WHAT WILL GIVE YOU A COMPETITIVE EDGE (PREFERRED QUALIFICATIONS):
Salesforce Certifications are great and Trailhead badges are good, but real-world experience in Salesforce and other platforms is best

Posted 1 month ago

Apply

6.0 years

17 - 20 Lacs

Ahmedabad, Gujarat, India

On-site

6+ years of experience as a Data Engineer. Strong proficiency in SQL. Hands-on experience with modern cloud data warehousing solutions (Snowflake, Big Query, Redshift) Expertise in ETL/ELT processes, batch, and streaming data processing. Proven ability to troubleshoot data issues and propose effective solutions. Knowledge of AWS services (S3, DMS, Glue, Athena). Familiarity with DBT for data transformation and modeling. Must be fluent in English communication. Desired Experience 3 years of Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM). Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt. Proficiency in Python for data engineering tasks. Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions. Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS). Experience with CI/CD pipelines and automation for data processes. Skills: orchestration tools (dagster, airflow, aws step functions),batch and streaming data processing,ci/cd pipelines,python,cloud,sql,etl/elt processes,cloud data warehousing (snowflake, big query, redshift),aws services (s3, dms, glue, athena),infrastructure as code (terraform, terragrunt),dbt,s3,cd,aws,pub-sub and queuing frameworks (aws kinesis, kafka, sqs, sns),etl,ci

Posted 1 month ago

Apply

6.0 years

17 - 20 Lacs

Gandhinagar, Gujarat, India

On-site

6+ years of experience as a Data Engineer. Strong proficiency in SQL. Hands-on experience with modern cloud data warehousing solutions (Snowflake, Big Query, Redshift) Expertise in ETL/ELT processes, batch, and streaming data processing. Proven ability to troubleshoot data issues and propose effective solutions. Knowledge of AWS services (S3, DMS, Glue, Athena). Familiarity with DBT for data transformation and modeling. Must be fluent in English communication. Desired Experience 3 years of Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM). Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt. Proficiency in Python for data engineering tasks. Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions. Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS). Experience with CI/CD pipelines and automation for data processes. Skills: orchestration tools (dagster, airflow, aws step functions),batch and streaming data processing,ci/cd pipelines,python,cloud,sql,etl/elt processes,cloud data warehousing (snowflake, big query, redshift),aws services (s3, dms, glue, athena),infrastructure as code (terraform, terragrunt),dbt,s3,cd,aws,pub-sub and queuing frameworks (aws kinesis, kafka, sqs, sns),etl,ci

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description

About Amazon.com:
Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About Team
The RBS team is an integral part of Amazon online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection and good product information. The team’s primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.

Overview Of The Role
The candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimizations and systems thinking and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven work environment.

Responsibilities Include
Works across team(s) and the Ops organization at country, regional and/or cross-regional level to drive improvements and enable the implementation of solutions for customers, cost savings in process workflows, systems configuration and performance metrics.

Basic Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field
Proficiency in automation using Python
Excellent oral and written communication skills
Experience with SQL, ETL processes, or data transformation

Preferred Qualifications
Experience with scripting and automation tools
Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
Understanding of cloud services, serverless architecture, and systems integration

Key job responsibilities
As a Business Intelligence Engineer in the team, you will collaborate closely with business partners and architect, design, and implement BI projects & automations.

Responsibilities
Design, development and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports and dashboards.
Development of moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.)
Development of dashboards and reports.
Collaborating with stakeholders to understand business domains, requirements, and expectations.
Additionally, working with owners of data source systems to understand capabilities and limitations. Deliver minimally to moderately complex data analysis, collaborating as needed with Data Science as complexity increases. Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues. Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Internal Job Description: Retail Business Service, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insist on the Highest Standards, and has strong engineering and operational best practices experience.
Basic Qualifications: 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science or a related field. Experience with data modeling, SQL, ETL, data warehousing and data lakes. Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.). Expert-level SQL. Proficiency with one or more general purpose programming languages (e.g. Python, Java, Scala, etc.). Knowledge of AWS products such as Redshift, Quicksight, and Lambda. Excellent verbal/written communication and data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams.
Preferred Qualifications: Experience with data-specific programming languages/packages such as R or Python Pandas. Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR. Knowledge of machine learning techniques and concepts.
Basic Qualifications: 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, Quicksight, or similar tools. Experience with data modeling, warehousing and building ETL pipelines. Experience in statistical analysis packages such as R, SAS and Matlab. Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.
Preferred Qualifications: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI MAA 15 SEZ Job ID: A3004831
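As a purely illustrative aside (not part of the posting above), a "moderately complex data processing job" of the kind it mentions, feeding a Redshift-backed warehouse from raw data, could be sketched in PySpark as follows. Every bucket, path, table and column name here is hypothetical.

# Illustrative sketch only: a minimal PySpark aggregation job of the kind the
# posting describes (SQL/Python/Spark jobs feeding warehouse tables and dashboards).
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

def build_daily_order_summary(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("daily_order_summary").getOrCreate()

    orders = spark.read.parquet(input_path)  # raw order events

    summary = (
        orders
        .filter(F.col("order_status") == "COMPLETED")
        .groupBy("marketplace_id", "order_date")
        .agg(
            F.countDistinct("order_id").alias("orders"),
            F.sum("order_amount").alias("revenue"),
        )
    )

    # Write a partitioned summary that a downstream Redshift COPY or Spectrum
    # layer could pick up for reports and dashboards.
    summary.write.mode("overwrite").partitionBy("order_date").parquet(output_path)

if __name__ == "__main__":
    build_daily_order_summary("s3://example-bucket/raw/orders/",
                              "s3://example-bucket/curated/daily_order_summary/")

In practice a job like this would be scheduled by an orchestrator and its output loaded into the warehouse; the specifics depend entirely on the team's actual stack.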

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change – we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant - Databricks Architect! In this role, the Databricks Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Responsibilities: Architect and design solutions to meet functional and non-functional requirements. Create and review architecture and solution design artifacts. Evangelize re-use through the implementation of shared assets. Enforce adherence to architectural standards/principles, global product-specific guidelines, usability design standards, etc. Proactively guide engineering methodologies, standards, and leading practices. Guidance of engineering staff and reviews of as-built configurations during the construction phase. Provide insight and direction on roles and responsibilities required for solution operations. Identify, communicate and mitigate Risks, Assumptions, Issues, and Decisions throughout the full lifecycle. Considers the art of the possible, compares various architectural options based on feasibility and impact, and proposes actionable plans. Demonstrate strong analytical and technical problem-solving skills. Ability to analyze and operate at various levels of abstraction. Ability to balance what is strategically right with what is practically realistic. Growing the Data Engineering business by helping customers identify opportunities to deliver improved business outcomes, designing and driving the implementation of those solutions. Growing and retaining the Data Engineering team with appropriate skills and experience to deliver high quality services to our customers. Supporting and developing our people, including learning & development, certification and career development plans. Providing technical governance and oversight for solution design and implementation. Should have technical foresight to understand new technology and advancements. Leading the team in the definition of best practices and repeatable methodologies in Cloud Data Engineering, including Data Storage, ETL, Data Integration & Migration, Data Warehousing and Data Governance. Should have technical experience in Azure, AWS and GCP Cloud Data Engineering services and solutions.
Contributing to Sales and Pre-sales activities including proposals, pursuits, demonstrations, and proof of concept initiatives. Evangelizing the Data Engineering service offerings to both internal and external stakeholders. Development of whitepapers, blogs, webinars and other thought leadership material. Development of Go-to-Market and Service Offering definitions for Data Engineering. Working with Learning & Development teams to establish appropriate learning and certification paths for their domain. Expand the business within existing accounts and help clients by building and sustaining strategic executive relationships, doubling up as their trusted business technology advisor. Position differentiated and custom solutions to clients, based on market trends, the specific needs of the clients and the supporting business cases. Build new Data capabilities, solutions, assets, accelerators, and team competencies. Manage multiple opportunities through the entire business cycle simultaneously, working with cross-functional teams as necessary.
Qualifications we seek in you!
Minimum qualifications: Excellent technical architecture skills, enabling the creation of future-proof, complex global solutions. Excellent interpersonal communication and organizational skills are required to operate as a leading member of global, distributed teams that deliver quality services and solutions. Ability to rapidly gain knowledge of the organizational structure of the firm to facilitate work with groups outside of the immediate technical team. Knowledge and experience in the IT methodologies and life cycles that will be used. Familiarity with solution implementation/management, service/operations management, etc. Leadership skills to inspire and persuade others. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Bachelor's Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Experience in a solution architecture role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms. Experience in architecting and designing technical solutions for cloud-centric solutions based on industry standards using IaaS, PaaS, and SaaS capabilities. Must have strong hands-on experience with various cloud services such as ADF/Lambda, ADLS/S3, security, monitoring and governance. Must have experience designing a platform on Databricks. Hands-on experience designing and building Databricks-based solutions on any cloud platform. Hands-on experience designing and building solutions powered by DBT models and integrating them with Databricks. Must be very good at designing end-to-end solutions on a cloud platform. Must have good knowledge of Data Engineering concepts and related cloud services. Must have good experience in Python and Spark. Must have good experience in setting up development best practices. Intermediate-level knowledge of data modelling is required. Good to have knowledge of Docker and Kubernetes. Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, RBAC, SSO, etc. Knowledge of cloud security controls including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc. Experience building and supporting mission-critical technology components with DR capabilities.
Experience with multi-tier system and service design and development for large enterprises. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies. Exposure to infrastructure and application security technologies and approaches. Familiarity with requirements gathering techniques.
Preferred qualifications: Must have designed the end-to-end architecture of a unified data platform covering all aspects of the data lifecycle, from data ingestion and transformation to serving and consumption. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have designed and implemented at least 2-3 projects end-to-end in Databricks. Must have experience with Databricks and its various components: Delta Lake, dbConnect, DB API 2.0, SQL Endpoint (Photon engine), Unity Catalog, Databricks workflows orchestration, security management, platform governance and data security. Must have knowledge of new features available in Databricks and their implications, along with the various possible use cases. Must have followed various architectural principles to design the solution best suited to each problem. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a strong understanding of data warehousing and the various governance and security standards around Databricks. Must have knowledge of cluster optimization and its integration with various cloud services. Must have a good understanding of how to create complex data pipelines. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on designing both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on any cloud (Azure, AWS, GCP) and the most common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS and cloud databases. Must be strong in writing unit test cases and integration tests. Must have strong communication skills and have worked with cross-platform teams. Must have a great attitude towards learning new skills and upskilling existing skills. Responsible for setting best practices around Databricks CI/CD. Must understand composable architecture to take fullest advantage of Databricks capabilities. Good to have REST API knowledge. Good to have an understanding of cost distribution. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Experience around DevSecOps, including Docker and Kubernetes. Knowledge of full-lifecycle software development methodologies, patterns, frameworks, libraries, and tools. Knowledge of programming and scripting languages such as JavaScript, PowerShell, Bash, SQL, Java, Python, etc. Experience with data ingestion technologies such as Azure Data Factory, SSIS, Pentaho, Alteryx. Experience with visualization tools such as Tableau, Power BI. Experience with machine learning tools such as MLflow, Databricks AI/ML, Azure ML, AWS SageMaker, etc. Experience in distilling complex technical challenges to actionable decisions for stakeholders and guiding project teams by building consensus and mediating compromises when necessary.
Experience coordinating the intersection of complex system dependencies and interactions. Experience in solution delivery using common methodologies, especially SAFe Agile but also Waterfall, Iterative, etc. Demonstrated knowledge of relevant industry trends and standards.
Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation. Make an impact – Drive change for global enterprises and solve business challenges that matter. Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Senior Principal Consultant. Primary Location: India-Hyderabad. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jul 1, 2025, 6:40:20 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time
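Purely as an illustration of the Lakehouse-style pipeline work the posting above describes (raw ingestion, cleansing, and aggregation over Delta tables), a minimal PySpark sketch might look like the following. It is not taken from the posting; every path, schema and column name is invented for the example.

# Illustrative sketch only: a minimal Databricks-style batch pipeline writing
# Delta tables (bronze/silver/gold layering). Names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Ingest raw landing-zone files (bronze).
raw = spark.read.json("/mnt/landing/claims/")

# Clean and conform (silver): deduplicate and standardise types.
silver = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("ingest_date", F.current_date())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.claims")

# Aggregate for consumption (gold), queryable from SQL endpoints and dashboards.
gold = (
    spark.table("silver.claims")
         .groupBy("policy_id")
         .agg(F.sum("claim_amount").alias("total_claim_amount"),
              F.count("claim_id").alias("claim_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.claims_by_policy")

On an actual Databricks workspace the tables would normally live in Unity Catalog and the steps would typically be split across tasks in a Databricks workflow; this sketch only shows the shape of the pipeline.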

Posted 1 month ago

Apply

15.0 years

2 - 8 Lacs

Gurgaon

On-site

Job Description: The Group Benefits API Team is responsible for the APIs and integrations supporting the web applications enabling Guardian's Group Business. We support our end-to-end business with APIs for Quote, Policy, Customer, Eligibility and Claims. Group Business intake channels such as customer-facing portal applications, call center applications, IVR and chatbots are all powered by our APIs and provide a unified customer experience. Our team is growing, and we are looking for versatile people skilled in Java-based technologies, who are driven, who want to make a difference and code with purpose. We are passionate about the Customer. We do the right thing, believe people count and go above and beyond for the people we serve.
You are: An experienced backend Senior Delivery Manager who has experience working in high-performance teams, with a track record of delivering quality software with speed to market. A highly skilled engineer with a strong foundation in Computer Science and Software Development Life Cycle concepts. An individual who is collaborative and can work across the firm with Enterprise Architects, Business Product Owners, Platform, Security and Production Support to enable delivery of a cohesive, customer-centric product.
You will: Build well-designed, well-engineered, robust, scalable software integration solutions that are production ready. Engineer application integrations using messaging solutions, events, and REST APIs. Work closely with the Business Teams and Solution Architecture to ensure alignment with the Product Roadmap and design blueprint. Mentor and coach developers and analysts, assign tasks, follow industry-standard processes, and perform code reviews. Triage end-to-end system integration issues across various middleware systems and infrastructure.
You have: A Bachelor's degree in Computer Science or a related field. 15+ years of experience in J2EE software development. An understanding of Agile and SAFe methodologies. Industry experience implementing REST APIs, Data-Driven Design, Microservices Architecture, Enterprise Integration Patterns and Event-Driven Architecture. Experience designing user-friendly APIs using OpenAPI (Swagger) specifications. Experience integrating with messaging platforms such as ActiveMQ and IBM MQ. Experience with Docker containers and AWS Cloud services (S3, Lambda, SQS/SNS, Redis Cache, OpenSearch). Experience using SQL and NoSQL databases. Experience with Apache Camel, Tibco BusinessWorks, MuleSoft or a similar EAI framework. Experience with rules engines such as OpenL, Drools or Blaze. Experience with version control and CI/CD automation tools such as Git, Bitbucket, Jenkins and Maven. Experience with API security frameworks, token management and user access control, including OAuth 2.0 and JWT. Experience with logging and monitoring tools such as Splunk, AppDynamics and Zenoss.
Location: This position can be based in any of the following locations: Chennai, Gurgaon. Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Location: On-site - Bhubaneswar. Experience: 3-5 Years. Employment Type: Full-Time. No. of Positions: 4.
Job Description: We are looking for multiple skilled and passionate Software Developers with a strong background in Node.js development and working knowledge of cloud platforms (AWS/Azure). The ideal candidate has hands-on experience building serverless applications and working within a microservices architecture. Familiarity with NestJS is a strong plus.
Responsibilities: Design, develop, and maintain scalable backend services using Node.js. Build and deploy cloud-native applications on AWS or Azure. Develop and maintain serverless functions (e.g., AWS Lambda / Azure Functions). Collaborate with cross-functional teams to build and integrate microservices. Write clean, maintainable, and well-documented code. Participate in code reviews, testing, and CI/CD processes. Troubleshoot and optimize existing systems for performance and scalability.
Requirements: Must have experience with applications in a production environment and be able to showcase his/her work. Minimum 3+ years of experience in cloud and software development using Node.js. Strong understanding of RESTful APIs, JSON, and modern backend practices. Experience with AWS (Lambda, API Gateway, S3, DynamoDB, etc.) or Azure (Functions, API Management, Cosmos DB, etc.). Knowledge of serverless application architecture. Hands-on experience working with microservices. Familiarity with NestJS or willingness to learn. Proficient in Git, CI/CD pipelines, and containerization (Docker a plus). Strong problem-solving skills and attention to detail.
Nice to Have: Experience with TypeScript. Understanding of event-driven architectures (e.g., using SNS, SQS, or Azure Service Bus). Exposure to DevOps practices and Infrastructure as Code (IaC) tools (e.g., CloudFormation, Terraform).
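Purely as an illustration of the serverless, event-driven pattern this posting asks for (and not part of the job description), a minimal SQS-triggered AWS Lambda handler is sketched below. It is shown in Python only to keep the examples on this page in one language, while the role itself is Node.js-focused; the message fields and the DynamoDB table name are hypothetical.

# Illustrative sketch only: an SQS-triggered Lambda persisting messages to DynamoDB.
# The "orders" table and the message field names are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("orders")

def handler(event, context):
    """Consume order-created messages from SQS and persist them to DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        orders_table.put_item(Item={
            "order_id": body["orderId"],
            "customer_id": body["customerId"],
            "status": "RECEIVED",
        })
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}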

Posted 1 month ago

Apply

0 years

3 - 12 Lacs

Gurgaon

On-site

Design and develop robust backend solutions using Java, Spring Boot, and Microservices architecture. Build dynamic user interfaces using React.js or Angular. Integrate with AWS cloud services such as EC2, S3, Lambda, RDS, API Gateway, SNS/SQS, CloudFormation, etc. Work on API design (RESTful), authentication, and performance optimization. Ensure code quality through unit testing, integration testing, and code reviews. Collaborate with DevOps teams to manage CI/CD pipelines and cloud deployments.
Required Skills: Strong programming skills in Java, with hands-on experience in Spring Boot and Microservices. Proficient in JavaScript and modern front-end frameworks: React.js or Angular. Solid hands-on experience in AWS services including but not limited to: EC2, S3, Lambda, RDS, CloudWatch, CloudFormation, ECS, EKS, IAM. Experience with RESTful APIs, JSON, and third-party integrations. Good understanding of CI/CD pipelines, Git, and containerization tools like Docker/Kubernetes. Familiarity with databases: MySQL, PostgreSQL, DynamoDB, or MongoDB.
Job Type: Full-time. Pay: ₹30,000.00 - ₹100,000.00 per month. Work Location: On the road

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai

On-site

Job Requirements
Job Details: Development/configuration of new APIs on the Apigee API management platform. Strong in reviewing specification files/Swagger files. Have an active enthusiasm for automation. Handle policy, credentials and any other administration of existing APIs. Troubleshoot priority incidents, facilitate blameless post-mortems and ensure permanent closure of incidents. Engage with the development team throughout the life cycle to ensure minimal refactoring or changes. Participate in the 24x7 support coverage as needed. Have an enthusiastic, go-for-it attitude. When you see something broken, you can't help but fix it. Have an urge to collaborate and communicate asynchronously. Have an urge for delivering quickly and iterating fast. Strong in the basics.
Basic Qualifications: Bachelor's degree, preferably in Computer Science, Software Engineering, or any other Engineering field. 3+ years with the Apigee API management platform; AWS expertise will be a plus.
Technical Experience: Knowledge of key AWS services: EC2, S3, VPC, Route 53, RDS, CloudFormation, DynamoDB (NoSQL), Lambda, logging/CloudWatch, IAM, Certificate Manager, ELB, EBS, ECS, CloudFront/WAF, SQS, SNS, SES. Experience identifying APIs from business process design and implementing APIs using the latest and emerging technologies. Working knowledge of API security certification, authentication, authorization, IP security setup, and endpoint configuration. Hands-on experience building service policies using various policy assertions that implement XML/JSON transformation, routing, encryption/decryption, digital signatures, auditing, PKI, threat prevention, ICAP integration, logical assertions, etc. This includes hands-on experience with the Access Control, TLS, XML Security, Message Validation/Transformation, Routing, Policy Logic and Threat Protection policy assertion categories. Hands-on experience with any modern API gateway, preferably Apigee. Knowledge of REST best practices preferred. Understanding of Git, Bitbucket, Jira, Jenkins, Sonar, Splunk, Maven, AIM and/or Continuous Delivery tools. Knowledge of at least one modern programming language such as Java, C#, C++, Perl, or Python.
Responsibilities: Meeting the SLOs, SLAs and SLIs defined in the operations model. Setting task prioritization and troubleshooting incidents through to closure. Participating in on-call/on-rotation duties. Improving service observability. Proactively testing the flexibility and resilience of the system.
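As an illustrative sketch only (not taken from the posting above), the credential and API-security administration it describes often comes down to exercising a gateway-protected API with an OAuth2 client-credentials token. The endpoints, environment variable names and resource path below are hypothetical; real values would come from the actual Apigee proxy configuration.

# Illustrative sketch only: call a gateway-protected API using the OAuth2
# client-credentials flow. URLs and credential names are hypothetical.
import os
import requests

TOKEN_URL = "https://api.example.com/oauth2/token"      # hypothetical token endpoint
API_URL = "https://api.example.com/v1/accounts/12345"   # hypothetical proxied resource

def fetch_access_token() -> str:
    """Exchange client credentials (read from environment variables) for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(os.environ["CLIENT_ID"], os.environ["CLIENT_SECRET"]),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api() -> dict:
    """Call the proxied resource with the bearer token and return the JSON body."""
    token = fetch_access_token()
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(call_api())

A script of this shape can also double as a synthetic health probe for the automation and 24x7 support checks the posting mentions, though how that is wired up depends on the team's tooling.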

Posted 1 month ago

Apply

6.0 years

20 Lacs

Ahmedabad

On-site

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital
Job Summary: We are looking for a highly skilled Senior Java Developer with deep expertise in Java programming, microservices architecture, OOPs principles, and the Spring Boot framework, coupled with strong working experience in AWS cloud platforms. The ideal candidate will also bring an in-depth understanding of Healthcare Integration Architecture.
Key Responsibilities: Design, develop, and deploy scalable and high-performance Java-based microservices. Implement and maintain Spring Boot applications adhering to OOPs principles and industry best practices. Collaborate with architecture and DevOps teams to build and deploy services on AWS Cloud using services like EC2, Lambda, S3, ECS, EKS, etc. Translate complex business requirements into scalable technical solutions. Ensure application performance, security, and scalability by conducting regular code reviews and performance tuning. Participate in Agile/Scrum ceremonies, and work closely with cross-functional teams including QA, DevOps, Product Managers, and Solution Architects.
Required Skills & Experience: 6+ years of hands-on experience in Java development with a strong understanding of OOPs concepts. 4+ years of experience in Spring Boot, Spring Cloud, and other Spring modules. Solid experience in building RESTful APIs and Microservices architecture. Experience working with AWS cloud services including ECS/EKS, Lambda, SQS, API Gateway, RDS, DynamoDB, S3, etc. Experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar tools. Strong debugging, troubleshooting, and problem-solving skills.
Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Certification in AWS (e.g., AWS Developer Associate or Solutions Architect) is a plus. Prior experience in HealthTech or working with US-based healthcare clients.
Soft Skills: Excellent communication and interpersonal skills. Strong analytical and critical thinking abilities. Ability to work independently and within a team in a fast-paced environment.
Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals.
Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer: Group Health Insurance covering a family of 4, Term Insurance and Accident Insurance, Paid Holidays & Earned Leaves, Paid Parental Leave, Learning & Career Development, and Employee Wellness.
Job Type: Full-time. Pay: From ₹2,000,000.00 per year. Benefits: Health insurance. Location Type: In-person. Schedule: Day shift, Monday to Friday.
Application Question(s): How many years of experience do you have? Are you serving a notice period? How many days are left in your notice period?
Work Location: In person. Speak with the employer: +91 9723681027

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Full Stack Developer (MERN Stack). Location: Noida, 5 days WFO.
Who we are: LUMIQ is the leading Data and Analytics company in the Financial Services and Insurance (FSI) industry. We are trusted by the world's largest FSIs, including insurers, banks, AMCs, and NBFCs, to address their data challenges. Our clients include 40+ enterprises with over $10B in deposits/AUM, collectively representing about 1B customers globally. Our expertise lies in creating next-gen data technology products to help FSI enterprises organize, manage, and effectively use data. We have consistently challenged the status quo, introducing many industry-firsts like the first enterprise data platform in Asia on cloud for a regulated entity. Founded in 2013, LUMIQ has now completed a decade of innovation, backed by Info Edge Ventures (a JV between Temasek Holdings of Singapore and Naukri) and US-based Season 2 Ventures.
Our Culture: At LUMIQ, we strive to create a community of passionate data professionals who aim to transcend the usual corporate dynamics. We offer you the freedom to ideate, commit, and navigate your career trajectory at your own pace. Culture of ownership – empowerment to drive outcomes. Our culture encourages 'Tech Poetry' – combining creativity and technology to create solutions that revolutionize the industry. We trust our people to manage their responsibilities with minimal policy constraints. Our team is composed of the industry's brightest minds, from PhDs and engineers to industry specialists from Banking, Insurance, NBFCs and AMCs, who will challenge and inspire you to reach new heights.
Job Description: The Full Stack / Applications team is one of the core technology teams of Lumiq.ai and is responsible for creating data-driven applications that scale across any number of users and any amount of processing. The team also interacts with our customers to create technical architectures and deliver the products and solutions they need. If you are someone who is always contemplating new and unusual ways to optimize user experience, how technologies can interact, and how various tools, technologies and concepts can help a customer analyse data, then Lumiq is the place of opportunities you are looking for.
Who are you? Results driven. Self-driven - doesn't wait for others to push. Continuous learner - hungry to learn and experiment. Solution focused - hunts for problems and focuses on solving them in a timely and effective manner; able to anticipate and re-align. Trustworthy - someone to lean on and count on. Committed - figures out how to get something done. Logical and street smart, with critical thinking skills. Flexible - ready to adjust for the success of assignments. Empathetic - a good listener, able to put oneself in others' shoes.
How a Full Stack Software Developer spends a day here: Helps in designing end-to-end architecture. Interacts with the Front End Engineer to discuss the requirements of interfaces. Talks with the DevOps Engineer to make the system scalable. Works on different languages and frameworks to create scalable applications. Plays with Git, managing multiple projects in multiple branches. Researches new technologies, proves the concepts and plans how to integrate or update them.
Requirements / Eligibility: At least 3 years of experience as a Full Stack engineer or in a similar role. Technical expertise with data structures, UI optimization techniques, scalability and algorithms. Some experience in handling customer interactions would be a plus.
Must Have Skills: JavaScript/TypeScript (React). Node.js. MongoDB/PostgreSQL/MySQL/SQL Server or any other RDBMS or NoSQL database. Git.
Good to Have Skills: Amazon Web Services (AWS) - S3, EC2, Lambda, SQS, SES or any other cloud services. Linux. CI/CD pipelines. Docker.
What Do You Get: Opportunity to contribute to an entrepreneurial culture and exposure to the startup hustler culture. Competitive Salary Packages. Group Medical Policies. Equal Employment Opportunity. Maternity Leave. Opportunities for upskilling and exposure to the latest technologies. 100% Sponsorship for certification.

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
