
864 Lambda Expressions Jobs - Page 12

JobPe aggregates listings so you can browse them in one place; applications are submitted directly on the original job portal.

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

As a MERN Stack Developer with 4+ years of experience, you will play a crucial role in designing, developing, and implementing web applications using the MERN (MongoDB, Express.js, React.js, Node.js) stack. You will be responsible for the entire software development lifecycle, from requirements gathering to deployment and maintenance. Your expertise in front-end and back-end development will be essential in building scalable, efficient, and user-friendly applications.

Responsibilities:
- Collaborate with cross-functional teams, including designers, product managers, and other developers, to gather and understand project requirements.
- Design and develop high-quality, scalable, and efficient web applications using the MERN stack.
- Develop and implement front-end components using React.js, ensuring a responsive and user-friendly interface.
- Build RESTful APIs and server-side applications using Node.js and Express.js.
- Create and maintain databases using MongoDB, ensuring data integrity and performance.
- Write efficient and reusable code while adhering to best practices and coding standards.
- Conduct thorough testing and debugging of applications to identify and fix issues and bugs.
- Optimize applications for maximum speed and scalability.
- Collaborate with DevOps teams to deploy applications and ensure seamless integration with the production environment.
- Stay up to date with emerging technologies and industry trends, and recommend improvements to the development process.
- Mentor and guide junior developers, sharing your knowledge and expertise.

Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a MERN Stack Developer, with at least 5 years of professional experience.
- Strong proficiency in JavaScript, HTML, CSS, and related front-end technologies.
- Extensive experience with React.js and its ecosystem (Redux, React Router, etc.).
- In-depth knowledge of server-side development using Node.js and Express.js.
- Experience with MongoDB or other NoSQL databases, including data modeling and querying.
- Solid understanding of RESTful APIs and experience building them.
- Familiarity with version control systems (e.g., Git) and agile development methodologies.
- Strong problem-solving skills and attention to detail.
- Ability to work independently and in a team environment, handling multiple projects and deadlines.
- Excellent communication and collaboration skills.
- Experience with cloud platforms (e.g., AWS, Azure) and containerization technologies (e.g., Docker) is a plus.

Good to have:
- Knowledge of Socket.io or the Node.js Net module.
- Knowledge of Redux and side-effect libraries such as redux-saga or redux-thunk.
- Knowledge of Redis.
- Knowledge of build tools (Webpack, gulp, grunt) and CI/CD.
- Knowledge of AWS services such as EC2, S3, and Lambda.

Posted 1 month ago

Apply

6.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Your Role
- Knowledge of cloud computing using AWS services such as Glue, Lambda, Athena, Step Functions, and S3.
- Knowledge of a programming language: Python or Scala.
- Knowledge of Spark/PySpark (Core and Streaming), with hands-on experience building streaming transformations (see the PySpark sketch below).
- Knowledge of building real-time and batch ingestion and transformation pipelines.
- Works in the area of software engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to meet the standard skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile
- Working experience and strong knowledge of Databricks is a plus.
- Analyze existing queries for performance improvements.
- Develop procedures and scripts for data migration.
- Provide timely scheduled management reporting.
- Investigate exceptions regarding asset movements.

What you'll love about working at Capgemini
We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. You can also take part in internal sports events, yoga challenges, or marathons. Capgemini serves clients across industries, so you may get to work on varied data engineering projects involving real-time data pipelines, big data processing, and analytics. You'll work extensively with AWS services like S3, Redshift, Glue, Lambda, and more.
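To make the Spark/PySpark expectations above concrete, here is a minimal PySpark batch-transformation sketch. The S3 paths, schema, and column names are illustrative placeholders, not details from this role.

```python
# Minimal PySpark batch job: read raw JSON from S3, transform, write Parquet.
# Bucket names and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-batch-transform").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")  # hypothetical path

cleaned = (
    raw.filter(F.col("order_id").isNotNull())            # drop incomplete records
       .withColumn("order_date", F.to_date("order_ts"))  # normalize timestamp to date
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("daily_total"))
)

cleaned.write.mode("overwrite").partitionBy("order_date") \
       .parquet("s3://example-curated-bucket/orders_daily/")
```

The same transformation logic carries over to Structured Streaming by swapping the batch reader/writer for readStream/writeStream.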

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 16 Lacs

Hyderabad, Pune, Chennai

Work from Office

Your Role
- 5+ years of experience implementing and supporting the following Enterprise Planning & Budgeting Cloud Services (EPBCS) modules: Financials, Workforce, Capital, and Projects.
- Experience in Enterprise Data Management Consolidation (EDMCS), Enterprise Profitability & Cost Management Cloud Services (EPCM), and Oracle Integration Cloud (OIC).
- 1+ full-lifecycle Oracle EPM Cloud implementation.
- Experience creating forms, OIC integrations, and complex business rules.
- Understand dependencies and interrelationships between the various components of Oracle EPM Cloud.
- Keep abreast of the Oracle EPM roadmap and key functionality to identify opportunities to enhance current processes within the entire Financials ecosystem.

Your Profile
- Proven ability to collaborate with internal clients in an agile manner, leveraging design-thinking approaches.
- Collaborate with FP&A to facilitate the planning, forecasting, and reporting process for the organization.
- Create and maintain system documentation, both functional and technical.
- Experience with Python and AWS Cloud (Lambda, Step Functions, EventBridge, etc.) is preferred (see the sketch below).

What you'll love about Capgemini
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits, including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. You can also take part in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders, or create solutions to overcome societal and environmental challenges.

About Capgemini
Location: Hyderabad, Pune, Chennai, Bengaluru, Mumbai
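For the preferred Python/AWS skills above, here is a hedged boto3 sketch that starts a Step Functions state machine of the kind that might orchestrate an EPM data load. The state machine ARN and input payload are hypothetical.

```python
# Hypothetical sketch: kick off a Step Functions state machine that runs an
# EPM data-load sequence. The ARN and payload are placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:epm-data-load",
    input=json.dumps({"cube": "Financials", "period": "2025-06"}),
)
print(response["executionArn"])
```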

Posted 1 month ago

Apply

2.0 - 4.0 years

2 - 6 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Your Profile
- Strong experience with Amazon Lex and Amazon Connect.
- Proficiency in AWS services such as Lambda, CloudWatch, DynamoDB, S3, IAM, and API Gateway.
- Programming knowledge in Node.js or Python.
- Experience with contact center workflows and customer experience design.
- Understanding of NLP and conversational design best practices.
- Familiarity with CI/CD processes and tools in the AWS ecosystem.

Your Role
- Design, develop, and maintain voice and chat bots using Amazon Lex.
- Integrate Lex bots with Amazon Connect for a seamless customer experience.
- Configure Amazon Connect contact flows, queues, routing profiles, and Lambda integrations.
- Develop and deploy AWS Lambda functions for backend logic and Lex fulfillment (see the Python sketch below).
- Collaborate with cross-functional teams to understand requirements and deliver effective solutions.
- Monitor and optimize bot performance, ensuring high availability and responsiveness.
- Troubleshoot and resolve issues related to AWS services and integrations.

What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits, including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. You can also take part in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders, or create solutions to overcome societal and environmental challenges.

About Capgemini
Location: Pune, Mumbai, Hyderabad, Chennai, Bengaluru
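As referenced in the role bullets, here is a minimal sketch of a Lex fulfillment Lambda in Python, assuming the Lex V2 event and response shapes; the intent and slot names are illustrative.

```python
# Minimal AWS Lambda fulfillment handler for an Amazon Lex V2 bot.
# Event/response shapes follow the Lex V2 format; the intent, slot,
# and message content are hypothetical.
def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}

    if intent["name"] == "CheckOrderStatus":  # hypothetical intent
        order_id = (slots.get("OrderId") or {}).get("value", {}).get("interpretedValue")
        message = f"Order {order_id} is out for delivery."  # stubbed lookup
    else:
        message = "Sorry, I can't help with that yet."

    intent["state"] = "Fulfilled"
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```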

Posted 1 month ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Job Overview:
We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge-transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise:
- Strong hands-on experience in Controller administration (SaaS or on-prem).
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring.
- Analytics and Business iQ.
- Service endpoints, data collectors, and custom metrics.
- Experience with AppDynamics Dash Studio and advanced dashboards.
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis.
- Knowledge of AppDynamics APIs for automation and custom integrations (see the sketch below).
- Ability to troubleshoot agent issues, data gaps, and controller health.

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite).
- Experience in containerized environments (Kubernetes, Docker).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions).
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment.
- Experience with Infrastructure as Code (Terraform, CloudFormation).

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk.
- Experience with full-stack monitoring (APM + infrastructure + logs + RUM + synthetic).
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets.
- Familiarity with ITSM tools and alert-routing mechanisms.
- Understanding of business KPIs and mapping them to technical metrics.

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional.
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer.
- Kubernetes (CKA/CKAD) or equivalent is a plus.

Education:
- Bachelor's degree in Computer Science, IT, or a related field.
- Master's degree (optional but preferred).
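A hedged illustration of the AppDynamics API automation mentioned above: querying the Controller REST API for registered applications with Python. The controller host, account, and credentials are placeholders; verify the endpoint against your controller version's documentation.

```python
# Hedged sketch: query the AppDynamics Controller REST API for registered
# applications, the kind of call used for automation/custom integrations.
# Host and credentials are placeholders.
import requests

CONTROLLER = "https://example.saas.appdynamics.com"  # placeholder host
AUTH = ("apiuser@customer1", "secret")               # user@account, password

resp = requests.get(
    f"{CONTROLLER}/controller/rest/applications",
    params={"output": "JSON"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
for app in resp.json():
    print(app["name"], app["id"])
```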

Posted 1 month ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Job Overview:
We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge-transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise:
- Strong hands-on experience in Controller administration (SaaS or on-prem).
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring.
- Analytics and Business iQ.
- Service endpoints, data collectors, and custom metrics.
- Experience with AppDynamics Dash Studio and advanced dashboards.
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis.
- Knowledge of AppDynamics APIs for automation and custom integrations.
- Ability to troubleshoot agent issues, data gaps, and controller health.

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite); see the CloudWatch sketch below.
- Experience in containerized environments (Kubernetes, Docker).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions).
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment.
- Experience with Infrastructure as Code (Terraform, CloudFormation).

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk.
- Experience with full-stack monitoring (APM + infrastructure + logs + RUM + synthetic).
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets.
- Familiarity with ITSM tools and alert-routing mechanisms.
- Understanding of business KPIs and mapping them to technical metrics.

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional.
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer.
- Kubernetes (CKA/CKAD) or equivalent is a plus.

Education:
- Bachelor's degree in Computer Science, IT, or a related field.
- Master's degree (optional but preferred).
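For the AWS telemetry side of this role, a small boto3 sketch that pulls an EC2 CPU metric from CloudWatch, the kind of data often correlated with APM findings. The instance ID is a placeholder.

```python
# Hedged boto3 sketch: fetch average EC2 CPU utilization from CloudWatch
# for the last hour. The instance id is a hypothetical placeholder.
import datetime
import boto3

cw = boto3.client("cloudwatch")
now = datetime.datetime.now(datetime.timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```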

Posted 1 month ago

Apply

3.0 - 8.0 years

13 - 17 Lacs

Gurugram

Work from Office

Project Role: Security Architect
Project Role Description: Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations.
Must-have skills: DevOps
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
Seeking a results-driven DevSecOps Engineer with deep experience in cloud security automation, particularly with Wiz, AWS, Terraform, and CI/CD pipelines. This role demands a strong background in security policy implementation, cloud infrastructure management, and automation of security and compliance workflows.

Roles & Responsibilities:
- Develop custom Rego policies within Wiz to enforce security standards across AWS infrastructure, with a focus on Terraform and CloudFormation templates.
- Ensure continuous compliance through proactive, automated security checks integrated into CI/CD pipelines.
- Build an end-to-end automated compliance pipeline using Python and GitHub Actions, enabling real-time alerts for policy violations.
- Integrate compliance updates directly into Confluence, improving visibility and reducing response time across teams.
- Automate onboarding of GitHub repositories into Wiz, using GitHub APIs to extract data (teams, users, repos) and transform it into Terraform-compatible variables for streamlined policy enforcement.
- Design a Lambda-based automation framework to monitor Wiz CCR release notes, detect high-severity changes, and notify stakeholders via SNS, SQS, and JIRA tickets (see the sketch below).
- Maintain Confluence documentation dynamically for transparent and traceable change management.
- Replace legacy workflows (Power Automate and Jira) with a Selenium + Java automation framework for managing Wiz CCRs, enabling scalable, testable automation for rule creation and updates.
- Bring hands-on experience with Amazon EC2 and RDS, including provisioning, hardening, patching, and monitoring.
- Automate infrastructure tasks related to the EC2 and RDS lifecycle via Terraform and CI/CD integration.
- Use Postman extensively for validating Wiz APIs, GitHub APIs, and internal tools; create collections and automated test scripts for integration testing of security workflows.

Professional & Technical Skills:
- Cloud security: Wiz, Rego policies
- Cloud platforms: AWS (EC2, RDS, Lambda, SNS, SQS)
- IaC: Terraform, CloudFormation
- Automation & CI/CD: GitHub Actions, Python, Selenium (Java)
- DevOps & integration: GitHub API, Postman, JIRA, Confluence
- Scripting: Python, Shell, Java
- Proven ability to automate cloud security processes using modern DevOps tools.
- Strong problem-solving skills and ability to design scalable automation frameworks.
- Experience working in regulated environments with security compliance standards (e.g., CIS, NIST, ISO 27001) is a plus.

Additional Information:
- The candidate should have a minimum of 3 years of experience in DevOps.
- This position is based at our Gurugram office.
- A 15-year full-time education is required.

Qualification: 15 years full-time education
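A minimal sketch of the notification leg of the Lambda-based framework described above: forwarding high-severity items to an SNS topic with boto3. The event shape and topic ARN are assumptions, not Wiz's actual payload format.

```python
# Illustrative Lambda sketch: publish high-severity findings to SNS.
# The input event shape and topic ARN are hypothetical placeholders.
import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:wiz-ccr-alerts"
)

def lambda_handler(event, context):
    findings = event.get("findings", [])  # hypothetical input shape
    high = [f for f in findings if f.get("severity") == "HIGH"]
    for finding in high:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"High-severity CCR change: {finding.get('id')}",
            Message=json.dumps(finding),
        )
    return {"notified": len(high)}
```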

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

Kolkata

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
We are seeking a proactive and technically proficient Application Lead to manage and guide a team of developers in designing, building, and deploying scalable cloud-based applications. This role requires deep expertise in Python and AWS, a strong grasp of application architecture, and experience leading agile development teams. The ideal candidate will combine hands-on development skills with leadership capabilities to drive end-to-end delivery of complex technical solutions.

Roles & Responsibilities:
- Lead a team of developers and analysts in delivering high-quality Python-based cloud applications.
- Architect and design scalable, maintainable, and secure AWS solutions in alignment with business needs.
- Act as a technical liaison between stakeholders, architects, and development teams.
- Ensure adherence to coding standards, best practices, and project timelines.
- Drive sprint planning, task estimation, and agile ceremonies to ensure timely delivery.
- Mentor junior team members and conduct regular performance and code reviews.
- Oversee CI/CD pipeline setup, monitoring, and deployment in a DevOps environment.
- Identify and mitigate technical risks and issues proactively.

Professional & Technical Skills:
1. Languages & frameworks: Python (Flask, Django, FastAPI); understanding of JavaScript and Angular (optional); see the FastAPI sketch below
2. Cloud expertise: strong experience with AWS (EC2, Lambda, S3, RDS, CloudFormation, API Gateway, IAM)
3. Architecture: proven experience designing cloud-native, microservices-based architectures
4. DevOps & CI/CD: Git, Jenkins, Docker, CloudFormation/Terraform, CodePipeline, CloudWatch
5. Project management: Agile/Scrum methodology; JIRA or equivalent tools
6. Soft skills: excellent communication, leadership, problem-solving, and stakeholder management skills

Additional Information:
1. 8+ years of experience in application development, with at least 2-3 years in a technical leadership role
2. AWS certification (e.g., AWS Solutions Architect Associate) is a plus
3. Experience working with geographically distributed teams is desirable
4. Willingness to take ownership and accountability for delivery outcomes

Qualification: 15 years full-time education
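To ground the Python (FastAPI) and AWS skills listed above, a minimal FastAPI service sketch; the resource model and routes are illustrative, with an in-memory dict standing in for RDS/DynamoDB.

```python
# Minimal FastAPI service sketch; run with: uvicorn main:app
# Resource names are hypothetical; the dict stands in for a real datastore.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    amount: float

ORDERS: dict[str, Order] = {}  # in-memory stand-in for RDS/DynamoDB

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```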

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 7 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Experience: 6+ years
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote (New Delhi, Bengaluru, Mumbai)

Technical Lead

What you'll own:
- Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all on aggressive timelines

Skills & Experience We Expect:
We're looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss (see the sketch below)
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases
- Experience tuning vector indexers for performance, memory footprint, and recall
- Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

Skills: MAM, app integration
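A toy sketch of the vector-similarity core behind the semantic search work described above: cosine top-k over an in-memory embedding matrix with NumPy. A production system would use Faiss, Qdrant, or similar; the embeddings here are random stand-ins.

```python
# Toy cosine-similarity search over fake asset embeddings. Real deployments
# would use a vector index (Faiss/Qdrant/etc.) and real embedding vectors.
import numpy as np

rng = np.random.default_rng(0)
asset_vectors = rng.normal(size=(10_000, 384)).astype(np.float32)  # fake embeddings
asset_vectors /= np.linalg.norm(asset_vectors, axis=1, keepdims=True)

def top_k(query_vec: np.ndarray, k: int = 5) -> np.ndarray:
    q = query_vec / np.linalg.norm(query_vec)
    scores = asset_vectors @ q            # cosine similarity (unit vectors)
    return np.argsort(scores)[::-1][:k]   # indices of the k nearest assets

print(top_k(rng.normal(size=384).astype(np.float32)))
```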

Posted 1 month ago

Apply

8.0 - 11.0 years

10 - 15 Lacs

Chennai

Work from Office

- 8 to 11 years of experience in a software engineering role, with a focus on backend or full-stack development
- Proven track record of AI/LLM application development or integration
- Strong experience in Python-based AI application development with API engineering
- Proficiency in RESTful APIs, microservices, and cloud-based AI deployments (AWS, Kubernetes, Lambda)
- Familiarity with AI orchestration tools for AI workflow automation
- Knowledge of SQL and NoSQL databases (PostgreSQL) for AI-powered search
- Experience working in Agile teams and delivering AI-driven features in a cloud-first environment
- Bachelor's degree in Computer Science or a related field
- Understanding of healthcare data privacy regulations (HIPAA, GDPR) is a plus

BEHAVIORS & ABILITIES REQUIRED:
- Ability to learn and adapt rapidly while producing high-quality code
- Capable of translating AI/LLM concepts into practical, scalable software solutions
- Innovative thinker who finds creative ways to execute when historical context is limited
- Strong analytical skills to assess potential designs and choose the best solution for the business
- Committed to delivering results under challenging circumstances
- Skilled at mentoring and coaching to elevate junior team members
- Able to uphold best engineering practices for quality, security, and performance

RESPONSIBILITIES MAY INCLUDE, BUT ARE NOT LIMITED TO:

Technical Execution
- AI-powered software development and API integration: design, develop, and deploy AI-powered applications that enhance RCM automation; develop AI-driven microservices and ensure cloud-native deployment.
- AI optimization and performance tuning: optimize AI model performance via API configurations rather than custom fine-tuning; leverage AI orchestration tools (LangChain) to automate complex AI workflows (see the sketch below).
- Microservices and APIs: build and maintain RESTful APIs for AI features; integrate with internal and external systems.
- Accurately estimate development tasks and own them through completion; deliver high-quality software components.
- Ensure solutions meet reliability, performance, and compliance standards (especially for healthcare data).
- Evaluate and propose new technologies: identify scalable open-source frameworks or cloud-based AI services, ensuring robust and cost-effective implementations.
- Code reviews and quality assurance: participate in peer reviews, ensuring adherence to coding conventions and best practices; write, debug, and deploy code to production, promptly delivering fixes.

Contributions to the Team
- Subject matter expert: serve as a go-to resource for AI/LLM-related application architecture and best practices; stay current with industry trends (agentic AI, genAI) and share insights with the broader team.
- Scrum team participation: collaborate in Agile ceremonies (daily stand-ups, sprint planning, retrospectives); commit to sprint goals and deliver incremental value to customers and internal stakeholders.
- Team accountability: encourage a culture of ownership ("if you build it, you support it post-release"); help the team continuously improve velocity, code quality, and automation.

Cross-Functional Coordination & Communication
- Partner with product and UX: translate requirements for AI-driven RCM features into technical designs, ensuring alignment with user needs; collaborate on user experience improvements requiring generative AI insights (e.g., claims code suggestions).
- Stakeholder engagement: work closely with compliance/security teams to maintain HIPAA/data governance standards; communicate technical roadmaps, dependencies, and timelines effectively to non-technical audiences.
- Broad knowledge sharing: educate peers on AI/ML design patterns, cloud infrastructures, and best practices; build strong relationships with cross-functional teams, bridging technology and business domains.
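A hedged sketch of the API-based LLM integration pattern this role centers on (API configuration rather than fine-tuning): calling an OpenAI-compatible chat-completions endpoint from backend code. The URL, model ID, and key handling are placeholders.

```python
# Hedged sketch: call an OpenAI-compatible chat-completions endpoint.
# Endpoint URL, model id, and prompt are hypothetical placeholders.
import os
import requests

API_URL = os.environ.get("LLM_API_URL", "https://api.openai.com/v1/chat/completions")
API_KEY = os.environ["LLM_API_KEY"]  # injected via environment, never hard-coded

def suggest_claim_codes(note: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # placeholder model id
            "messages": [
                {"role": "system", "content": "Suggest billing codes for the note."},
                {"role": "user", "content": note},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```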

Posted 1 month ago

Apply

3.0 - 5.0 years

20 - 25 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

Expected Notice Period: 30 Days
Shift: (GMT+11:00) Australia/Melbourne (AEDT)
Opportunity Type: Remote
Placement Type: Full-time indefinite contract (40 hrs a week / 160 hrs a month)
(Note: This is a requirement for one of Uplers' clients - Okkular)

What do you need for this opportunity?
Must-have skills: communication skills, problem solving, agentic AI, AWS services (Lambda, SageMaker, Step Functions), fastai, LangChain, large language models (LLMs), natural language processing (NLP), PyTorch, Go, Python

Okkular is looking for:

About the job
Company Description: We are a leading provider of fashion e-commerce solutions, leveraging generative AI to empower teams with innovative tools for merchandising and product discovery. Our mission is to enhance every product page with engaging, customer-centric narratives, propelling accelerated growth and revenue generation. Join us in shaping the future of online fashion retail through cutting-edge technology and unparalleled creativity within the Greater Melbourne Area.

Role Description: This is a full-time remote position in India as a Senior AI Engineer. The Senior AI Engineer will be responsible for pattern recognition, neural network development, software development, and natural language processing tasks on a daily basis.

Qualifications:
- Proficiency in sklearn, PyTorch, and fastai for implementing algorithms and training/improving models (see the sklearn sketch below).
- Familiarity with Docker and AWS cloud services such as Lambda, SageMaker, and Bedrock.
- Familiarity with Streamlit.
- Knowledge of LangChain, LlamaIndex, Ollama, OpenRouter, and other relevant technologies.
- Expertise in pattern recognition and neural networks.
- Experience in agentic AI development.
- Strong background in computer science and software development.
- Knowledge of natural language processing (NLP).
- Ability to work effectively in a fast-paced environment and collaborate with cross-functional teams.
- Strong problem-solving skills and attention to detail.
- A Master's or PhD in Computer Science, AI, or a related field is preferred but not mandatory; strong experience in the field is a sufficient alternative.
- Prior experience in fashion e-commerce is advantageous.

Languages: Python, Go
Engagement Type: Direct hire
Job Type: Permanent
Location: Remote
Working time: 2:30 PM IST to 11:30 PM IST

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview!
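A small, self-contained sklearn example of the train/evaluate workflow named in the qualifications; the bundled digits dataset is a stand-in for real product data.

```python
# Minimal sklearn pattern-recognition workflow: split, train, evaluate.
# The digits dataset is a stand-in; real work would use product data.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```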

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Role: Python/Data Engineer
Level Expected: 4-9 yrs

Must Haves:
1. Good analytical and problem-solving skills.
2. Good hands-on experience developing Python programs with Python 3.10.
3. Familiarity with Python frameworks such as Django and Flask.
4. Good knowledge of database technologies such as RDBMS, MongoDB, and Hibernate.
5. Good knowledge of REST APIs, both creation and consumption (see the sketch below).
6. Basic knowledge of AWS services: EC2, S3, Lambda functions, ALB.
7. Familiarity with Git, JIRA, and other dev tools.

Good to have:
1. Hands-on experience with AWS services, mainly Lambda function creation.
2. Basic knowledge of Databricks.
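A minimal sketch combining the Flask, REST API, and S3 items above; the bucket name is a hypothetical placeholder.

```python
# Minimal Flask REST endpoint that lists objects from an S3 bucket.
# The bucket name is a placeholder; AWS credentials come from the
# usual boto3 credential chain.
import boto3
from flask import Flask, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "example-data-bucket"  # hypothetical bucket

@app.get("/files")
def list_files():
    resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=100)
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    return jsonify(files=keys)

if __name__ == "__main__":
    app.run(port=8080)
```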

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 13 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote

What do you need for this opportunity?
Must-have skills: ML, Python

Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do:
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS/EC2
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery
- Work closely with the founder, product, and UX team to translate business needs into working product
- Make architecture and infrastructure decisions, from media processing to task queues to storage
- Own the performance, reliability, and cost-efficiency of our core services
- Hire and mentor junior/mid-level engineers over time
- Drive technical planning, sprint prioritization, and trade-off decisions

We also expect:
- A customer-centric approach: you think about how your work affects end users and the product experience, not just model performance
- A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket
Skills & Experience We Expect:

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments
- 2-3 years of experience leading engineering teams of 5+ engineers

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools
- Familiarity with API Gateway and microservices architecture best practices
- Proficient with S3, DynamoDB, and other AWS-native data services
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems
- Strong grasp of IAM, roles, and security best practices in cloud environments

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design
- Python: experience with FastAPI and building production-grade MLOps pipelines
- Node.js & TypeScript: strong backend engineering and API development
- Deep understanding of RESTful API design and implementation
- Docker: 3+ years of containerization experience for building/deploying services
- 2+ years of hands-on experience deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2)

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions)
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV (see the sketch below)
- Database design and optimization for low-latency, high-availability systems

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps
- Familiarity with Redux, Context API, and modern state management patterns
- Comfortable with modern build tools, CI/CD, and frontend deployment practices

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems
- Experience with event-driven architectures using queues or pub/sub
- Implementing caching strategies (e.g., Redis, CDN edge caching)
- Architecting high-performance image/media pipelines

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery
- Skilled at writing clear and concise technical documentation
- Experience mentoring engineers, conducting code reviews, and fostering growth
- Track record of shipping high-impact products in fast-paced environments
- Strong customer-centric and growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder
- Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster
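A tiny PIL sketch of a media-pipeline step (thumbnail generation with format normalization) of the kind referenced under System Optimization & Middleware; paths are illustrative.

```python
# Small PIL media-pipeline step: normalize mode and emit a JPEG thumbnail.
# Input/output paths are hypothetical placeholders.
from pathlib import Path
from PIL import Image

def make_thumbnail(src: Path, dst_dir: Path, size=(320, 320)) -> Path:
    dst_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(src) as im:
        im = im.convert("RGB")   # normalize mode for JPEG output
        im.thumbnail(size)       # in place, preserves aspect ratio
        out = dst_dir / (src.stem + "_thumb.jpg")
        im.save(out, "JPEG", quality=85)
    return out

print(make_thumbnail(Path("input/sample.png"), Path("output/")))
```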

Posted 1 month ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Mumbai, Bengaluru, Delhi

Work from Office

Must-have skills: Java, Groovy, SQL, AWS, data engineering, Agile, databases
Good-to-have skills: machine learning, Python, CI/CD, microservices, problem solving

Intro and job overview:
As a Senior Software Engineer II, you will join a team working with next-gen technologies on geospatial solutions in order to identify areas for future growth, new customers, and new markets in the geocoding data integrity space. You will work on the distributed computing platform to migrate the existing geospatial dataset creation processes, bringing more value to Precisely's customers and growing market share.

Responsibilities and Duties:
- Work on the distributed computing platform to migrate the existing geospatial data processes, including SQL scripts and Groovy scripts.
- Apply strong concepts in object-oriented programming and development languages: Java, SQL, Groovy/Gradle/Maven.
- Work closely with domain/technical experts and drive the overall modernization of the existing processes.
- Drive and maintain the AWS infrastructure and other DevOps processes.
- Participate in design and code reviews within a team environment to eliminate errors early in the development process.
- Participate in problem determination and debugging of software product issues, using technical skills and tools to isolate the cause of the problem efficiently and in a timely manner.
- Provide documentation needed to thoroughly communicate software functionality.
- Present technical features of the product to customers and stakeholders as required.
- Ensure timelines and deliverables are met.
- Participate in the Agile development process.

Requirements and Qualifications:
- UG: B.Tech/B.E., or PG: M.S./M.Tech in Computer Science, Engineering, or a related discipline
- At least 5-7 years of experience implementing and managing geospatial solutions
- Expert level in Java and Python; Groovy experience is preferred
- Expert level in writing optimized SQL queries, procedures, and database objects to support data extraction and manipulation (see the sketch below)
- Strong concepts in object-oriented programming and development languages: Java, SQL, Groovy/Gradle/Maven
- Expert in script automation with Gradle and Maven
- Problem solving and troubleshooting: proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively
- Experience with SQL, data warehousing, and data engineering concepts
- Experience with AWS-platform Big Data technologies (IAM, EC2, S3, EMR, Redshift, Lambda, Aurora, SNS, etc.)
- Strong analytical, problem-solving, data analysis, and research skills
- Good knowledge of continuous build integration (Jenkins and GitLab pipelines)
- Experience with agile development and working with agile engineering teams
- Excellent interpersonal skills
- Knowledge of microservices and cloud-native frameworks
- Knowledge of machine learning / AI
- Knowledge of the Python programming language
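A stand-alone sketch of the SQL-from-Python pattern referenced above, using the standard-library sqlite3 module so it runs anywhere; the table and columns are illustrative, not the actual geospatial schema.

```python
# Self-contained SQL extraction/aggregation example using sqlite3.
# Table and columns are hypothetical stand-ins for geospatial data.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcels (region TEXT, area REAL)")
con.executemany(
    "INSERT INTO parcels VALUES (?, ?)",
    [("north", 12.5), ("north", 7.1), ("south", 9.8)],
)

# Aggregate extraction query of the kind that would be tuned for scale.
for region, total in con.execute(
    "SELECT region, SUM(area) FROM parcels GROUP BY region ORDER BY region"
):
    print(region, total)
```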

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About the Opportunity
Job Type: Application
Application deadline: 31 July 2025

Strategic Impact
As a Senior Data Engineer, you will directly contribute to our key organizational objectives:

Accelerated Innovation
- Enable rapid development and deployment of data-driven products through scalable, cloud-native architectures
- Empower analytics and data science teams with self-service, real-time, and high-quality data access
- Shorten time-to-insight by automating data ingestion, transformation, and delivery pipelines

Cost Optimization
- Reduce infrastructure costs by leveraging serverless, pay-as-you-go, and managed cloud services (e.g., AWS Glue, Databricks, Snowflake)
- Minimize manual intervention through orchestration, monitoring, and automated recovery of data workflows
- Optimize storage and compute usage with efficient data partitioning, compression, and lifecycle management

Risk Mitigation
- Improve data governance, lineage, and compliance through metadata management and automated policy enforcement
- Increase data quality and reliability with robust validation, monitoring, and alerting frameworks
- Enhance system resilience and scalability by adopting distributed, fault-tolerant architectures

Business Enablement
- Foster cross-functional collaboration by building and maintaining well-documented, discoverable data assets (e.g., data lakes, data warehouses, APIs)
- Support advanced analytics, machine learning, and AI initiatives by ensuring timely, trusted, and accessible data
- Drive business agility by enabling rapid experimentation and iteration on new data products and features

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics
- Be accountable for technical delivery and take ownership of solutions
- Lead a team of senior and junior developers, providing mentorship and guidance
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity
- Challenge the status quo by bringing the very latest data engineering practices and techniques

About you

Core Technical Skills
- Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3
- Experience designing event-based or streaming data architectures using Kafka
- Advanced expertise in Python and SQL; open to expertise in Java/Scala, but enterprise experience with Python is required
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation
- Data security and performance optimization: experience implementing data access controls to meet regulatory requirements
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings
- Experience implementing CDC ingestion
- Experience using orchestration tools (Airflow, Control-M, etc.); see the DAG sketch below
- Significant experience in software engineering practices using GitHub, code verification, validation, and use of copilots

Bonus Technical Skills
- Strong experience in containerization and deploying applications to Kubernetes
- Strong experience in API development using Python-based frameworks like FastAPI

Key Soft Skills
- Problem solving: leadership experience in problem-solving and technical decision-making
- Communication: strong in strategic communication and stakeholder engagement
- Project management: experienced in overseeing project lifecycles, working with project managers to manage resources
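For the orchestration bullet above, a minimal Airflow DAG sketch, assuming Airflow 2.x; the task bodies are stubs and the schedule is illustrative.

```python
# Minimal two-task Airflow DAG: ingest then transform, daily.
# Task bodies are stubs; dag_id and schedule are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source")  # stub: e.g. Kafka/S3 ingestion

def transform():
    print("clean and model")   # stub: e.g. Spark/Snowflake transform

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task  # ingest runs before transform
```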

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role:
Grade Level (for internal use): 10

The Team
As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's in it for you:
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities:
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full-lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC (see the Celery sketch below)
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define, and continuously improve, platform standards, coding guidelines, and operational runbooks
- Conduct code reviews and pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements:
- 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing with high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluating and adopting emerging tools, languages, and frameworks

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)
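A minimal Celery sketch for the Celery/Redis task-queue work listed above; the broker URL and task body are placeholders.

```python
# Minimal Celery app with one retrying extraction task.
# Broker URL and task body are hypothetical placeholders.
from celery import Celery

app = Celery("extraction", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3)
def extract_document(self, url: str) -> dict:
    try:
        # stub: fetch and parse the document at `url`
        return {"url": url, "status": "extracted"}
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)  # simple backoff on failure
```

Tasks would be enqueued with extract_document.delay(url) from the pipeline code and executed by a worker started with "celery -A module worker".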
What's In It For You

Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People / Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people; that's why we provide everything you, and your career, need to thrive at S&P Global.
- Health & Wellness: health care coverage designed for the mind and body.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer:
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only:
The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)

Posted 1 month ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role:
Grade Level (for internal use): 09

The Team
As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you:
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end to end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities:
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage); see the pytest sketch below
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements:
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance-tuning, and automation skills
- Openness to evaluating and adopting emerging tools and languages as needed

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2-6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)
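A small pytest sketch matching the testing expectations above; the function under test is illustrative. Run with "pytest -q".

```python
# Minimal pytest example: a parametrized happy-path test plus an
# error-path test. The function under test is a hypothetical helper.
import pytest

def normalise_ticker(raw: str) -> str:
    if not raw or not raw.strip():
        raise ValueError("empty ticker")
    return raw.strip().upper()

@pytest.mark.parametrize(
    "raw, expected",
    [("spgi", "SPGI"), ("  ibm ", "IBM")],
)
def test_normalise_ticker(raw, expected):
    assert normalise_ticker(raw) == expected

def test_empty_ticker_rejected():
    with pytest.raises(ValueError):
        normalise_ticker("   ")
```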
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People. Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
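
For illustration, a minimal sketch of the Celery/Redis task-queue pattern this role describes, feeding an ML inference endpoint. All names here (the broker URL, the SageMaker endpoint, the source URL) are hypothetical placeholders, not S&P Global's actual stack:

```python
# Minimal sketch: a Redis-backed Celery pipeline that extracts a document
# and sends it to a deployed model endpoint for scoring.
import json

import boto3
import requests
from celery import Celery

app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",   # Redis as the task broker
    backend="redis://localhost:6379/1",  # and as the result backend
)

@app.task(bind=True, max_retries=3)
def extract_document(self, url: str) -> dict:
    """Fetch a source document; retry with backoff on transient failures."""
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        return {"url": url, "text": resp.text}
    except requests.RequestException as exc:
        raise self.retry(exc=exc, countdown=60)

@app.task
def score_document(doc: dict) -> dict:
    """Send extracted text to a deployed model endpoint for inference."""
    client = boto3.client("sagemaker-runtime")
    resp = client.invoke_endpoint(
        EndpointName="doc-classifier",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"text": doc["text"][:10000]}),
    )
    doc["score"] = json.loads(resp["Body"].read())
    return doc

# Chain the two stages; Celery workers pull both tasks from Redis.
pipeline = extract_document.s("https://example.com/filing") | score_document.s()
# Calling pipeline.delay() enqueues the chain for asynchronous execution.
```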

Posted 1 month ago

Apply

8.0 - 12.0 years

22 - 27 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You'll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What's in it for you: Be part of a global company and build solutions at enterprise scale. Lead and grow a technically strong ML engineering function. Collaborate on and solve high-complexity, high-impact problems. Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations). Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming). Serve as a hands-on lead, writing code, conducting reviews, and troubleshooting, to extend and operate our data platforms. Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions. Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback. Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems. Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement. Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs. What We're Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus. Expert in Python and Apache for large-scale data processing. Deep experience deploying and operating ML pipelines on AWS or GCP. Hands-on proficiency with container/orchestration tooling. Solid understanding of the full ML model lifecycle and CI/CD principles. Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow). Strong OOP design patterns, Test-Driven Development, and enterprise system architecture. Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets. Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes. Excellent problem-solving, debugging, and performance-tuning abilities. Ability to communicate technical change clearly to non-technical audiences. Nice to have: Redis, Celery, SQS and Lambda based event-driven pipelines. Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale. Experience with Apache Avro and Apache. Familiarity with Java and/or .NET Core (C#). What's In It For You. Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People. Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. 
Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group)
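
As a sketch of the batch orchestration this role leads, an Airflow DAG wiring extraction, transformation, and model scoring in sequence. The DAG id, schedule, and task bodies are illustrative placeholders:

```python
# Minimal Airflow DAG sketch: extract -> transform -> score, run daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    """Pull raw records from the source system (placeholder)."""

def transform(**context):
    """Apply the ETL/data-model logic (placeholder)."""

def score(**context):
    """Invoke the deployed model for batch inference (placeholder)."""

with DAG(
    dag_id="batch_scoring_pipeline",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    score_task = PythonOperator(task_id="score", python_callable=score)

    extract_task >> transform_task >> score_task  # linear dependency chain
```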

Posted 1 month ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Mohali

Work from Office

The Cloud Computing Training Expert will be responsible for delivering high-quality training sessions, developing curriculum, and guiding students toward industry certifications and career opportunities. Key Responsibilities: 1. Training Delivery: Design, develop, and deliver high-quality cloud computing training through courses, workshops, boot camps, and webinars. Cover a broad range of cloud topics, including but not limited to: Cloud Fundamentals (AWS, Azure, Google Cloud); Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Serverless Computing; Cloud Security, Identity & Access Management (IAM), Compliance; DevOps & CI/CD Pipelines (Jenkins, Docker, Kubernetes, Terraform, Ansible); Networking in the Cloud, Virtualization, and Storage Solutions; Multi-cloud Strategies & Cost Optimization. 2. Curriculum Development: Develop and continuously update training materials, hands-on labs, and real-world projects. Align curriculum with cloud certification programs (AWS Certified Solutions Architect, Azure Administrator, Google Cloud Professional, etc.). 3. Training Management: Organize and manage cloud computing training sessions, ensuring smooth delivery and active student engagement. Track student progress and provide guidance, feedback, and additional learning resources. 4. Technical Support & Mentorship: Assist students with technical queries and troubleshooting related to cloud platforms. Provide career guidance, helping students pursue cloud certifications and job placements in cloud computing and DevOps roles. 5. Industry Engagement: Stay updated on emerging cloud technologies, trends, and best practices. Represent ASB at cloud computing conferences, industry events, and tech forums. 6. Assessment & Evaluation: Develop and administer hands-on labs, quizzes, and real-world cloud deployment projects. Evaluate learner performance and provide constructive feedback. Required Qualifications & Skills: > Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Cloud Computing, or a related field. > Hands-on Cloud Experience: 3+ years of experience in cloud computing, DevOps, or cloud security roles. Strong expertise in AWS, Azure, and Google Cloud, including cloud architecture, storage, and security. Experience in Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines. > Teaching & Communication Skills: 2+ years of experience in training, mentoring, or delivering cloud computing courses. Ability to explain complex cloud concepts in a clear and engaging way. > Cloud Computing Tools & Platforms: Experience with AWS services (EC2, S3, Lambda, RDS, IAM, CloudWatch, etc.). Hands-on experience with Azure and Google Cloud solutions. Familiarity with DevOps tools (Jenkins, GitHub Actions, Kubernetes, Docker, Prometheus, Grafana, etc.). > Passion for Education: A strong desire to train and mentor future cloud professionals. Preferred Qualifications: > Cloud Certifications (AWS, Azure, Google Cloud): AWS Certified Solutions Architect, AWS DevOps Engineer, Azure Administrator, Google Cloud Professional Architect, or a similar certification. > Experience in Online Teaching: Prior experience in delivering online training (Udemy, Coursera, or LMS platforms). > Knowledge of Multi-Cloud & Cloud Security: Understanding of multi-cloud strategies, cloud cost optimization, and cloud-native security practices. 
> Experience in Hybrid Cloud & Edge Computing: Familiarity with hybrid cloud deployment, cloud automation, and emerging edge computing trends.
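
For a flavour of the hands-on labs such a curriculum might include, a short boto3 exercise touching S3 and Lambda. Bucket and function names are hypothetical (S3 bucket names must be globally unique), and the Lambda function is assumed to have been deployed in an earlier lab step:

```python
# Minimal boto3 lab sketch: create an S3 bucket, upload an object, and
# invoke a pre-deployed Lambda function.
import json

import boto3

REGION = "ap-south-1"
BUCKET = "asb-cloud-lab-demo"  # hypothetical

s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
s3.put_object(Bucket=BUCKET, Key="hello.txt", Body=b"hello, cloud")

lam = boto3.client("lambda", region_name=REGION)
resp = lam.invoke(
    FunctionName="lab-hello",  # hypothetical; deployed beforehand
    Payload=json.dumps({"student": "demo"}),
)
print(resp["Payload"].read().decode())
```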

Posted 1 month ago

Apply

3.0 - 6.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Title: Cloud Engineer for Data Platform group; Department: Enterprise Engineering; Location: Bengaluru; Level: 5 (Technical Consultant). We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our Data Platform team and feel like you are part of something bigger. About your team: The Data Platform team manage the products and technical infrastructure that underpin the use of data at Fidelity - databases (Oracle, SQL Server, PostgreSQL), data streaming (Kafka, SnapLogic), data security, data lake (Snowflake), data analytics (Power BI, Oracle Analytics Cloud), data management, and more. We provide both cloud-based and on-premises solutions as well as automation and self-service tools. The company predominantly uses AWS, but we also deploy on Azure and Oracle Cloud in certain situations. About your role: Your role will be to use your skills (and the many skills you will acquire on the job) to develop Infrastructure as Code (IaC) solutions using Terraform and Python, with a strong focus on AWS and Azure environments. A significant aspect of this role involves creating and maintaining Terraform modules to standardise and optimise infrastructure provisioning across these platforms, with particular emphasis on the database components of the infrastructure. You will also develop CI/CD automation processes to enhance operational efficiency and reduce manual intervention, ensuring robust, scalable, and secure cloud infrastructure management. About you: You will be a motivated, curious and technically savvy person who is always collaborative and keeps the customer in mind with the work you perform. Required skills are: A strong development & infrastructure engineering background with hands-on experience in Infrastructure as Code solutions across multiple providers, focusing on delivering automated, scalable, and resilient infrastructure management. Proven practical experience in implementing effective automation solutions that meet infrastructure requirements, along with the ability to identify risks with mitigating actions. Practical experience of implementing simple & effective cloud and/or database solutions. Strong working knowledge of fundamental AWS concepts, such as IAM, networking, security, compute (Lambda, EC2), S3, SQS/SNS, scheduling tools. Python, Bash and SQL programming (PowerShell a bonus). Oracle or PostgreSQL database knowledge a bonus. Experience of delivering change through CI/CD using Terraform (GitHub Actions a bonus). Ability to work on tasks as a team player using Kanban Agile methodology, share knowledge and deal effectively with people from other company departments. Transparency of work with others to ensure maximum knowledge transfer & collaboration across global teams. Highly motivated team player forming strong relationships with colleagues, with excellent interpersonal skills. Excellent verbal & written communication in English.
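
To make the AWS requirements concrete, a sketch of the serverless glue code such a role writes alongside its Terraform modules: a Lambda handler consuming SQS messages and archiving them to S3. Queue wiring, bucket, and key names are hypothetical, not Fidelity's:

```python
# Minimal SQS-triggered Lambda sketch: parse each queued message and
# archive it to S3 for auditing.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "data-platform-audit"  # hypothetical

def handler(event, context):
    """Entry point for an SQS event source (a batch of records)."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        s3.put_object(
            Bucket=BUCKET,
            Key=f"events/{record['messageId']}.json",
            Body=json.dumps(body).encode(),
        )
    return {"processed": len(records)}
```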

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Title: Technical Specialist; Department: Enterprise Engineering - Data Management Team; Location: Bangalore; Level: Grade 4. Introduction: We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our data management platform team in Enterprise Engineering and feel like you're part of something bigger. About your team: The Enterprise Data Management Team has been formed to execute FIL's data strategy and be a data-driven organization. The team would be responsible for providing standards and policies and managing central data projects, working with data programmes of various business functions across the organization in a hub-and-spoke model. The capabilities of this team include data cataloguing and data quality tooling. The team would also ensure the adoption of tooling, enforce the standards and deliver on foundational capabilities. About your role: The successful candidate is expected to be a part of the Enterprise Engineering team and work on the data management platform. We are looking for a skilled Technical Specialist to join our dynamic team to build and deliver capabilities for the data management platform to realise the organisation's data strategy. About you: Key Responsibilities: Create scalable solutions for data management, ensuring seamless integration with data sources, consistent metadata management, and reusable data quality rules and framework. Develop robust APIs to facilitate the efficient retrieval and manipulation of data from a range of internal and external data sources. Integrate with diverse systems and platforms, ensuring data flows smoothly and securely between sources and our data management ecosystem. Design and implement self-service workflows to empower data role holders, enhancing accessibility and usability of the data management platform. Collaborate with the product owner to understand requirements and translate them into technical solutions that promote data management and operational excellence. Work with data engineers within the team, guiding them with technical direction and establishing coding best practices. Mentor junior team members, fostering a culture of continuous improvement and technical excellence. Work to implement DevOps pipelines and ensure smooth, automated deployment of data management solutions. Monitor performance and reliability, proactively addressing issues and optimizing system performance. Stay up to date with emerging technologies, especially in GenAI, and incorporate advanced technologies to enhance existing frameworks and workflows. Experience and Qualifications Required: B.E./B.Tech. or M.C.A. in Computer Science from a reputed University. 7+ years of relevant industry experience. Experience of the complete SDLC cycle. Experience of working with multi-cultural and geographically disparate teams. Essential Skills (Technical): Strong proficiency in Python, with a good understanding of its ecosystems. Experience with Python libraries and frameworks such as Pandas, Requests, Flask, FastAPI, and web development concepts. Experience with RESTful APIs and microservices architecture. Deep understanding of AWS cloud services such as EC2, S3, Lambda, RDS, and experience in deploying and managing applications on AWS. Understanding of software development principles and design patterns. 
Candidates should have experience with Jenkins pipelines and hands-on experience writing testable code and unit tests, and should stay up to date with the latest releases and features to optimize system performance. Desirable Skills & Experience: Experience with database systems like Oracle, AWS RDS, DynamoDB. Ability to implement test-driven development. Understanding of Data Management concepts and their implementation using Python. Good knowledge of Unix scripting and the Windows platform. Ability to optimize data workflows for performance and efficiency. Ability to analyse complex problems in a structured manner and demonstrate multitasking capabilities. Personal Characteristics: Excellent interpersonal and communication skills. Self-starter with the ability to handle multiple tasks and priorities. Maintains a positive attitude that promotes teamwork within the company and a favourable image of the team. Must have an eye for detail and analyse/relate to the business problem in hand. Ability to develop and maintain good relationships with stakeholders. Flexible and positive attitude, openness to change. Self-motivation is essential; should demonstrate commitment to high-quality solutions. Ability to discuss both business and related technology/systems at various levels.
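
As a small illustration of the self-service APIs described above, a FastAPI sketch for registering reusable data-quality rules. The Rule model, routes, rule-expression DSL, and in-memory store are hypothetical placeholders for whatever the platform actually uses:

```python
# Minimal FastAPI sketch of a data-quality rule registry.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="dq-rule-registry")  # hypothetical service name

class Rule(BaseModel):
    name: str
    dataset: str
    expression: str  # e.g. "null_rate(email) < 0.01" (hypothetical DSL)

RULES: dict[str, Rule] = {}  # stand-in for a real metadata store

@app.post("/rules", status_code=201)
def create_rule(rule: Rule) -> Rule:
    """Register a reusable rule; reject duplicates."""
    if rule.name in RULES:
        raise HTTPException(status_code=409, detail="rule already exists")
    RULES[rule.name] = rule
    return rule

@app.get("/rules/{name}")
def get_rule(name: str) -> Rule:
    """Fetch a rule so pipelines can apply it to a dataset."""
    if name not in RULES:
        raise HTTPException(status_code=404, detail="rule not found")
    return RULES[name]
```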

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Noida, India

Work from Office

Full-stack developer with 6-8 years of experience in designing and developing robust, scalable, and maintainable applications applying Object-Oriented Design principles. Strong experience in Spring frameworks like Spring Boot, Spring Batch, Spring Data etc. and Hibernate, JPA. Strong experience in microservices architecture and implementation. Strong knowledge of HTML, CSS and JavaScript, React. Experience with SOAP web services, REST web services and the Java Messaging Service (JMS) API. Familiarity with designing, developing, and deploying web applications using Amazon Web Services (AWS). Good experience with AWS services - S3, Lambda, SQS, SNS, DynamoDB, IAM, API Gateway. Hands-on experience in SQL, PL/SQL and the ability to write complex queries. Hands-on experience with REST APIs. Experience with version control systems (e.g., Git). Knowledge of web standards and accessibility guidelines. Knowledge of CI/CD pipelines and experience in tools such as JIRA, Splunk, SONAR etc. Must have strong analytical and problem-solving abilities. Good experience in JUnit testing and mocking techniques. Experience in SDLC processes (Waterfall/Agile), Docker, Git, SonarQube. Excellent communication and interpersonal skills; ability to work independently and as part of a team. Mandatory Competencies: Java - Core Java; Others - Microservices; Java Fullstack - React JS; Java Fullstack - HTML CSS; Java - Spring Framework Core; Java Others - Spring Boot; Cloud - AWS; Java Others - Spring Batch; Java - Hibernate/JPA; Java Fullstack - JavaScript; Data on Cloud - AWS S3; Cloud - AWS Lambda; Java - SQL; Agile - Agile; Java Fullstack - WebServices/REST; Fundamental Technical Skills - Spring Framework/Hibernate/JUnit etc.; Beh - Communication and collaboration.

Posted 1 month ago

Apply

5.0 - 10.0 years

27 - 37 Lacs

Bengaluru

Work from Office

Konovo is a global healthcare intelligence company on a mission to transform research through technology, enabling faster, better, connected insights. Konovo's solutions empower organizations to make data-driven decisions that enhance patient outcomes and streamline healthcare processes. We supply healthcare organizations with real-time access to over 2 million healthcare professionals, the largest available anywhere in the world. Our 200+ employees are spread across 25 U.S. states and five countries, collaborating to support some of the largest organizations in the healthcare industry. Our customers include over 300 leading global pharmaceutical, medical device, market research agency, and consultancy companies. About the Role: As we move towards a product and platform-driven organisation from a services-based model, we are expanding our Bengaluru, India team. We are seeking a Senior Software Engineer to help design, build, and enhance cutting-edge solutions that power Konovo's platform. This role requires strong technical skills, a passion for building robust and innovative software, and the ability to collaborate effectively within a global, cross-functional environment. We are an established but fast-growing business, powered by innovation, data, and technology. Konovo's capabilities are delivered through our cloud-based platform, enabling customers to collect data from healthcare professionals and transform it into actionable insights using cutting-edge AI in conjunction with proven market research tools and techniques. As a Senior Software Engineer at Konovo, you will have the opportunity to design and implement the products that drive value for our customers, and shape our product and platform-driven solutions. Join us as a Senior Software Engineer and play a key role in shaping cutting-edge solutions, mentoring others, and driving innovative product capabilities at Konovo! How You'll Make an Impact: Build and Optimize: Design, develop, and deploy high-quality software solutions that power Konovo's global healthcare insights platform. Contribute to Agile Teams: Work closely within a cross-functional scrum team (Software, Quality, and Data Engineers, along with Product and Design) to iterate quickly and deliver impactful features. Drive Technical Excellence: Advocate for best practices in coding, architecture, testing, and performance optimization. Collaborate Globally: Engage with teams and stakeholders across multiple geographies, aligning technical work with broader business goals and standards. Mentor and Share Knowledge: While your primary role is as an individual contributor, provide guidance to junior engineers, helping them grow and improve their technical expertise. Champion Innovation: Actively participate in brainstorming sessions, sprint planning, and architectural reviews to propose creative solutions and help shape the technical direction of the team. Ensure Quality: Build high-quality software, designing quality and security into solutions, validating functionality with unit tests, and owning the quality of your deliverables. What We're Looking For: 5+ years of professional experience in software development, ideally working on complex, scalable applications. Strong communication and interpersonal skills, with the ability to collaborate effectively across departments and levels of the organization. Self-starter with an ability to think strategically, creatively, and analytically. Passion for learning new technologies and solving complex problems. 
Demonstrated expertise in: Agile methodology and tools, applied in a fast-paced environment. Software architecture and design for complex, real-world systems. Software craftsmanship, including SDLC, CI/CD, quality, and monitoring. Working with cloud technology (AWS preferred) in a SaaS environment. Supporting business-critical systems and products. Bachelor's or Master's degree in Computer Science (or equivalent). Preferred Tech Stack: Back end: Scala, Java, NodeJS (Lambda), Python. Front end: JavaScript, React, Backbone. Database: SQL, NoSQL (MongoDB/DocumentDB). AI/ML: Familiarity with ML/AI capabilities and concepts and eagerness to integrate them into our product offerings. Why Konovo? Work on cutting-edge AI-powered solutions and industry-leading services that transform healthcare insights. Be part of a mission-driven company that is revolutionizing healthcare decision-making. Join a fast-growing global team with career advancement opportunities. Thrive in a collaborative hybrid work environment that values innovation and flexibility. Make a real-world impact by helping healthcare organizations innovate faster. This is just the beginning of what we can achieve together. Join us at Konovo and help shape the future of healthcare technology! Apply now to be part of our journey.

Posted 1 month ago

Apply

2.0 - 7.0 years

20 - 35 Lacs

Pune

Remote

As a software engineer focused on Marketing and Customer Engagement at GoDaddy, you will have the opportunity to design, build, and maintain a platform that is a keystone to our customer experience, marketing, and business objectives. Everything we do starts with data. Ensure our team continues with a "Shift Left" focus on security; this includes the design and development of systems that can contain sensitive customer information. You will partner closely and collaborate with other GoDaddy teams of Engineers, Marketing Professionals, QA and Operations teams. Leverage industry best practices and methodologies such as Agile, Scrum, testing automation and Continuous Integration and Deployment. Your experience should include: 2+ years in software engineering, with 2+ years using AWS. Programming languages: C# and Python, along with SQL and Spark. The engineering position requires a minimum three-hour overlap with team members in the US-Pacific time zone. Strong experience with some (or all) of the following: Lambda and Step Functions, API Gateway, Fargate, ECS, S3, SQS, Kinesis, Firehose, DynamoDB, RDS, Athena, and Glue. Solid foundation in data structures and algorithms, and in-depth knowledge of, and passion for, coding standards and proven design patterns; RESTful and GraphQL APIs are examples. You might also have: DevOps experience is a plus (GitHub, GitHub Actions, Docker). Experience building CI/CD and server/deployment automation solutions, and container orchestration technologies.
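
A minimal sketch of the event plumbing such a role involves: a Lambda fanning Kinesis records out to DynamoDB for lookups and to Firehose for S3/Athena analytics. The table and delivery-stream names are hypothetical, not GoDaddy's:

```python
# Minimal Kinesis-triggered Lambda sketch: decode each record, write it to
# DynamoDB (hot path), and forward it to Firehose (cold path to S3).
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
firehose = boto3.client("firehose")
table = dynamodb.Table("customer-events")  # hypothetical; items must carry the key

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers payloads base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item=payload)  # keyed lookups for the hot path
        firehose.put_record(          # batched delivery to S3 for analytics
            DeliveryStreamName="events-to-s3",  # hypothetical
            Record={"Data": (json.dumps(payload) + "\n").encode()},
        )
```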

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Noida

Work from Office

VM, VNet, NSG, Load Balancer, Azure Firewall, Azure AD, Azure Front Door, Azure Backup, WAF. Proficiency in Azure Portal, Azure CLI, PowerShell. Knowledge of managing multiple Azure subscriptions and policies. Deploy/manage Web Apps and SQL databases in Azure. Use of Azure Monitor and Log Analytics for performance monitoring. Basic knowledge of Azure Virtual Desktop and cost optimization strategies. Any of EC2, S3, IAM, VPC, RDS, CloudWatch, CloudTrail, Lambda. Understanding of AWS Organizations and consolidated billing/security controls. IAM, RBAC, MFA, Conditional Access, least privilege, patching, compliance. User/license management, general Office 365 administration. Awareness and basic usage knowledge of Google Cloud. Mandatory Competencies: Cloud - Azure; Cloud - GCP; Others - Office 365; Beh - Communication; Database - SQL; Fundamental Technical Skills - DB Programming SQL/Oracle; Data on Cloud - AWS S3.
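
To ground the scripted-administration side of these Azure skills, a short Python sketch listing VMs across a subscription. It assumes the azure-identity and azure-mgmt-compute packages, and the subscription ID is a placeholder:

```python
# Minimal Azure SDK sketch: authenticate and list all VMs with their
# location and size.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# DefaultAzureCredential tries environment vars, managed identity,
# Azure CLI login, etc., in order.
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```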

Posted 1 month ago

Apply