
1875 SQS Jobs - Page 31

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: We are looking for a highly capable Backend Developer to optimize our web-based application performance. Your primary focus will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness on the server side. You will be collaborating with our front-end application developers, designing back-end components, and integrating data storage and protection solutions.
Responsibilities:
● Work with the team, collaborating with other engineers, frontend teams, and product teams to design and build backend applications and services
● Completely own application features end to end: through design, development, testing, launch, and post-launch support
● Deploy and maintain applications on cloud-hosted platforms
● Build performant, scalable, secure, and reliable applications
● Write high-quality, clean, maintainable code and perform peer code reviews
● Develop backend server code, APIs, and database functionality
● Propose coding standards, tools, frameworks, automation, and processes for the team
● Lead technical architecture and design for application development
● Work on POCs, try new ideas, and influence the product roadmap
Skills and Qualifications:
● 5+ years of experience in Node.js, MySQL, and backend development
● Experience in PHP and NoSQL is preferred
● Exceptional communication, organization, and leadership skills
● Excellent debugging and optimization skills
● Experience designing and developing RESTful APIs
● Expert-level web server setup/management with at least one of Nginx or Tomcat, including troubleshooting and setup in a cloud environment
● Experience with relational SQL, NoSQL, and graph databases (specifically MySQL, Neo4j, Elasticsearch, Redis, etc.), hands-on experience using AWS technologies such as EC2, Lambda functions, SNS, and SQS, work on serverless architecture, and an exceptional track record in cloud ops for a live app
● Branching and version control best practices
● Expertise in building scalable microservices, database design, and service architecture
● Solid foundation in computer science with strong competency in OOP, data structures, algorithms, and software design
● Strong Linux skills with troubleshooting, monitoring, and log file setup/analysis experience
● Troubleshooting application and code issues
● Knowledge of setting up unit tests
● Understanding of system design
● Updating and altering application features to enhance performance
● Writing clean, high-quality, high-performance, maintainable code and participating in code reviews
● Coordinating cross-functionally to ensure the project meets business objectives and compliance standards
● Experience with Agile or Scrum software development methodologies
● Knowledge of cloud computing, threading, performance tuning, and security
Preferred Qualifications:
● High ownership and the right attitude towards work
● Interest in learning new tools and technologies
● Proficiency in designing and coding web applications and/or services, ensuring high quality and performance, fixing application bugs, maintaining code, and deploying apps to various environments
● Bachelor’s degree in Computer Science or Software Engineering preferred
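The posting above centers on Node.js and MySQL, but the serverless pattern it references (an SQS-triggered Lambda that processes messages and fans results out via SNS) is language-agnostic. Below is a minimal, hedged sketch in Python for illustration only; the topic ARN and payload fields are assumptions, not details from the posting.

```python
import json
import boto3

# Hypothetical SNS topic for downstream notifications (not from the posting).
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"

sns = boto3.client("sns")


def handler(event, context):
    """Lambda entry point for an SQS-triggered worker.

    SQS delivers a batch of records; each record body is assumed to be JSON.
    """
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        # Domain-specific processing would happen here.
        result = {"id": payload.get("id"), "status": "processed"}

        # Fan the result out to subscribers via SNS.
        sns.publish(TopicArn=SNS_TOPIC_ARN, Message=json.dumps(result))

    # Returning normally tells Lambda the whole batch succeeded.
    return {"processed": len(records)}
```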

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.
Responsibilities:
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formatting (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements:
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience in writing scalable applications employing microservices principles
Nice to have:
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux/Shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaborative tools like Jira and Confluence
Knowledge of deployment strategies such as in-place, blue-green, or canary deployment
Demonstrated experience in ELK (Elasticsearch, Logstash, Kibana) stack development
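For reference, this is roughly what a REST endpoint exposed through API Gateway's Lambda proxy integration looks like. The posting targets Java/Spring Boot; the sketch below uses Python purely to keep the illustrations on this page consistent, and the resource path and response fields are assumptions.

```python
import json


def handler(event, context):
    """Minimal API Gateway (Lambda proxy integration) handler.

    API Gateway passes the HTTP method, path, and parameters in the event
    and expects a dict with statusCode/headers/body in return.
    """
    if event.get("httpMethod") == "GET":
        order_id = (event.get("pathParameters") or {}).get("id", "unknown")
        body = {"id": order_id, "status": "OK"}
        status = 200
    else:
        body = {"message": "Method not allowed"}
        status = 405

    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```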

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.
Responsibilities:
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formatting (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements:
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience in writing scalable applications employing microservices principles
Nice to have:
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux/Shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaborative tools like Jira and Confluence
Knowledge of deployment strategies such as in-place, blue-green, or canary deployment
Demonstrated experience in ELK (Elasticsearch, Logstash, Kibana) stack development
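As a companion to the requirements above (DynamoDB is listed alongside Lambda, SQS, and SNS), here is a minimal boto3 sketch of writing and reading an item. The table name and key schema are assumptions for illustration, not details from the posting.

```python
import boto3

# Hypothetical table with a simple partition key named "pk".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")


def save_order(order_id: str, amount: int) -> None:
    # put_item overwrites any existing item with the same key.
    table.put_item(Item={"pk": order_id, "amount": amount, "status": "NEW"})


def get_order(order_id: str):
    # get_item is a key lookup; "Item" is absent from the response if not found.
    response = table.get_item(Key={"pk": order_id})
    return response.get("Item")


if __name__ == "__main__":
    save_order("o-1001", 2500)
    print(get_order("o-1001"))
```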

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.
Responsibilities:
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formatting (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements:
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience in writing scalable applications employing microservices principles
Nice to have:
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux/Shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaborative tools like Jira and Confluence
Knowledge of deployment strategies such as in-place, blue-green, or canary deployment
Demonstrated experience in ELK (Elasticsearch, Logstash, Kibana) stack development
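The same requirements list calls out SQS and SNS; the short boto3 sketch below shows the basic send, receive, and publish calls involved. Queue and topic identifiers are hypothetical, and the production stack for this role would be Java rather than Python.

```python
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # assumed
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"          # assumed


def enqueue(message: dict) -> None:
    # Producers push JSON messages onto the queue.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))


def drain_once() -> None:
    # Consumers long-poll, process, then delete each message explicitly.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=5, WaitTimeSeconds=10
    )
    for msg in response.get("Messages", []):
        sns.publish(TopicArn=TOPIC_ARN, Message=msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```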

Posted 1 month ago

Apply

6.0 - 10.0 years

25 - 30 Lacs

Hyderabad

Work from Office

We seek a Technical Lead with deep expertise in MEAN/MERN stacks to lead a team of 10 developers, ensuring timely delivery of scalable, high-performance applications. You will architect solutions, optimize databases, manage sprint workflows, and guide the transition to microservices. Your role will balance hands-on coding, team mentorship, and strategic planning to align with business goals.
Key Responsibilities:
Team Leadership & Delivery: Lead a team of 10 developers, ensuring adherence to timelines and resource efficiency. Drive sprint planning, task allocation, and daily standups to meet project milestones. Conduct code reviews and enforce best practices for maintainable, scalable code.
Technical Expertise: Design and develop CRM applications using MongoDB, Node.js, Angular, and React, handling millions of records with optimized queries and indexing. Plan and execute upgrades for Angular, Node.js, and MongoDB to ensure security and performance. Architect microservices-based solutions and modularize monolithic systems.
Scalability & Performance: Optimize database performance (MongoDB), APIs (Node.js), and frontend rendering (Angular/React). Implement caching, load balancing, and horizontal/vertical scaling strategies.
DevOps & Cloud: Build CI/CD pipelines for automated testing and deployment. Leverage AWS services (Lambda, SQS, S3, EC2) for serverless architectures and scalable infrastructure, along with related services on other clouds such as GCP.
Problem-Solving: Debug complex issues across the stack, providing data-driven solutions. Anticipate risks (e.g., bottlenecks, downtime) and implement preventive measures.
Mandatory Requirements:
1. 6-9 years of hands-on experience in MEAN/MERN stacks, including:
2. MongoDB: Schema design, aggregation pipelines, sharding, replication.
3. Node.js: REST/GraphQL APIs, middleware, asynchronous processing.
4. Angular/React: State management, component lifecycle, performance tuning.
5. Proven expertise in Agile sprint management, resource tracking, and deadline-driven delivery.
6. Experience upgrading Angular, Node.js, and MongoDB in production environments.
7. Leadership skills with a track record of managing teams (8-10 members).
8. Strong grasp of microservices, event-driven architecture, and scalability patterns.
9. Analytical thinker with excellent debugging and problem-solving abilities.
Preferred Skills:
DevOps: CI/CD pipelines (Jenkins/GitLab), Docker, Kubernetes.
AWS: Lambda, SQS, S3, EC2, CloudFormation.
Monitoring: New Relic, Prometheus, Grafana.
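Since this role highlights MongoDB aggregation pipelines and indexing over large datasets, here is a small pymongo sketch of both ideas. The connection string, collection, and field names are invented for illustration; the production stack in the posting is Node.js, not Python.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
orders = client["crm"]["orders"]                   # assumed database/collection

# A compound index keeps the $match + $sort stages below from scanning everything.
orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

# Aggregation pipeline: filter, group, and sort on the server side.
pipeline = [
    {"$match": {"status": "PAID"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]

for row in orders.aggregate(pipeline):
    print(row["_id"], row["total"])
```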

Posted 1 month ago

Apply

2.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

About Us: RoboMQ offers Hire2Retire, a lightweight IGA (Identity, Governance and Administration) SaaS product that manages the employee lifecycle from HR systems to Active Directory, Azure AD, and Google Directory. Hire2Retire manages full employee lifecycle changes (new hire, change of role, termination, and long-term leave) from HR and creates and manages identity, access, privilege, and resource assignments. In effect, it fully automates work typically done by a sysadmin, avoiding 90% of the cost while providing a superior "First Day at Work" experience and preventing security and compliance risks by ensuring role-based access controls and timely terminations. As a fast-growing tech company, we provide an environment of curiosity and learning to design cutting-edge cloud and SaaS products, coupled with a fun and vibrant startup culture that has been providing accelerated growth to our people. https://www.robomq.io/about-us/
Location: Jaipur (Rajasthan)
Position type: Full time
Before you apply, make sure you have:
2+ years’ experience working in a DevOps, Platform Engineer, or Site Reliability Engineer role.
B.Tech degree with relevant technical experience.
Demonstrated ability to provide on-call support and handle critical infrastructure issues.
Ability to quickly learn new technologies and apply them to our rapidly evolving product and business.
Exceptional verbal and written communication skills.
Experience working on distributed systems.
Responsibilities:
Maintain and administer multiple multi-node Kubernetes clusters for high availability and optimum performance.
Set up and manage logging, monitoring, and alerting using tools like Prometheus, Grafana, EFK, or CloudWatch.
Design, implement, and manage CI/CD pipelines for seamless deployments.
Work on the cloud infrastructure hosted on AWS to keep it secure and optimized.
Automate infrastructure provisioning, scaling, and security compliance on AWS through Terraform.
Strengthen cloud security through IAM policies, encryption, and vulnerability scans.
Perform root cause analysis and system troubleshooting and implement improvements.
Work with penetration testing tools like NMAP to analyse and improve network security.
Strengthen overall security, including infrastructure security, web app security, and IAM security.
Key Skills [Must have]:
Strong hands-on experience with Docker and Kubernetes.
Strong understanding of Git and version control.
CI/CD: Jenkins, GitHub, GitHub Actions.
Infrastructure as Code (experience with Terraform).
Experience deploying and managing cloud-based applications, preferably on AWS.
Cloud networking and security fundamentals (IAM, firewalls, SSL, encryption).
Excellent knowledge of shell scripting.
Cyber security: OWASP Top 10, NMAP, ZAP.
Additional Skills [Good to have]:
Helm charts, kOps.
SonarQube.
Monitoring: Prometheus, Grafana, Alertmanager.
Logging: Elasticsearch, Fluentd, Kibana.
Networking: Istio, Kong.
Hands-on experience with a programming language.
Experience with message queues (Kafka, RabbitMQ, SQS).
Familiarity with SRE (Site Reliability Engineering) practices.
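As a small illustration of the monitoring side of this role, the sketch below exposes custom Prometheus metrics from a Python process using the prometheus_client library. The metric names, port, and simulated workload are assumptions, not part of RoboMQ's stack description.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics a service in this stack might expose for Grafana dashboards.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")


def handle_request() -> None:
    with LATENCY.time():                       # records elapsed time automatically
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="200").inc()


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        handle_request()
```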

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
About the Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.
Job Title: Java Developer
Key Skills: AWS EKS and/or Lambda, Java, Kafka, Kotlin, etc.
Job Locations: PAN India
Experience: 6+ Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contractual
Notice Period: Immediate - 10 Days
Job Description: Java Developer
Key Responsibilities:
Develop and maintain Domain APIs on AWS EKS and/or Lambda.
Create high- and low-level designs and implement capabilities using microservices and Domain-Driven Design principles.
Troubleshoot technical issues with in-depth knowledge of technology and functional aspects.
Understand and build to non-functional requirements such as authorization, access, and performance.
Assist in tracking and showcasing performance metrics and help optimize performance.
Provide innovative solutions for API versioning strategies.
Assist in the creation of automated test suites that are interoperable across APIs.
Provide technical leadership to a team of developers, ensuring adherence to best practices, security standards, and scalability.
Work closely with BA, PO, SM, and other stakeholders to understand requirements and ensure successful, on-time delivery of product features.
Participate in defect triage and analysis.
Support go-live activities.
Required Skills and Qualifications:
6+ years of experience developing Domain APIs using distributed programming languages such as Java, Kotlin, etc.
A good understanding of SOAP- and REST-based integration patterns.
Knowledge of JSON and XML data structures.
Knowledge of SQL and NoSQL databases such as MongoDB, DynamoDB, S3, PostgreSQL, etc.
Knowledge of various messaging services such as AWS SQS, AWS SNS, RabbitMQ, Kafka, etc.
Knowledge of AWS, Lambda, and Terraform.
Experience creating logs, alerts, and dashboards for visualization and troubleshooting.
Excellent problem-solving and troubleshooting skills.
Ability to lead technical teams and mentor junior developers.
Communication skills, both verbal and written, and the ability to interact with stakeholders.
Knowledge of API management best practices and experience with API gateways.
Understanding of security standards, including OAuth, SSO, and encryption for integration and API security.
Work closely with business stakeholders to understand their needs and requirements and translate them into technical solutions.
Participate in end-of-iteration demos to showcase key deliverables to IT and business stakeholders.
Tech Stack:
AWS EKS and Lambda
Programming languages: Java, Kotlin
Framework: Spring Boot
Databases: MongoDB, DynamoDB, Redis Cache, PostgreSQL, S3
Messaging interfaces: AWS SQS, AWS SNS, Kafka, RabbitMQ
Terraform for infrastructure provisioning
Knowledge of OpenAPI Specification (OAS) and ACORD NGDS is an added advantage.
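One of the responsibilities above is proposing API versioning strategies. A common approach is URI-based versioning with a separate router per major version; the hedged sketch below shows the idea with FastAPI in Python, although the posting's own stack is Java/Kotlin with Spring Boot. The resource name and fields are hypothetical.

```python
from fastapi import APIRouter, FastAPI

app = FastAPI(title="Domain API")

v1 = APIRouter(prefix="/v1")
v2 = APIRouter(prefix="/v2")


@v1.get("/policies/{policy_id}")
def get_policy_v1(policy_id: str) -> dict:
    # Original contract: flat response.
    return {"id": policy_id, "status": "ACTIVE"}


@v2.get("/policies/{policy_id}")
def get_policy_v2(policy_id: str) -> dict:
    # Breaking change isolated to /v2: status moved into a nested object.
    return {"id": policy_id, "state": {"status": "ACTIVE", "since": "2024-01-01"}}


# Both versions are served side by side so clients can migrate gradually.
app.include_router(v1)
app.include_router(v2)
```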

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description What We are looking for: We’re looking for a passionate and experienced Software Engineer to join our growing API & Exports team at Meltwater. This team is responsible for enabling programmatic access to data across the app—handling thousands of exports daily, improving API usability, and managing API integrations and performance at scale. You'll work on expanding and optimizing our export functionalities, building scalable APIs, and integrating robust monitoring and management. This is a high-impact team working at the core of our data delivery platform. What You'll Do: Own the design, development, and optimization of API and export features. Collaborate closely with product managers and senior engineers to define functionality and scale. Enhance developer experience by making APIs easier to consume and integrate. Participate in building robust export pipelines, streaming architectures, and webhook integrations. Maintain high observability and reliability standards using tools like Coralogix, CloudWatch, and Grafana. Participate in on-call rotations and incident response for owned services. What You'll Bring: 5+ years of software engineering experience with a strong focus on Golang (preferred), Java, or C++. Experience designing and developing RESTful APIs. Experience working with cloud-native applications (preferably AWS). Good understanding of microservice architecture and backend design principles. Solid knowledge of Postgres, Redis, and ideally DynamoDB. Nice To Have Familiarity with asynchronous or event-driven architectures using tools like SQS, SNS, or webhooks. Exposure to DevOps workflows and tools (Terraform, Docker, Kubernetes, etc.). Experience working with data exports, reporting systems, or data streaming. Experience improving developer experience around APIs (e.g., OpenAPI, Swagger, static site generators). Familiarity with JWT authentication, API gateways, and rate limiting strategies. Experience in accessibility and compliance standards for APIs and data handling. Experience with observability tools and practices. Our Tech Stack Languages: Golang, some JavaScript/TypeScript Infrastructure: AWS, S3, Lambda, SQS, SNS, CloudFront, Kubernetes (Helm), Kong Databases: Postgres, Redis, DynamoDB Monitoring: Coralogix, Grafana, OpenSearch CI/CD & IaC: GitHub Actions, Terraform What We Offer: Enjoy flexible paid time off options for enhanced work-life balance. Comprehensive health insurance tailored for you. Employee assistance programs cover mental health, legal, financial, wellness, and behaviour areas to ensure your overall well-being. Complimentary CalmApp subscription for you and your loved ones, because mental wellness matters. Energetic work environment with a hybrid work style, providing the balance you need. Benefit from our family leave program, which grows with your tenure at Meltwater. Thrive within our inclusive community and seize ongoing professional development opportunities to elevate your career. Where You'll Work: Hitec city, Hyderabad. When You'll Join: As per the offer letter Our Story At Meltwater, we believe that when you have the right people in the right environment, great things happen. Our best-in-class technology empowers our 27,000 customers around the world to make better business decisions through data. But we can’t do that without our global team of developers, innovators, problem-solvers, and high-performers who embrace challenges and find new solutions for our customers. 
Our award-winning global culture drives everything we do and creates an environment where our employees can make an impact, learn every day, feel a sense of belonging, and celebrate each other’s successes along the way. We are innovators at the core who see the potential in people, ideas and technologies. Together, we challenge ourselves to go big, be bold, and build best-in-class solutions for our customers. We’re proud of our diverse team of 2,200+ employees in 50 locations across 25 countries around the world. No matter where you are, you’ll work with people who care about your success and get the support you need to unlock new heights in your career. We are Meltwater. We love working here, and we think you will too. "Inspired by innovation, powered by people." Equal Employment Opportunity Statement Meltwater is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: At Meltwater, we are dedicated to fostering an inclusive and diverse workplace where every employee feels valued, respected, and empowered. We are committed to the principle of equal employment opportunity and strive to provide a work environment that is free from discrimination and harassment. All employment decisions at Meltwater are made based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, veteran status, or any other status protected by the applicable laws and regulations. Meltwater does not tolerate discrimination or harassment of any kind, and we actively promote a culture of respect, fairness, and inclusivity. We encourage applicants of all backgrounds, experiences, and abilities to apply and join us in our mission to drive innovation and make a positive impact in the world.
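The API & Exports role above mentions webhook integrations alongside export pipelines. A common building block is signing each webhook delivery with an HMAC so receivers can verify its origin; the sketch below is a generic Python illustration with hypothetical URL and secret values, not Meltwater's actual implementation.

```python
import hashlib
import hmac
import json

import requests

WEBHOOK_URL = "https://example.com/hooks/exports"  # hypothetical receiver
SHARED_SECRET = b"replace-with-a-real-secret"      # assumed shared secret


def deliver(event: dict) -> int:
    body = json.dumps(event, separators=(",", ":")).encode()
    # The receiver recomputes this signature over the raw body to verify origin.
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    response = requests.post(
        WEBHOOK_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Signature-SHA256": signature,
        },
        timeout=10,
    )
    return response.status_code


if __name__ == "__main__":
    print(deliver({"export_id": "exp-42", "status": "completed"}))
```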

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations. Team Impact We are seeking a dynamic software development engineer to maintain existing .NET applications while spearheading new development initiatives in Python with hands-on AWS experience. The candidate is expected to have SQL knowledge, enabling efficient data operations and management within existing and new projects. The ideal candidate will showcase proficiency in deploying software with adherence to best practices alongside fluency in development environments including tools, code libraries, and systems. This role involves being part of the entire development process, collaborating to create theoretical designs, critiquing code and productions for improvements, and effectively receiving and applying feedback. The candidate will demonstrate the ability to independently maintain productivity, requiring minimal manager oversight on a daily basis. The focus includes developing applications, testing & maintaining software, while implementing expansion of work volume with consistent quality and system stability. You will gain mastery of the products you contribute to and participate in forward design discussions for improvement based on observations of the code, systems, and production. Software Engineers are expected to provide project leadership and technical guidance across each stage of the software development lifecycle. What You'll Do Maintain existing .NET applications and systems, ensuring their stability and efficiency. Leverage SQL knowledge for efficient data operations, ensuring optimal database management and integration. Utilize Python and AWS tools to support ongoing development and maintenance, ensuring robust integration and functionality across application stacks. Develop and implement new features on our acquisition platform, which is designed to handle the sourcing of millions of documents annually, built on Microservice Architecture. Focus on developing new features and UI while supporting existing systems to ensure platform's continuous improvement. Engage with stakeholders to define technology and business roadmap for projects. Operate within agile frameworks, collaborating with engineers and product developers using tools like Jira and Confluence. Participate in test-driven development and elevate team practices through coaching and reviews. Create and review documentation and test plans to validate new features and system modifications thoroughly. Coordinate effectively as part of a geographically diverse team to facilitate seamless project progression. What We’re Looking For Bachelor’s or master’s degree in computer science, Engineering, or a related field. 3-5 years of experience in software development, especially systems managing large-scale data operations. Deep understanding of data structures and algorithms to optimize software performance. Proficient in object-oriented design principles. 
Strong skills in .NET, Python, SQL and AWS technologies to maintain and contribute to applications. C#/.NET proficiency for maintaining legacy applications. Hands-on AWS experience with resources like S3, EC2, Lambda, SQS, SNS, etc. Communication and Problem-Solving: Familiarity with GitHub-based development for seamless collaboration and version control. Experience in building and deploying production-level services for reliable solutions. Proven experience in API and System Integration for robust connectivity. Desired Skills Strong analytical and problem-solving skills. Excellent communication abilities for interaction with diverse teams and stakeholders. Demonstrated organizational skills to manage work effectively in fast-paced environments. What's In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Learn More About Our Benefits Here. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications. Company Overview FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
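Since this role pairs Python with hands-on AWS work (S3, EC2, Lambda, SQS, and SNS are all listed), here is a minimal boto3 sketch of the S3 piece. The bucket and key names are placeholders, not details from the posting.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "example-documents-bucket"  # placeholder bucket name


def upload_document(local_path: str, key: str) -> None:
    # Streams the file to S3; multipart handling is managed by boto3.
    s3.upload_file(local_path, BUCKET, key)


def fetch_document(key: str, local_path: str) -> None:
    s3.download_file(BUCKET, key, local_path)


def share_link(key: str, expires_seconds: int = 3600) -> str:
    # A presigned URL lets a client download without AWS credentials.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_seconds,
    )
```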

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Help empower our global customers to connect to culture through their passions.
Why you’ll love this role: As a Software Engineer, you’ll be working in the ever-changing pricing & finance domain, which has direct revenue and user impact. You get the opportunity to work on how products are priced dynamically and how various factors influence those decisions as part of fees & tax flows. You get to design, code, and measure the impact of the product and feature developments that you’ll be part of. Our technical stack comprises various systems and services built on Amazon Web Services. We use GraphQL, GoLang, NodeJS, CircleCI, Kubernetes, Harness, Terraform, LaunchDarkly, and Datadog. The technology scope includes all stacks and services (APIs and event processing systems) responsible for providing a seamless experience for our customers. Our engineers are empowered to take ownership of technology decisions and solutions while playing a pivotal role in establishing a successful engineering culture at our fast-growing company.
What You’ll Do:
Work with product owners, internal stakeholders, program managers, and engineering managers to crystallize ambiguous requirements and propose resilient technical solutions which scale to future business needs.
Work with engineers in the team to take these proposed solutions and architect and design them.
Efficiently break up large system designs and guide junior team members in detailed component design.
Help the team to implement, deploy, and monitor systems and services.
Propose and adopt best engineering practices and guide development standards.
Foster a growth-mindset culture. Be a team player. Contribute to and follow team processes for better sprint outcomes.
Apply considerations around security, scalability, reliability, and performance while proposing and building solutions.
Use sound technical judgment to consider technology alternatives, impact on affected and adjacent systems, and design choice tradeoffs.
Demonstrate complete ownership of services for your area of work. Participate in supporting your systems and services through system upgrades, live site issues, and more.
Provide timely communication to stakeholders and users for resolving issues.
About You:
2-4 years of relevant development experience.
Experience with distributed architecture.
Proficiency in one or more back-end languages used by the team (NodeJS, GoLang), or equivalent experience in another language and a willingness to learn and get up to speed quickly.
Excellent analytical, organizational, and communication skills. Ability to say no.
Experience with data storage technologies, both relational and NoSQL.
Experience with event-based architecture and related technologies like Kafka, SNS, SQS, etc.
Experience with cloud platforms: Azure, AWS, or Google Cloud Platform.
Nice To Have Skills:
Experience working in an Agile environment.
Ability to work in a fast-paced and constantly changing environment.
Knowledge of GraphQL and REST frameworks.
Exposure to CI/CD frameworks and tools/technologies like GitHub, K8s, Harness.
The StockX platform features hundreds of brands across verticals including Jordan Brand, adidas, Nike, Supreme, BAPE, Off-White, Louis Vuitton, Gucci; collectibles from brands including LEGO, KAWS, Bearbrick, and Pop Mart; and electronics from industry-leading manufacturers Sony, Microsoft, Meta, and Apple. Launched in 2016, StockX employs 1,000 people across offices and verification centers around the world. Learn more at www.stockx.com. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. However, this job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities or working conditions associated with the position. StockX reserves the right to amend this job description at any time. StockX may utilize AI to rank job applicant submissions against the position requirements to assist in determining candidate alignment.
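The stack described above includes GraphQL APIs. As a generic illustration (not StockX's actual schema or endpoint), the sketch below issues a GraphQL query over HTTP from Python; the URL, query fields, and variables are invented.

```python
import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # hypothetical endpoint

QUERY = """
query ProductPrice($id: ID!) {
  product(id: $id) {
    name
    lowestAsk
    highestBid
  }
}
"""


def fetch_price(product_id: str) -> dict:
    # GraphQL requests are plain POSTs with a JSON body of query + variables.
    response = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"id": product_id}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["data"]["product"]


if __name__ == "__main__":
    print(fetch_price("sneaker-123"))
```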

Posted 1 month ago

Apply

6.0 - 10.0 years

30 - 40 Lacs

Bengaluru

Work from Office

You'll Get To: Provide technical expertise in requirements analysis, design, effort estimation, development, testing, and delivery of highly scalable and secure distributed backend services. Work with product management, architects, and other engineering teams to understand stated and unstated needs and turn them into functional and technical requirements. Maintain a strong sense of business value and return on investment in planning, design, and communication. Support technical design and architecture discussions and help drive technical decisions while making appropriate trade-offs on technology, functionality, robustness, performance, and extensibility. Estimate the work scope and timelines, and consistently deliver on those commitments. Implement, refine, and enforce software development techniques to ensure that the delivered features meet software integration, performance, security, and maintainability expectations. Research, test, benchmark, and evaluate new tools and technologies, and recommend ways to implement them in product development. Maintain high standards of software quality and technical excellence within the team by establishing good practices and writing clean, testable, maintainable, and secure code. Contribute to a forward-thinking team of developers, acting as an agent of change and evangelist for a quality-first culture within the organization. Mentor and coach team members to guide them to solutions on complex design issues and do peer code reviews. Proactively identify issues, bottlenecks, gaps, or other areas of concern or opportunity and work to either directly effect change or advocate for that change. Perform critical maintenance, deployment, and release support activities, including occasional off-hours support.
Technical/Specialized Knowledge, Skills, and Abilities: 7+ years of professional experience building web-scale, highly available, multi-tenant SaaS with a focus on backend platforms, frameworks, RESTful APIs, and microservices. Hands-on experience with C# programming using .NET Framework/.NET Core. 2+ years of experience with a public cloud (AWS, Azure, or GCP) and a solid understanding of cloud-native services. Extensive experience with SQL, relational database design, and SQL query optimization; fluent in SQL, data modeling, and transactional flows. A solid computer science foundation including data structures, algorithms, and design patterns, with a proven track record of writing high-concurrency, multi-threaded, secure, scalable code. Proven experience working with one or more services such as API gateway, identity management, authentication, messaging (Kafka or RabbitMQ), workflow orchestration, job scheduling, and search. Superior analytical, problem-solving, and system-level performance analysis abilities. Excellent written and verbal communication skills. Adaptable team player with strong collaboration skills and a focus on results and value delivery. Experience working in an Agile development environment. Passion for engineering excellence through automation, unit testing, and process improvements.
We're Even More Excited If You Have: Good knowledge of internet security issues in software design and code. Experience with ERP systems like MS Dynamics 365, Oracle, NetSuite, or Intacct is a plus. Experience with open source tools. Experience with public cloud architectures (Azure, AWS, or GCP) and cloud-native services. Experience designing and scaling high-performance systems. Experience with container management solutions like Mesos, Kubernetes, or Nomad. Experience with API gateways, identity management, authentication, messaging platforms (e.g., Kafka, SQS, RabbitMQ), workflow orchestration tools, job scheduling, and search. A FinTech or financial services domain background. Prior working experience in Scrum or other Agile development methodologies is preferred. Experience with front-end technologies (HTML, JavaScript, CSS, JavaScript frameworks, etc.) is a plus. Experience with data integration and middleware software tools is a plus.
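Messaging platforms (Kafka, SQS, RabbitMQ) come up repeatedly in this posting. As a language-neutral illustration of the publish side, here is a minimal RabbitMQ producer in Python using pika (the role itself is C#/.NET); the queue name and payload are assumptions.

```python
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so messages survive a broker restart.
channel.queue_declare(queue="invoice-events", durable=True)

message = {"invoice_id": "INV-1001", "status": "POSTED"}
channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="invoice-events",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)

connection.close()
```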

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Responsible for building high-quality, innovative and fully performing software in compliance with coding standards and technical design. Design, modify, develop, write and implement software programming applications. Support and/or install software applications. Key participant in the testing process through test review and analysis, test witnessing and certification of software. Key Responsibilities Develop software solutions by studying information needs; conferring with users; studying systems flow, data usage and work processes; investigating problem areas; following the software development lifecycle; Document and demonstrate solutions; Develops flow charts, layouts and documentation Determine feasibility by evaluating analysis, problem definition, requirements, solution development and proposed solutions; Understand business needs and know how to create the tools to manage them Prepare and install solutions by determining and designing system specifications, standards and programming Recommend state-of-the-art development tools, programming techniques and computing equipment; participate in educational opportunities; read professional publications; maintain personal networks; participate in professional organizations; remain passionate about great technologies, especially open source Provide information by collecting, analyzing, and summarizing development and issues while protecting IT assets by keeping information confidential; Improve applications by conducting systems analysis recommending changes in policies and procedures Define applications and their interfaces, allocate responsibilities to applications, understand solution deployment, and communicate requirements for interactions with solution context, define Nonfunctional Requirements (NFRs) Understands multiple architectures and how to apply architecture to solutions; understands programming and testing standards; understands industry standards for traditional and agile development Provide oversight and foster Built-In Quality and Team and Technical Agility; Adopt new mindsets and habits in how people approach their work while supporting decentralized decision making. Maintain strong relationships to deliver business value using relevant Business Relationship Management practices. Responsibilities Competencies: Business insight - Applying knowledge of business and the marketplace to advance the organization’s goals. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Global perspective - Taking a broad view when approaching issues, using a global lens. Manages conflict - Handling conflict situations effectively, with a minimum of noise. Agile Architecture - Designs the fundamental organization of a system embodied by its components, their relationships to each other and to the environment to guide its emergent design and evolution. 
Agile Development - Uses API-First Development, where requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s), to construct high-quality, well-designed technical solutions; understands and includes the Internet of Things (IoT), the Digital Mesh, and Hyper Connectivity as inputs to API-First Development so solutions are more adaptable to future trends in Agile development. Agile Systems Thinking - Embraces a holistic approach to analysis that focuses on the way that a system's constituent parts interrelate and how systems work over time and within the context of larger systems to ensure the economic success of the solution. Agile Testing - Leads a cross-functional agile team, with special expertise contributed by testers working at a sustainable pace, by delivering business value desired by the customer at frequent intervals to ensure the economic success of the solution. Regulatory Risk Compliance Management - Evaluates the design and effectiveness of controls against established industry frameworks and regulations to assess adherence with legal/regulatory requirements. Solution Functional Fit Analysis - Composes and decomposes a system into its component parts using procedures, tools and work aides for the purpose of studying how well the component parts were designed, purchased and configured to interact holistically to meet business, technical, security, governance and compliance requirements. Solution Modeling - Creates, designs and formulates models, diagrams and documentation using industry standards, tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications: College, university, or equivalent degree in Computer Science, Engineering, or a related subject, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience: 5-7 years of relevant work experience required, with a strong background in Angular (UI), Spring Boot (API/microservices), and relational/NoSQL databases in a serverless AWS environment (e.g., Lambda, API Gateway, DynamoDB, Aurora Serverless). Strong knowledge of serverless microservices leveraging AWS services such as Lambda, Step Functions, SQS, SNS, EventBridge, and Fargate. Ability to design schemas, write complex queries, and optimize performance for Oracle, PostgreSQL, and/or DynamoDB. DevOps automation using AWS CodePipeline/CodeBuild, GitHub Actions, and Terraform or AWS CDK. Knowledge of AI tools such as Copilot. Knowledge of automated unit testing. Experience working as a software engineer with the following knowledge and experience is preferred: working in Agile environments; fundamental IT technical skill sets; taking a system from scoping requirements through actual launch of the system; communicating with users, other technical teams, and management to collect requirements, identify tasks, provide estimates, and meet production deadlines; professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations. Note: This role follows a hybrid work model, requiring 2-3 days per week in the office.
Qualifications Design and Develop End-to-End Solutions Lead the development of scalable and maintainable full-stack solutions using Angular (UI), Spring Boot (API/microservices), and relational/NoSQL databases in a serverless AWS environment (e.g., Lambda, API Gateway, DynamoDB, Aurora Serverless). Architect Serverless Applications Design and implement serverless microservices leveraging AWS services such as Lambda, Step Functions, SQS, SNS, and EventBridge to build event-driven architectures. Optimize API Performance and Security Build and tune RESTful APIs with Spring Boot, ensuring optimal performance, scalability, and security using OAuth2, JWT, and API Gateway policies. UI/UX Implementation and Responsiveness Deliver pixel-perfect, responsive, and accessible front-end applications using Angular and integrate them with backend services. Database Design and Query Optimization Design schema, write complex queries, and optimize performance for Oracle, PostgreSQL, and/or DynamoDB to support business logic and reporting needs. CI/CD and Infrastructure as Code (IaC) Contribute to DevOps automation using AWS CodePipeline/CodeBuild, GitHub Actions, and Terraform or AWS CDK for seamless deployments and infrastructure provisioning. Code Reviews and Mentorship Conduct code reviews, promote best practices in software development, and mentor junior developers across the stack. Monitoring, Logging, and Debugging Integrate and maintain observability tools like CloudWatch, X-Ray, Dynatrace, etc to ensure high availability and system reliability. Collaboration with Cross-Functional Teams Work closely with product managers, UI/UX designers, and QA to deliver high-quality features aligned with business goals. Stay Current with Emerging Technologies Proactively research, evaluate, and implement new tools or practices that improve productivity, performance, and code quality.
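The event-driven architecture described here leans on EventBridge. Publishing a custom event from Python looks roughly like the sketch below; the bus name, source, and detail payload are placeholders rather than Cummins specifics, and the production services in this role would typically emit such events from Spring Boot or Lambda code.

```python
import json

import boto3

events = boto3.client("events")


def publish_order_event(order_id: str, status: str) -> None:
    # EventBridge rules can then route this event to Lambda, SQS, Step Functions, etc.
    events.put_events(
        Entries=[
            {
                "EventBusName": "default",       # placeholder bus
                "Source": "example.orders",      # placeholder source
                "DetailType": "OrderStatusChanged",
                "Detail": json.dumps({"orderId": order_id, "status": status}),
            }
        ]
    )


if __name__ == "__main__":
    publish_order_event("o-2001", "SHIPPED")
```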

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Ciklum is looking for a Senior QA Automation Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.
About the role: As a Senior QA Automation Engineer, become a part of a cross-functional development team for a healthcare technology company that provides platforms and solutions to improve the management and access of cost-effective pharmacy benefits. Our technology helps enterprise and partnership clients simplify their businesses and helps consumers save on prescriptions. As a leader in SaaS technology for healthcare, we offer innovative solutions with integrated intelligence on a single enterprise platform that connects the pharmacy ecosystem. With our expertise and modern, modular platform, our partners use real-time data to transform their business performance and optimize their innovative models in the marketplace.
Responsibilities:
Develop automated functional UI and API tests
Requirements analysis and processing
Estimation of testing activities
Creation of test cases/checklists and other types of test documentation
Perform testing activities
Integrate automated scripts into the CI/CD process
Develop, maintain, and expand automated testing infrastructure
Lead code reviews, set high-quality standards, and guide team members in writing better code
Define quality metrics and implement measurements to determine test effectiveness and testing efficiency, and measure the overall quality of the product as part of the test automation process
Analyze test results and report on the stability of the product under test
Collaborate with other members of the team to automate manual test processes
Manage, analyze, and mitigate testing risks
Onboard new team members on the project
Suggest improvements to the testing and release workflow
Requirements: We know that sometimes, you can’t tick every box. We would still love to hear from you if you think you’re a good fit!
Solid skills in C#, programming patterns, and principles
Experience writing automation scripts for UI and REST APIs using automation tools (e.g., Selenium, RestSharp)
Experience with TDD / unit tests
Experience in using ChatGPT/Copilot for test automation/unit testing
Experience with implementation of continuous integration processes and tools, CI/CD pipelines
Hands-on experience with Docker
Knowledge of AWS/Azure DevOps services
Familiarity with different automated test frameworks (NUnit, MSTest, xUnit, etc.)
Experience in the Healthcare domain
Desirable:
Experience with API test automation of microservices-based applications
Experience with message brokers/queues: SQS, SNS, Kafka, RabbitMQ, etc.
Good knowledge of SQL and NoSQL DBs (PostgreSQL, MongoDB, Redis, etc.)
Excellent knowledge of different testing methods, techniques, types, and methodologies
Experience in the creation or active maintenance of test documentation (test strategy, test plans, etc.)
Experience in the creation of automation frameworks from scratch
Experience working in an Agile Scrum/Kanban development environment
Good English skills: Upper-Intermediate and above
What's in it for you?
Care: your mental and physical health is our priority. 
We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally Flexibility: hybrid work mode at Chennai or Pune Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential Global impact: work on large-scale projects that redefine industries with international and fast-growing clients Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram , Facebook , LinkedIn . Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
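Automated API testing is central to the Senior QA Automation Engineer role above. While the posting's stack is C# (RestSharp, NUnit), the pattern translates directly; below is a hedged pytest/requests sketch against a hypothetical endpoint, kept in Python for consistency with the other examples on this page.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_get_member_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/members/123", timeout=10)

    assert response.status_code == 200
    body = response.json()
    # Contract checks: required fields and basic types.
    assert body["id"] == "123"
    assert isinstance(body["plan"], str)


def test_unknown_member_returns_404():
    response = requests.get(f"{BASE_URL}/members/does-not-exist", timeout=10)
    assert response.status_code == 404
```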

Posted 1 month ago

Apply

5.0 years

18 - 22 Lacs

Ernakulam, Kerala, India

On-site

Company: Velodata Website: Visit Website Business Type: Startup Company Type: Product & Service Business Model: B2B Funding Stage: Pre-seed Industry: Data Analytics Salary Range: ₹ 18-22 Lacs PA
Job Description: Please note that we are currently looking for Python Developers with strong hands-on experience in Python and SQL. In addition, candidates must have development experience in AWS, not just deployment, with proficiency in at least two or more of the following services: Lambda, SNS, SQS, S3, Glue, Athena, API Gateway, EC2, Deployment, CloudFormation, CloudFront, EventBridge. We are seeking a Lead Python Developer to join our dynamic team. The ideal candidate will have a strong background in Python programming and a sound understanding of web application development, with a focus on utilizing AWS services for building scalable and efficient solutions. Responsible for delivering senior-level innovative, compelling, coherent software solutions for our consumer, internal operations, and value chain constituents across a wide variety of enterprise applications through the creation of discrete business services and their supporting components.
Job Description / Duties & Responsibilities: Take shared ownership of the product. Communicates effectively both verbally and in writing. Takes direction from team lead and upper management. Ability to work with little to no supervision while performing duties. Works collaboratively in a small team. Excels in a rapid iteration environment with short turnaround times. Deals positively with high levels of uncertainty, ambiguity, and shifting priorities. Accepts a wide variety of tasks and pitches in wherever needed. Constructively presents, discusses, and debates alternatives.
Job Specification / Skills And Competencies: Design, develop and deliver solutions that meet business line and enterprise requirements. Lead a team of Python developers, providing technical guidance, mentorship, and support in project execution. Participates in rapid prototyping and POC development efforts. Advances overall enterprise technical architecture and implementation best practices. Assists in efforts to develop and refine functional and non-functional requirements. Participates in iteration and release planning. Performs functional and non-functional testing. Informs efforts to develop and refine functional and non-functional requirements. Demonstrates knowledge of, adherence to, monitoring of, and responsibility for compliance with state and federal regulations and laws as they pertain to this position. Strong ability to produce high-quality, properly functioning deliverables the first time. Delivers work product according to established deadlines. Estimates tasks with a level of granularity and accuracy commensurate with the information provided. Architect, design, and implement high-performance and scalable Python back-end applications. Proficient in the Python programming language to develop backend services and APIs. Experience with web frameworks such as FastAPI/Flask/Django for building RESTful APIs. Exposure to the Utility domain (Metering Services) is an advantage. Experience in AWS services such as API Gateway, Lambda, Step Functions and S3. Knowledge of implementing authentication and authorization mechanisms using AWS Cognito and other relevant services. Good understanding of databases including PostgreSQL, MongoDB, AWS Aurora, DynamoDB. Experience in automated CI/CD implementation using Terraform is required.
Deep understanding of one or more source/version control systems (Git/Bitbucket); able to develop branching and merging strategies. Working understanding of Web APIs, REST, JSON, etc. Working understanding of unit test creation. Bachelor’s Degree is required, and/or a minimum of 5+ years of related work experience. Must adhere to the Information Security Management policies and procedures.
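As an illustration of the AWS-native Python development the listing emphasizes, here is a minimal sketch of a Lambda handler that reacts to an S3 upload and forwards a message to SQS via boto3. The queue URL, environment variable, and event fields are assumptions for the sketch, not project specifics.

```python
# Minimal sketch: S3 "ObjectCreated" event -> Lambda -> SQS, using boto3.
import json
import os

import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL; in practice this would be injected via environment config.
QUEUE_URL = os.environ.get("QUEUE_URL", "https://sqs.ap-south-1.amazonaws.com/123456789012/example-queue")


def lambda_handler(event, context):
    """Enqueue one SQS message per uploaded S3 object."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200, "body": f"queued {len(records)} object(s)"}
```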

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Ciklum is looking for a Senior QA Automation Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.
About the role: As a Senior QA Automation Engineer, you will become part of a cross-functional development team for a healthcare technology company that provides platforms and solutions to improve the management of, and access to, cost-effective pharmacy benefits. Our technology helps enterprise and partnership clients simplify their businesses and helps consumers save on prescriptions. As a leader in SaaS technology for healthcare, we offer innovative solutions with integrated intelligence on a single enterprise platform that connects the pharmacy ecosystem. With our expertise and modern, modular platform, our partners use real-time data to transform their business performance and optimize their innovative models in the marketplace.
Responsibilities:
● Develop automated functional UI and API tests
● Analyze and process requirements
● Estimate testing activities
● Create test cases/checklists and other types of test documentation
● Perform testing activities
● Integrate automated scripts into the CI/CD process
● Develop, maintain, and expand the automated testing infrastructure
● Lead code reviews, set high quality standards, and guide team members in writing better code
● Define quality metrics and implement measurements to determine test effectiveness and testing efficiency, and to measure the overall quality of the product as part of the test automation process
● Analyze test results and report on the stability of the product under test
● Collaborate with other members of the team to automate manual test processes
● Manage, analyze, and mitigate testing risks
● Onboard new team members onto the project
● Suggest improvements to the testing and release workflow
Requirements: We know that sometimes you can’t tick every box. We would still love to hear from you if you think you’re a good fit!
● Solid skills in C#, programming patterns, and principles
● Experience automating UI and REST API scenarios with automation tools (e.g. Selenium, RestSharp)
● Experience with TDD / unit tests
● Experience using ChatGPT/Copilot for test automation and unit testing
● Experience implementing continuous integration processes and tools, CI/CD pipelines
● Hands-on experience with Docker
● Knowledge of AWS/Azure DevOps services
● Familiarity with different automated test frameworks (NUnit, MSTest, xUnit, etc.)
● Experience in the healthcare domain
Desirable:
● Experience with API test automation of microservices-based applications
● Experience with message brokers/queues: SQS, SNS, Kafka, RabbitMQ, etc.
● Good knowledge of SQL and NoSQL databases (PostgreSQL, MongoDB, Redis, etc.)
● Excellent knowledge of different testing methods, techniques, types, and methodologies
● Experience in the creation or active maintenance of test documentation (test strategy, test plans, etc.)
● Experience in the creation of automation frameworks from scratch
● Experience working in an Agile Scrum/Kanban development environment
● Good English skills – Upper-Intermediate and above
What’s in it for you?
● Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
● Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
● Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
● Flexibility: hybrid work mode at Chennai or Pune
● Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
● Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
● Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events
About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.
Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum!
Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
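The desirable skills in this listing mention message brokers and queues such as SQS. The sketch below shows one common pattern for verifying asynchronous behaviour in an integration test: polling an SQS queue for a message emitted by the system under test, then cleaning it up. The queue URL and message schema are assumptions, and the listing's own suite would be written in C#; this Python version is only a pattern outline.

```python
# Illustrative SQS polling helper for asynchronous integration tests.
import json
import time

import boto3

sqs = boto3.client("sqs")


def wait_for_message(queue_url: str, expected_claim_id: str, timeout_s: int = 30) -> dict:
    """Long-poll the queue until a message for expected_claim_id appears, then delete it."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5)
        for msg in resp.get("Messages", []):
            body = json.loads(msg["Body"])  # assumes JSON message bodies
            if body.get("claimId") == expected_claim_id:
                # Remove the message so repeated test runs stay independent.
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
                return body
    raise AssertionError(f"No message for {expected_claim_id} arrived within {timeout_s}s")
```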

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Responsible for building high-quality, innovative and fully performing software in compliance with coding standards and technical design. Design, modify, develop, write and implement software programming applications. Support and/or install software applications. Key participant in the testing process through test review and analysis, test witnessing and certification of software. Key Responsibilities Develop software solutions by studying information needs; conferring with users; studying systems flow, data usage and work processes; investigating problem areas; following the software development lifecycle; Document and demonstrate solutions; Develops flow charts, layouts and documentation Determine feasibility by evaluating analysis, problem definition, requirements, solution development and proposed solutions; Understand business needs and know how to create the tools to manage them Prepare and install solutions by determining and designing system specifications, standards and programming Recommend state-of-the-art development tools, programming techniques and computing equipment; participate in educational opportunities; read professional publications; maintain personal networks; participate in professional organizations; remain passionate about great technologies, especially open source Provide information by collecting, analyzing, and summarizing development and issues while protecting IT assets by keeping information confidential; Improve applications by conducting systems analysis recommending changes in policies and procedures Define applications and their interfaces, allocate responsibilities to applications, understand solution deployment, and communicate requirements for interactions with solution context, define Nonfunctional Requirements (NFRs) Understands multiple architectures and how to apply architecture to solutions; understands programming and testing standards; understands industry standards for traditional and agile development Provide oversight and foster Built-In Quality and Team and Technical Agility; Adopt new mindsets and habits in how people approach their work while supporting decentralized decision making. Maintain strong relationships to deliver business value using relevant Business Relationship Management practices. Responsibilities Competencies: Business insight - Applying knowledge of business and the marketplace to advance the organization’s goals. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Global perspective - Taking a broad view when approaching issues, using a global lens. Manages conflict - Handling conflict situations effectively, with a minimum of noise. Agile Architecture - Designs the fundamental organization of a system embodied by its components, their relationships to each other and to the environment to guide its emergent design and evolution. 
Agile Development - Uses API-First Development, where requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s), to construct high-quality, well-designed technical solutions; understands and includes the Internet of Things (IoT), the Digital Mesh, and Hyper Connectivity as inputs to API-First Development so solutions are more adaptable to future trends in Agile development. Agile Systems Thinking - Embraces a holistic approach to analysis that focuses on the way that a system's constituent parts interrelate and how systems work over time and within the context of larger systems to ensure the economic success of the solution. Agile Testing - Leads a cross-functional agile team with special expertise contributed by testers working at a sustainable pace, by delivering business value desired by the customer at frequent intervals to ensure the economic success of the solution. Regulatory Risk Compliance Management - Evaluates the design and effectiveness of controls against established industry frameworks and regulations to assess adherence with legal/regulatory requirements. Solution Functional Fit Analysis - Composes and decomposes a system into its component parts using procedures, tools and work aides for the purpose of studying how well the component parts were designed, purchased and configured to interact holistically to meet business, technical, security, governance and compliance requirements. Solution Modeling - Creates, designs and formulates models, diagrams and documentation using industry standards, tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications: College, university, or equivalent degree in Computer Science, Engineering, or a related subject, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience: 5-7 years of relevant work experience required. Strong background in Angular (UI), Spring Boot (API/microservices), and relational/NoSQL databases in a serverless AWS environment (e.g., Lambda, API Gateway, DynamoDB, Aurora Serverless). Strong knowledge of serverless microservices leveraging AWS services such as Lambda, Step Functions, SQS, SNS, EventBridge, and Fargate. Design schemas, write complex queries, and optimize performance for Oracle, PostgreSQL, and/or DynamoDB. DevOps automation using AWS CodePipeline/CodeBuild, GitHub Actions, and Terraform or AWS CDK. Knowledge of AI tools such as Copilot. Knowledge of automated unit testing. Experience working as a software engineer with the following knowledge and experience is preferred: - Working in Agile environments; - Fundamental IT technical skill sets; - Taking a system from scoping requirements through actual launch of the system; - Communicating with users, other technical teams and management to collect requirements, identify tasks, provide estimates and meet production deadlines; - Professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing and operations. Note: This role follows a hybrid work model, requiring 2–3 days per week in the office.
Qualifications Design and Develop End-to-End Solutions Lead the development of scalable and maintainable full-stack solutions using Angular (UI), Spring Boot (API/microservices), and relational/NoSQL databases in a serverless AWS environment (e.g., Lambda, API Gateway, DynamoDB, Aurora Serverless). Architect Serverless Applications Design and implement serverless microservices leveraging AWS services such as Lambda, Step Functions, SQS, SNS, and EventBridge to build event-driven architectures. Optimize API Performance and Security Build and tune RESTful APIs with Spring Boot, ensuring optimal performance, scalability, and security using OAuth2, JWT, and API Gateway policies. UI/UX Implementation and Responsiveness Deliver pixel-perfect, responsive, and accessible front-end applications using Angular and integrate them with backend services. Database Design and Query Optimization Design schema, write complex queries, and optimize performance for Oracle, PostgreSQL, and/or DynamoDB to support business logic and reporting needs. CI/CD and Infrastructure as Code (IaC) Contribute to DevOps automation using AWS CodePipeline/CodeBuild, GitHub Actions, and Terraform or AWS CDK for seamless deployments and infrastructure provisioning. Code Reviews and Mentorship Conduct code reviews, promote best practices in software development, and mentor junior developers across the stack. Monitoring, Logging, and Debugging Integrate and maintain observability tools like CloudWatch, X-Ray, Dynatrace, etc to ensure high availability and system reliability. Collaboration with Cross-Functional Teams Work closely with product managers, UI/UX designers, and QA to deliver high-quality features aligned with business goals. Stay Current with Emerging Technologies Proactively research, evaluate, and implement new tools or practices that improve productivity, performance, and code quality. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2416275 Relocation Package No
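The posting repeatedly references event-driven serverless architectures built from Lambda, Step Functions, SQS, SNS, and EventBridge. The sketch below shows that pattern at its simplest (EventBridge event in, SNS fan-out): the production services here would more likely be Spring Boot/Java, so treat this Python Lambda, its event fields, and the topic ARN as illustrative assumptions only.

```python
# Generic sketch of an EventBridge -> Lambda -> SNS fan-out step.
import json
import os

import boto3

sns = boto3.client("sns")
# Placeholder topic ARN supplied via environment configuration in a real deployment.
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:order-events")


def lambda_handler(event, context):
    """Receive an EventBridge event and republish its detail payload to an SNS topic."""
    detail = event.get("detail", {})
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=event.get("detail-type", "UnknownEvent"),
        Message=json.dumps(detail),
    )
    return {"published": True, "source": event.get("source")}
```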

Posted 1 month ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Sr. React + NodeJs Developer Experience: 9+ Years Location: Gurugram | Bangalore | Pune | Hyderabad | Noida Notice: Immediate Joiners Job Description: Key Responsibilities: - Track Lead full-stack engineering teams (frontend and backend). - Mentor team members and enforce engineering best practices. Primary Skills: - Frontend: React.js (JavaScript/ TypeScript) - Backend: Node.js (REST APIs) - Cloud/Platforms: AWS Lambda, API Gateway, SNS, SQS, Cognito, S3, CloudFront etc. - Database: PostgreSQL, MySQL, DynamoDB - Design: DDD, SOLID, GoF Design Patterns Good-to-Have: - Testing: Jest, Mocha, Jasmine, Postman, Playwright - DevOps: CloudFormation, Pulumi, CodeCommit/Build/Deploy, GitHub Actions - Monitoring: CloudWatch, X-Ray

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant – Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities: Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.
Qualifications we seek in you! Minimum qualifications: Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 4 projects end-to-end in Databricks. Must have hands-on experience with Databricks, covering the components below. Must-have skills: Azure Data Factory, Azure Databricks, Python and PySpark. Expert with database technologies and ETL tools. Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Delta Lake, Databricks workflows orchestration, Python, PySpark, etc. Good knowledge of the Azure, AWS, and GCP cloud platform service stacks. Good knowledge of Unity Catalog implementation. Good knowledge of integration with other tools, such as DBT and other transformation tools. Good knowledge of Unity Catalog integration with Snowflake. Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
Must have a good understanding of creating complex data pipelines. Must have good knowledge of data structures & algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit test cases and integration tests. Must have strong communication skills and have worked on a team of 5 or more. Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Good to have knowledge of Docker and Kubernetes.
Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Senior Principal Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 6:37:59 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
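As a rough illustration of the batch-pipeline work this Databricks role describes, the sketch below reads raw files, applies a transformation, and writes a partitioned Delta table. Paths, column names, and the cast/filter logic are placeholders; on Databricks the `spark` session is already available.

```python
# Minimal Delta Lake batch pipeline sketch (PySpark).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.format("csv")
    .option("header", "true")
    .load("/mnt/raw/sales/")            # placeholder input path
)

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)         # example business rule
    .withColumn("ingest_date", F.current_date())
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save("/mnt/curated/sales_delta")    # placeholder Delta location
)
```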

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant – Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities: Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.
Qualifications we seek in you! Minimum qualifications: Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 2 projects end-to-end in Databricks. Must have hands-on experience with Databricks, covering the components below: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of creating complex data pipelines. Must have good knowledge of data structures & algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit test cases and integration tests. Must have strong communication skills and have worked on a team of 5 or more. Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Good to have knowledge of Docker and Kubernetes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 7:33:07 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
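This listing also calls for streaming data pipelines alongside batch. The sketch below shows the streaming counterpart of the earlier batch example: a Structured Streaming job appending JSON events to a Delta table. The schema, input path, and checkpoint location are illustrative assumptions only.

```python
# Minimal Structured Streaming -> Delta sketch (PySpark).
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.getOrCreate()

# File-based streaming sources require an explicit schema.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("json")
    .schema(event_schema)
    .load("/mnt/landing/events/")                     # placeholder landing path
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/events")  # required for exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/curated/events_delta")               # placeholder Delta location
)
```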

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant – Databricks Lead Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities: Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.
Qualifications we seek in you! Minimum qualifications: Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Overall <<>>> years of experience in IT. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrates knowledge of relevant industry trends and standards. Demonstrates strong analytical and technical problem-solving skills. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 2 projects end-to-end in Databricks. Must have hands-on experience with Databricks, covering the components below: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of creating complex data pipelines. Must have good knowledge of data structures & algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
Must be strong in writing unit test cases and integration tests. Must have strong communication skills and have worked on a team of 15 or more. Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Good to have knowledge of Docker and Kubernetes.
Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Principal Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 6:54:44 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
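This role stresses unit testing of data pipelines. The sketch below shows one common approach: a pure PySpark transformation function exercised with a local SparkSession under pytest. The function name, columns, and business rule are illustrative assumptions.

```python
# Unit-testing a PySpark transformation with pytest (sketch).
import pytest
from pyspark.sql import SparkSession, DataFrame, functions as F


def drop_non_positive_amounts(df: DataFrame) -> DataFrame:
    """Business rule under test: discard rows with non-positive amounts."""
    return df.filter(F.col("amount") > 0)


@pytest.fixture(scope="session")
def spark():
    # Local session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_drop_non_positive_amounts(spark):
    df = spark.createDataFrame(
        [("a", 10.0), ("b", -5.0), ("c", 0.0)],
        ["id", "amount"],
    )
    result = drop_non_positive_amounts(df).collect()
    assert [row["id"] for row in result] == ["a"]
```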

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 8 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead-writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components-LLM inference endpoints, embedding stores, prompt services-into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS and Lambda based event driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
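The nice-to-have list above mentions Redis, Celery, SQS, and Lambda-based event-driven pipelines. Below is a hedged sketch of what a Celery application using SQS as its broker might look like, with one task per pipeline stage; the queue region, task names, and payload shape are invented for illustration.

```python
# Celery over SQS: two chained pipeline stages (sketch).
from celery import Celery

app = Celery("cognitive_pipelines", broker="sqs://")  # AWS credentials resolved from the environment
app.conf.broker_transport_options = {"region": "ap-south-1", "visibility_timeout": 3600}


@app.task(name="pipeline.extract_entities", max_retries=3)
def extract_entities(document_id: str) -> dict:
    """Placeholder stage: fetch a document and run NLP extraction on it."""
    # A real pipeline would call the extraction model/service here.
    return {"document_id": document_id, "entities": []}


@app.task(name="pipeline.index_document")
def index_document(extraction: dict) -> None:
    """Placeholder stage: push extracted entities into the search/embedding store."""
    pass


# Example of chaining the stages as an event-driven flow:
# (extract_entities.s("doc-123") | index_document.s()).apply_async()
```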

Posted 1 month ago

Apply

6.0 years

30 Lacs

India

Remote

Vacancy with a company focused on digital transformation, specializing in intelligent automation, digitalization, data science & analytics, and mobile enablement. They help businesses improve cost efficiency, productivity, and agility by reducing turnaround time and errors. The company provides services and solutions including operations digital transformation consulting, next-gen shared services setup consulting, cognitive RPA deployment, and AI-enabled CX enhancement. Founded in 2020, with HQ in Gurugram, India, the company now also operates from Noida, Mumbai, Hyderabad, and Bengaluru.
Job Role: 6+ years of experience in Data Engineering on the AWS stack, PySpark and DBT. Programming experience with advanced Python and SQL. Experience with Kubernetes and Docker. Hands-on experience with AWS services such as CloudFormation, S3, Lambda, Step Functions, IAM, KMS, Athena, Glue, Glue DataBrew, EMR/Spark, DataSync, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, CloudTrail. Good experience building real-time streaming data pipelines with Kafka, Kinesis, etc. Proven competency in building solutions on a data lake/data warehouse. Experience in Big Data, PySpark, and AWS development, specializing in application development, enhancement, and service delivery.
● Development of high-volume, low-latency applications, delivering high availability and performance.
● Efficient in analysis and understanding of the business environment.
● Experienced in development of AWS infrastructure.
● Experienced in Spark, with excellent knowledge of its functioning.
Data Modeling Tools: Erwin, ER Studio, MySQL Workbench, Hackolade, Oracle Data Modeler
Job Types: Full-time, Permanent Pay: Up to ₹3,000,000.00 per year Benefits: Cell phone reimbursement, Internet reimbursement, Life insurance, Paid sick time, Paid time off, Work from home Work Location: In person
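For the real-time streaming requirement in this listing (Kafka/Kinesis feeding a data lake), here is a minimal sketch of a Python Kafka consumer that micro-batches events into S3 via boto3. The topic, broker address, bucket, and batch size are placeholders, not project details.

```python
# Kafka -> S3 micro-batch loader (sketch).
import json

import boto3
from kafka import KafkaConsumer  # pip install kafka-python

s3 = boto3.client("s3")
BUCKET = "example-raw-events"            # placeholder bucket

consumer = KafkaConsumer(
    "orders",                             # placeholder topic
    bootstrap_servers=["broker-1:9092"],  # placeholder broker
    group_id="raw-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:                 # flush in fixed-size micro-batches
        key = f"orders/offset={message.offset}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(batch).encode("utf-8"))
        batch = []
```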

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 9 Lacs

Gurgaon

On-site

About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead-writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components-LLM inference endpoints, embedding stores, prompt services-into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS and Lambda based event driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
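The requirements above include streaming and batch ETL design with tools such as Airflow. As a rough Airflow 2.x-style sketch, the DAG below wires extract, transform, and load steps as daily tasks; the DAG id, schedule, and function bodies are placeholders only.

```python
# Minimal daily ETL DAG (Airflow 2.x style, sketch).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from the source system.
    return ["record-1", "record-2"]


def transform(**context):
    # Placeholder: clean/enrich the extracted records.
    pass


def load(**context):
    # Placeholder: write curated output to the warehouse or feature store.
    pass


with DAG(
    dag_id="ml_feature_batch_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```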

Posted 1 month ago

Apply

6.0 years

3 - 15 Lacs

Noida

On-site

Lead Full-Stack Engineer
Experience: 6+ years (3–4 enterprise applications deployed)
Role Summary: We are building a large-scale, real-time video learning platform powered by an AI module and playback resumption. As the Lead Full-Stack Engineer, you will define and lead architecture, mentor engineers, ensure quality delivery, and own critical components across React, FastAPI, Django, PostgreSQL, Redis, and AWS.
Tech Stack: Frontend: React (Hooks, Context, Router); Backend: FastAPI (async), Django (REST, ORM); Database: PostgreSQL, Redis; Infra: Docker, AWS (ECS, RDS, CloudFront, S3, Secrets Manager); CI/CD: GitHub Actions
Responsibilities: Architect and lead full-stack development of scalable web applications. Guide the team through backend (FastAPI/Django) and frontend (React) best practices. Design and manage APIs (REST and WebSocket). Optimize and maintain scalable PostgreSQL schemas and Redis caching. Ensure end-to-end CI/CD pipelines using Docker + GitHub Actions. Manage secure, reliable deployments on AWS (ECS, S3, CloudFront, etc.). Perform code reviews, enforce clean architecture, and mentor junior engineers. Collaborate with product, design, and ML teams to deliver key product goals.
Hands-on Required Skills: 6+ years of full-stack experience with React + a Python backend (Django/FastAPI). Successfully deployed 3+ enterprise-grade products in production. Deep understanding of PostgreSQL optimization and Redis. Proficient in Docker, GitHub Actions, and AWS-based deployment pipelines. Experience with JWT-based auth, session management, and role-based access. Experience with microservices and real-time communication (WebSocket, queues).
Bonus Skills: Redis Streams, Kafka, or event-driven design; AWS MSK or SQS/Kinesis; previous experience with video platforms or ML/data-heavy apps.
Preferred: Immediate joiner
Job Types: Full-time, Permanent Pay: ₹332,308.96 - ₹1,526,199.02 per year Benefits: Health insurance Schedule: Day shift, Fixed shift, Morning shift Experience: Django: 8 years (Preferred), PostgreSQL: 9 years (Preferred), AWS: 8 years (Preferred), CI/CD: 8 years (Preferred), Docker: 9 years (Preferred), Python: 9 years (Preferred), Kafka: 8 years (Preferred), GitHub: 8 years (Preferred) Work Location: In person Application Deadline: 03/07/2025 Expected Start Date: 02/07/2025
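As a small sketch of the FastAPI + Redis portion of this stack, the endpoint below caches playback-resumption lookups. The route, key format, TTL, and the database helper are assumptions for illustration, not the product's actual API.

```python
# Async FastAPI endpoint with Redis caching (sketch).
import json

import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


async def load_progress_from_db(user_id: str, video_id: str) -> dict:
    # Placeholder for the real PostgreSQL query (e.g. via Django ORM or SQLAlchemy).
    return {"user_id": user_id, "video_id": video_id, "position_seconds": 0}


@app.get("/videos/{video_id}/progress")
async def get_progress(video_id: str, user_id: str):
    key = f"progress:{user_id}:{video_id}"
    cached = await cache.get(key)
    if cached:
        return json.loads(cached)

    progress = await load_progress_from_db(user_id, video_id)
    await cache.set(key, json.dumps(progress), ex=300)  # 5-minute TTL
    return progress
```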

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 9 Lacs

Ahmedabad

On-site

About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead-writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components-LLM inference endpoints, embedding stores, prompt services-into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS and Lambda based event driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
● Health & Wellness: Health care coverage designed for the mind and body.
● Flexible Downtime: Generous time off helps keep you energized for your time on.
● Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
● Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
● Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
● Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------

Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------

IFTECH103.2 - Middle Management Tier II (EEO Job Group)
Job ID: 317386
Posted On: 2025-06-30
Location: Gurgaon, Haryana, India

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies