
19594 NoSQL Jobs - Page 26

JobPe aggregates listings for easy browsing, but applications are submitted directly on the original job portal.

0.0 - 4.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As an intern at Softsensor.ai, you will be responsible for building high-quality, innovative, and fully functional web applications in compliance with coding standards and technical design. You will use the MERN stack (MongoDB, Express.js, React.js, Node.js) and your knowledge of HTML, CSS, JavaScript, SQL, and NoSQL data stores to enhance our products. Your tasks will involve developing high-quality software design and architecture based on the MERN stack; identifying, prioritizing, and executing tasks in the software development life cycle; and building and enhancing web applications using various technologies. Your role will also include producing clean, efficient code for tools and applications, automating tasks through appropriate tooling and scripting, collaborating with internal teams and vendors to fix and improve products, documenting development phases, and monitoring systems. Additionally, you will keep the software up to date with the latest technologies and participate in the company's agile development processes and continuous integration framework.

Softsensor.ai is a USA- and India-based corporation focused on delivering outcomes to clients using data. The company's expertise lies in a collection of people, methods, and accelerators used to rapidly deploy solutions for clients. Its principals have significant experience with leading global consulting firms and corporations, delivering large-scale solutions. Softsensor.ai is specifically focused on data science and analytics for improving process and organizational performance, working with cutting-edge data science technologies like NLP, CNN, and RNN and applying them in a business context.

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Senior Software Engineer at our company in Ahmedabad, with over 5 years of experience, you will play a crucial role in designing and developing backend systems using Java and a microservices architecture. Your responsibilities include actively participating in system design, contributing to architectural decisions, and ensuring the development of high-quality, production-ready services.

Your key responsibilities will involve designing and building microservices using Java (Spring Boot), implementing real-time features using WebSocket, handling background and scheduled tasks with ScheduledExecutorService, and applying microservice design patterns effectively within the system architecture. You will be expected to create clean service boundaries and well-defined APIs, and to ensure asynchronous communication. Additionally, you will contribute to decisions on service granularity, data consistency, and fault tolerance.

Your skills and experience should include strong programming abilities in Java (11 or higher); proficiency in Spring Boot and Spring Cloud Gateway for building RESTful microservices; practical knowledge of microservices design patterns; hands-on experience with Spring Data JPA and Hibernate for data persistence; familiarity with WebSocket in Java; proficiency with ScheduledExecutorService; experience with event-driven systems using Kafka; and working knowledge of RDBMS and, optionally, NoSQL databases. You should also be well-versed in containerized environments using Docker, understand authentication and authorization principles, have hands-on experience with CI/CD pipelines, and be proficient in monitoring/logging tools like Prometheus, Grafana, and ELK. A strong problem-solving mindset and experience troubleshooting distributed systems are essential for success in this role.

You will also collaborate with DevOps to ensure smooth deployment, monitoring, and observability of services, and mentor junior engineers by sharing technical insights within the team. If you are a passionate Senior Software Engineer with a deep understanding of distributed systems and the ability to translate business needs into scalable service components, we encourage you to apply for this challenging and rewarding opportunity.
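For illustration only: the listing names the JDK's ScheduledExecutorService for background and scheduled tasks. Below is a minimal, self-contained sketch of that pattern; the class and task are invented for this page and are not code from the employer.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch: a single-threaded scheduler running a recurring job,
// the kind of background task the listing describes.
public class HealthCheckScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Run every 30 seconds after an initial 10-second delay.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("Polling downstream service at " + System.currentTimeMillis()),
                10, 30, TimeUnit.SECONDS);

        // In a real Spring Boot service, this lifecycle would be tied to the
        // application context rather than left running indefinitely.
    }
}
```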

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should have at least 6 years of experience designing and developing enterprise and/or consumer-facing applications using technologies like JavaScript, Node.js, ReactJS, Angular, SCSS, CSS, and React Native, plus 3+ years of experience leading teams and taking responsibility for timely delivery. Hands-on experience with both SQL and NoSQL databases, as well as working in a Linux environment, is essential, and strong debugging and problem-solving skills are crucial for this role.

You should be proficient in developing responsive web applications and possess excellent communication skills to interact effectively with customers. An eagerness to learn alternative technologies as required is highly valued. Familiarity with the product development lifecycle, including prototyping, hardening, and testing, is also necessary.

In terms of additional skills and experience, working knowledge of Python and NoSQL databases like MongoDB and Cassandra is advantageous. Participation in product functional and user-experience design, as well as experience in the AI, ML, NLP, and Predictive Analytics domains, is a plus. Familiarity with i18n and the latest UI/UX design trends, along with experience with CI/CD, Jenkins, and Nginx, would be beneficial for this role.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The Site Reliability Engineer (SRE) plays a crucial role in ensuring the availability, performance, and scalability of critical systems. Your responsibilities include managing CI/CD pipelines, monitoring production environments, automating operations, and collaborating with development and infrastructure teams to enhance platform reliability.

Key Responsibilities
- Manage alerts and monitoring of critical production systems.
- Enhance CI/CD pipelines and deployment strategies.
- Collaborate with central platform teams on reliability initiatives.
- Automate testing, regression, and operational tooling for efficiency.
- Conduct NFR testing on production systems.
- Implement Debian version migrations with minimal disruption.

Required Qualifications & Skills
- Proficiency in CI/CD tools like Jenkins, Docker, JFrog.
- Experience in Debian OS migration and upgrades.
- Knowledge of monitoring tools such as Grafana and Nagios.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Working knowledge of Git and version control systems.
- Deep understanding of Kubernetes architecture and deployment pipelines.
- Proficiency in networking protocols and tools like TCP/IP, UDP, Wireshark.
- Strong skills in Linux, scripting, and databases such as MySQL and NoSQL stores.

Soft Skills
- Strong problem-solving and analytical abilities.
- Effective communication and collaboration with cross-functional teams.
- Ownership mindset, accountability, and adaptability to dynamic environments.
- Detail-oriented approach and proactive problem-solving.

Preferred Qualifications
- Bachelor's degree in Computer Science or related field.
- Certifications in Kubernetes, Linux, or DevOps practices.
- Experience with cloud platforms like AWS, GCP, or Azure.
- Exposure to service mesh, observability stacks, or SRE toolkits.

Key Relationships
- Internal: DevOps, Infrastructure, Software Development, QA, Security teams
- External: Tool vendors, platform service providers

Role Dimensions
- Impact on uptime and reliability of business-critical services.
- Ownership of CI/CD and deployment processes.
- Contribution to cross-team reliability and scalability initiatives.

Success Measures (KPIs)
- System uptime and availability (SLA adherence).
- Incident response metrics (MTTD, MTTR).
- Deployment success rate and automation coverage.
- Completion of OS migration and infrastructure upgrade projects.

Competency Framework Alignment
- Technical Mastery: Infrastructure, automation, CI/CD, Kubernetes, monitoring.
- Execution Excellence: Timely project delivery, process improvements.
- Collaboration: Cross-functional team engagement and support.
- Resilience: Problem solving under pressure, incident response.
- Innovation: Continuous improvement of operational reliability and performance.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Cloud Engineer at PAC Panasonic Avionics Corporation in Pune, India, you will have the exciting opportunity to modernize our legacy SOAP-based Airline Gateway (AGW) by building a cloud-native, scalable, and traceable architecture using AWS, Python, and DevOps practices. Your role will involve migrating from legacy SOAP APIs to modern REST APIs and implementing CI/CD pipelines, containerization, and automation processes to enhance system performance, reliability, and maintainability. You will play a crucial part in backend development, networking, and cloud-based solutions, contributing to scalable and efficient applications.

Your responsibilities will include designing, building, and deploying cloud-native solutions on AWS; developing and maintaining backend services and web applications using Python; and implementing CI/CD pipelines, automation, and containerization with tools like Docker, Kubernetes, and Terraform. You will use Python for backend development, ensuring scalability, security, and high availability of cloud systems while adhering to AWS best practices. Monitoring and logging solutions for real-time observability, system traceability, and performance tracking will also be part of your role. You will work closely with cross-functional teams to integrate cloud-based solutions, ensure alignment with cloud security and compliance standards, and actively contribute to performance improvements and infrastructure optimization.

Your skills and qualifications should include experience with AWS cloud services, strong backend development experience with Python, proficiency in building and maintaining web applications and backend services, and a solid understanding of Python web frameworks like Flask, Django, or FastAPI. Experience with database integration, DevOps tools, RESTful API design, cloud security, monitoring tools, and cloud infrastructure management is also essential.

If you have a passion for cloud engineering, a strong background in Python development, and the ability to deliver scalable solutions to business challenges, this role is perfect for you. Join our team and be part of the exciting transformation towards cloud-native solutions in the airline industry.

Experience Range: 3 to 5 years

Preferred Skills:
- Experience with airline industry systems or understanding of airline-specific technologies
- AWS certifications, especially AWS Certified Solutions Architect, DevOps Engineer, or Developer
- Familiarity with serverless architectures (AWS Lambda, API Gateway) and microservices
- Strong problem-solving skills and the ability to analyze complex technical issues for tailored solutions

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will have the opportunity to work alongside top engineers to build and support the backend stack for world-class mobile gaming software. You will be responsible for designing, developing, and maintaining new and existing functionality for server-side APIs and services using REST/GraphQL. Your role will involve participating in the development lifecycle to solve technical problems of varying scope and complexity. Additionally, you will analyze and research new technologies to provide solutions that can be implemented across the product. Bringing your experience and creativity to enhance code quality, security, and performance will be a key aspect of your responsibilities. You will also mentor other developers by engaging in code reviews and offering feedback and suggestions. Integrating third-party APIs and managing product integrations with digital apps and startups will also be part of your duties.

The ideal candidate has 5+ years of software development experience in TypeScript/JavaScript/Node.js and at least 3 years of experience working with serverless architecture. Proficiency in AWS services or other cloud providers is essential, along with a solid understanding of SQL/NoSQL databases such as MySQL and Redis. Experience with version control tools like Git and GitHub is also required.

Joining our team will expose you to a startup environment where you will collaborate with a passionate and energetic team driven by hustle, camaraderie, and togetherness. Our culture fosters innovation, giving each employee the freedom to create, innovate, and contribute to a great learning environment and career trajectory. From medical insurance to a supportive work culture, a collaborative work environment, and engaging learning and development sessions, we provide a comprehensive package to support your professional journey. We are excited to hear from you and welcome the opportunity to have you join our team!

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Andhra Pradesh

On-site

As a Senior Full Stack Developer based in Visakhapatnam, Andhra Pradesh, you will be responsible for designing, developing, and maintaining scalable web applications using modern frontend and backend technologies. With over 5 years of experience in full stack development, you will leverage your expertise to collaborate with cross-functional teams in defining, designing, and implementing new features. Your proven track record in both desktop and mobile application development will be essential as you write clean, scalable, and efficient code for frontend and backend systems. Additionally, your experience optimizing database queries and structures for performance will contribute to the high performance and reliability of the applications.

You will play a key role in the entire product development lifecycle, from design to deployment and maintenance of new and existing features. Participation in code reviews, troubleshooting, and debugging will ensure code quality and optimize performance. Moreover, your familiarity with Agile development practices will enable effective project delivery and team collaboration.

In terms of technical skills, you should be proficient in a range of frontend technologies such as React.js, Redux, JavaScript, TypeScript, HTML5, CSS3, and Tailwind CSS. Your expertise in backend technologies like Node.js, Python, Java, or Ruby on Rails, along with database management using SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB), will be crucial for successful project delivery. Knowledge of API development (RESTful and GraphQL), version control (Git and GitHub), DevOps practices (CI/CD, Docker, cloud services), testing tools (Jest, React Testing Library), and build tools (Webpack, Parcel) will be highly beneficial. Strong problem-solving skills, analytical thinking, communication abilities, teamwork, and project management skills are essential soft skills for this role.

Ideally, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, or possess equivalent experience. Additional qualifications such as relevant certifications (e.g., AWS Certified Solutions Architect), experience with UI/UX design principles, knowledge of web security best practices, and a commitment to continuous learning and staying updated with the latest technologies will be advantageous.

If you are passionate about staying current with emerging technologies and industry trends, mentoring junior developers, and contributing to the overall growth of the development team, this position offers an exciting opportunity to make a significant impact. The open date for this position is Mar-25-2025.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Java Microservices Developer at Deutsche Bank Group, you will be responsible for designing and developing microservices applications using Java in a cloud environment. Your role will involve implementing RESTful APIs, integrating with other services, and ensuring high performance and scalability of the applications. You will collaborate with cross-functional teams to gather requirements, and troubleshoot, debug, and upgrade existing systems. Additionally, you will participate in code reviews, write unit and integration tests, and document development processes.

The ideal candidate should have 3-5 years of experience with Java, Spring Boot, microservices, RESTful APIs, Docker, Kubernetes, AWS, CI/CD, SQL, NoSQL, and Git. You should have proven experience as a Java Developer with a focus on microservices, a strong understanding of microservices architecture and design patterns, and experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with containerization tools like Docker and Kubernetes, experience with CI/CD pipelines, and excellent problem-solving and analytical skills are also required.

Deutsche Bank Group offers a range of benefits including a best-in-class leave policy, gender-neutral parental leave, sponsorship for industry-relevant certifications and education, comprehensive insurance coverage, and a culture of continuous learning and development. As part of a collaborative and inclusive work environment, you will receive training and coaching from experts in your team to support your career progression.

If you are a proactive and skilled Java Microservices Developer looking to excel in your career while contributing to agile development projects and collaborating with stakeholders, we invite you to apply and be a part of our team at Deutsche Bank Group.
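For illustration only: a minimal Spring Boot REST controller of the kind this role describes, assuming the spring-boot-starter-web dependency. The endpoint and payload are invented and are not taken from any Deutsche Bank system.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical single-file microservice exposing one read-only endpoint.
@SpringBootApplication
@RestController
public class AccountServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(AccountServiceApplication.class, args);
    }

    // GET /accounts/{id} returns a placeholder JSON payload.
    @GetMapping("/accounts/{id}")
    public String getAccount(@PathVariable String id) {
        return "{\"accountId\": \"" + id + "\", \"status\": \"ACTIVE\"}";
    }
}
```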

Posted 4 days ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial.

A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services like AWS or Azure Databricks is essential.

The role requires the ability to lead a team efficiently, design and implement Big Data solutions, and work as a practitioner of the Agile methodology. This position falls under the Data Engineer category and is suitable for ML/AI Engineers, Data Scientists, and Software Engineers.
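For illustration only: the listing emphasizes Spark with Python, but to keep this page's examples in a single language, here is the same idea via Spark's Java API: a small batch aggregation. The input path and column names are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

public class OrdersByRegion {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-by-region")
                .getOrCreate();

        // Hypothetical CSV source with a header row.
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .csv("s3://example-bucket/orders.csv");

        // Count orders per region, largest first.
        orders.groupBy(col("region"))
              .count()
              .orderBy(col("count").desc())
              .show();

        spark.stop();
    }
}
```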

Posted 4 days ago

Apply

170.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary

Strong hands-on developer/DevOps engineer for the Credit Grading Hive in PE. This is the #1 priority for the FFG program, and strong engineering talent is required to drive the rebuild of the CreditMate legacy platform. The required skillset is to completely overhaul the platform and develop an in-house solution on the latest technology stack. The person will be part of the team developing the new CreditMate, aligned with the CC-wide Unified UI/UX strategy.

Key Responsibilities

Strategy
- Advise on future technology capabilities and architecture design, considering business objectives, technology strategy, trends, and regulatory requirements.
- Awareness and understanding of the Group's business strategy and model appropriate to the role.

Business
- Awareness and understanding of the wider business, economic, and market environment in which the Group operates.
- Understand and recommend business flows and translate them into an API ecosystem.

Processes
- Responsible for executing and supervising microservices development to facilitate business capabilities, orchestrated to achieve business outcomes.

People & Talent
- Lead through example and build the appropriate culture and values. Set appropriate tone and expectations for the team and work in collaboration with risk and control partners.
- Ensure the provision of ongoing training and development of people, and ensure that holders of all critical functions are suitably skilled and qualified for their roles, with effective supervision in place to mitigate any risks.

Risk Management
- The ability to interpret the portfolio's key risks, identify key issues based on this information, and put in place appropriate controls and measures.

Governance
- Awareness and understanding of the regulatory framework in which the Group operates, and the regulatory requirements and expectations relevant to the role.

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines, and the Group Code of Conduct.
- Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct, and compliance matters.
- Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
- Serve as a Director of the Board.
- Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Key Stakeholders
- Product Owners, Hive Leads, Client Coverage tech and business stakeholders.

Qualifications
- Education: B.Tech (or equivalent) in Computer Science/IT
- Certifications: Java, Kubernetes
- Languages: Java, Quarkus, Spring, SQL, Python

Skills and Experience
- Participates in development of multiple or large software products; estimates and monitors development costs based on functional and technical requirements.
- Delivery experience as a tech project manager, plus analysis skills.
- Contrasts advantages and drawbacks of different development languages and tools.
- Expertise in RDBMS solutions (Oracle, PostgreSQL) and NoSQL offerings (Cassandra, MongoDB, etc.).
- Experience in distributed technologies (e.g., Kafka, Apache MQ, RabbitMQ) will be an added advantage.
- Strong knowledge of application integration using web services (SOAP/REST/gRPC) or messaging using JMS.
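For illustration only: the stack named above includes Quarkus and REST integration. A minimal Quarkus resource might look like the sketch below (recent Quarkus 3.x uses the jakarta.ws.rs namespace; older versions use javax.ws.rs). The endpoint and names are invented and are not from the CreditMate codebase.

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Hypothetical credit-grading endpoint.
@Path("/grades")
public class CreditGradeResource {

    @GET
    @Path("/{customerId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String grade(@PathParam("customerId") String customerId) {
        // A real implementation would call a scoring service; this stub
        // echoes the ID with a placeholder grade.
        return "{\"customerId\": \"" + customerId + "\", \"grade\": \"B+\"}";
    }
}
```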
About Standard Chartered

We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer

In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, combined to a minimum of 30 days.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-Have Skills: Java Full Stack Development
Good-to-Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are built to the highest standards of quality and functionality. You will also participate in testing and debugging processes, contributing to the overall success of the projects you are involved in.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and best practices.

Professional & Technical Skills:
- Must-have: proficiency in Java Full Stack Development.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
- Experience with back-end frameworks like Spring or Hibernate.
- Familiarity with database management systems, including SQL and NoSQL databases.
- Knowledge of version control systems, particularly Git.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Java Full Stack Development.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
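For illustration only: the back-end skills above name Spring and Hibernate. A minimal Spring Data JPA sketch is shown below, assuming Spring Boot 3 (jakarta.persistence); the entity and repository are invented for illustration.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

import java.util.List;

// Hypothetical entity mapped to a "customer" table.
@Entity
class Customer {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    private String city;

    protected Customer() {} // no-arg constructor required by JPA
    public Customer(String name, String city) {
        this.name = name;
        this.city = city;
    }
}

// Spring Data derives the query from the method name at runtime,
// so no SQL is written for simple lookups.
interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByCity(String city);
}
```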

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting of data pipelines.
- Designs, develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, in order to minimize manual and error-prone processes and improve productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation, to meet business, technical, security, governance and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes, using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
5-8 years of intermediate experience in a relevant discipline is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
- Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a Big Data platform using open source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement for a cloud-based environment, plus other data extraction tools and methods for a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- Experience with IoT technology
- Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in the evaluation of new data tools and POCs, and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
- Database Management: Expertise in SQL and NoSQL databases.
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
- Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
- Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
- API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417810
Relocation Package: Yes
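For illustration only: the experience profile above names Kafka for event-based ingestion. A minimal consumer loop using the plain Kafka client is sketched below; the broker address, group ID, and topic are hypothetical.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class SensorEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "sensor-ingest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sensor-events")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // A real pipeline would transform and load into the lake here.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```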

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting of data pipelines.
- Designs, develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, in order to minimize manual and error-prone processes and improve productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation, to meet business, technical, security, governance and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes, using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
5-8 years of intermediate experience in a relevant discipline is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
- Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a Big Data platform using open source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement for a cloud-based environment, plus other data extraction tools and methods for a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- Experience with IoT technology
- Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in the evaluation of new data tools and POCs, and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
- Database Management: Expertise in SQL and NoSQL databases.
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
- Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
- Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
- API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417809
Relocation Package: Yes

Posted 4 days ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the business and IT teams to understand the requirements and best leverage the technologies to enable agile data delivery at scale.

Key Responsibilities
- Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Implements methods to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Develops physical data models and implements data storage architectures as per design guidelines.
- Analyzes complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical and logical data models.
- Participates in testing and troubleshooting of data pipelines.
- Develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses agile development technologies, such as DevOps, Scrum, Kanban and the continuous improvement cycle, for data-driven applications.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation, to meet business, technical, security, governance and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes, using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
4-5 years of experience. Relevant experience preferred, such as temporary student employment, an internship, a co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred, including:
- Exposure to Big Data open source tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Familiarity developing applications requiring large file movement for a cloud-based environment
- Exposure to Agile software development
- Exposure to building analytical solutions
- Exposure to IoT technology

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Work under limited supervision to design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs), with guidance from senior data engineers.
6) Take part in the evaluation of new data tools and POCs, with guidance from senior data engineers.
7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision.
8) Assist in resolving issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
- Database Management: Intermediate expertise in SQL and NoSQL databases.
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
- Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
- API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417808
Relocation Package: Yes
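For illustration only: this role centers on ETL between transactional systems and a warehouse. A deliberately small JDBC extract-transform-load loop is sketched below; the connection strings, credentials, and table names are all hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleEtlJob {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection("jdbc:postgresql://src-host/erp", "user", "pass");
             Connection dst = DriverManager.getConnection("jdbc:postgresql://dwh-host/warehouse", "user", "pass");
             Statement extract = src.createStatement();
             ResultSet rows = extract.executeQuery("SELECT order_id, amount FROM orders");
             PreparedStatement load = dst.prepareStatement(
                     "INSERT INTO fact_orders (order_id, amount_usd) VALUES (?, ?)")) {

            while (rows.next()) {
                load.setLong(1, rows.getLong("order_id"));
                // Transform step: round amounts to two decimals before loading.
                load.setDouble(2, Math.round(rows.getDouble("amount") * 100) / 100.0);
                load.addBatch();
            }
            load.executeBatch(); // bulk-load into the warehouse table
        }
    }
}
```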

Posted 4 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Presidio, Where Teamwork and Innovation Shape the Future

At Presidio, we're at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses, and their customers, to achieve more through innovation, automation, and intelligent insights.

The Role
Presidio is looking for an Architect to design and implement complex systems and software architectures across multiple platforms. The ideal candidate will have extensive experience in systems architecture, software engineering, cloud technologies, and team leadership. You will be responsible for translating business requirements into scalable, maintainable technical solutions and guiding development teams through implementation.

Responsibilities Include
- Design, plan, and manage cloud architectures leveraging AWS, Azure, and GCP, ensuring alignment with business objectives and industry best practices.
- Evaluate and recommend appropriate cloud services and emerging technologies to enhance system performance, scalability, and security.
- Lead the development and integration of software solutions using a variety of programming languages (Java, .NET, Python, Golang, etc.).
- Develop and maintain automated solutions for cloud provisioning, governance, and lifecycle management, utilizing Infrastructure as Code (IaC) tools such as Terraform and Ansible.
- Collaborate with cross-functional teams to gather requirements, translate business needs into technical specifications, and deliver robust cloud-native solutions.
- Guide and mentor development teams, enforcing architectural standards, coding best practices, and technical excellence.
- Provide expert consultation to internal and external stakeholders, offering recommendations on cloud migration, modernization, and optimization strategies.
- Ensure compliance with security, regulatory, and cost management policies across cloud environments.
- Stay current with industry trends, emerging technologies, and best practices, proactively introducing innovations to the organization.

Required Skills and Professional Experience
- 10+ years of experience in software architecture, including significant experience with cloud infrastructure and hyperscaler platforms (AWS, Azure, GCP).
- Deep expertise in at least one hyperscaler (AWS, Azure, or GCP), with working knowledge of the others.
- Strong programming skills in multiple languages (Java, C#, Node, JavaScript, .NET, Python, Golang, etc.).
- Experience with services/microservices development and relational databases (Postgres, MySQL, Oracle, etc.).
- Expertise in open-source technologies and NoSQL/RDBMS such as Couchbase, Elasticsearch, RabbitMQ, MongoDB, Cassandra, Redis, etc.
- Excellent verbal and written communication skills.
- Knowledge of project management tools and Agile methodologies.
- Certification in AWS or Azure is preferred.

Your Future at Presidio
Joining Presidio means stepping into a culture of trailblazers: thinkers, builders, and collaborators who push the boundaries of what's possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of Generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you'll be part of a team that is shaping the future. Ready to innovate? Let's redefine what's next, together.

About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries, with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing. Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio's expert technical team develops custom applications, provides managed services, enables actionable data insights and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.

Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component of our business success.

Recruitment Agencies, Please Note: Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address or to Presidio employees by any other means. Presidio is not responsible for any fees related to unsolicited resumes/CVs.
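For illustration only: this page lists NoSQL roles, and the skills above name MongoDB among the NoSQL stores. A minimal sketch using the MongoDB Java sync driver is shown below; the connection string, database, and collection names are hypothetical.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MongoQuickstart {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> devices =
                    client.getDatabase("inventory").getCollection("devices");

            // Insert one document, then read it back with an equality filter.
            devices.insertOne(new Document("serial", "SN-1001").append("status", "active"));
            Document found = devices.find(new Document("serial", "SN-1001")).first();
            System.out.println(found == null ? "not found" : found.toJson());
        }
    }
}
```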

Posted 4 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Software Engineer – Fresher

Job Location: Bangalore
Experience Required: 0–1 years

About the Role:
We are looking for enthusiastic and talented freshers to join our cnMaestro team. As a fresher, you will receive structured knowledge transfer and hands-on experience with cnMaestro modules and features, and work with modern technologies to build scalable and robust software systems.

Key Responsibilities:
- Learn and contribute to the development and maintenance of cnMaestro applications
- Work closely with senior developers and product teams to understand requirements
- Write clean, efficient, and well-documented code with unit tests
- Participate in code reviews, testing, and bug fixing
- Stay updated with the latest programming trends and technologies

Qualifications:
- Bachelor's degree in Computer Science, IT, Electronics, or related fields (BE/B.Tech/MCA)
- Strong understanding of at least one programming language: Python, NodeJS, Golang
- Knowledge of Data Structures, Algorithms, and OOP concepts
- Good problem-solving and analytical skills
- Good communication and team collaboration skills
- Project experience in software development

Good to Have (Optional):
- Knowledge of AWS and AI/ML agentic frameworks
- Exposure to web technologies (HTML/CSS/JavaScript/Angular)
- Familiarity with databases (SQL/NoSQL/cache DB/analytics DB/vector DB)

Posted 4 days ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

SDE‑2 Backend Engineer — OpsLyft Location: Noida, India (On-site, full-time, 5 days/week) Experience Required: 4+ years in backend development using Python and/or Go About OpsLyft OpsLyft builds cloud-native infrastructure tools to help engineering teams manage cloud systems at scale—leveraging AWS, Kubernetes, microservices, and real-time data systems. Role Overview You will design and deliver backend services and APIs, work with PostgreSQL, MongoDB, and streaming systems, and collaborate with product, frontend, DevOps, and data engineering teams. Your work will shape the architecture and technical direction of critical infrastructure systems. Responsibilities Build and maintain backend services in Python or Go Architect and deploy AWS-based microservices Manage relational and NoSQL databases Contribute to data pipelines or event-driven architecture Participate in code reviews and mentor junior engineers Engage in system and architecture design What We’re Looking For 4+ years of backend engineering experience Strong expertise in Python and/or Golang Practical experience with AWS and container orchestration (Docker, Kubernetes) Proficiency with PostgreSQL and MongoDB Ability to write clean, scalable, and maintainable code Preferred: hands-on experience with streaming systems, Terraform, CI/CD pipelines, or observability tooling Application Instructions Send your resume and a brief note about why you’re interested to hr@opslyft.com. The process usually includes a technical screen, a coding/design exercise, and a final conversation with leadership. Candidates typically hear back within 1–2 weeks.
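As a rough illustration of the backend work described (Python being one of the two languages the role names), here is a minimal FastAPI service sketch; the cost-record endpoints and the in-memory store standing in for PostgreSQL/MongoDB are illustrative assumptions, not OpsLyft's actual API:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="cost-service")

class CostRecord(BaseModel):
    account_id: str
    service: str
    usd: float

# In-memory store for the sketch; a real service would use PostgreSQL or MongoDB.
_records: dict[str, list[CostRecord]] = {}

@app.post("/costs", status_code=201)
def ingest(record: CostRecord) -> dict:
    # Append the record under its account.
    _records.setdefault(record.account_id, []).append(record)
    return {"stored": True}

@app.get("/costs/{account_id}/total")
def total(account_id: str) -> dict:
    if account_id not in _records:
        raise HTTPException(status_code=404, detail="unknown account")
    return {"account_id": account_id,
            "usd": sum(r.usd for r in _records[account_id])}
```

Run locally with `uvicorn app:app --reload`; the service then exposes the kind of ingest and aggregation endpoints the role describes.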

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Key Responsibilities: Design and develop high-performance backend services using Java (18/21) and Spring Boot Build scalable and distributed data pipelines using Apache Spark Develop and maintain microservices-based architectures Work on cloud-native deployments, preferably on AWS (EC2, S3, EMR, Lambda, etc.) Optimize data processing systems for performance, scalability, and reliability Collaborate with data engineers, architects, and product managers to translate business requirements into technical solutions Ensure code quality through unit testing, integration testing, and code reviews Troubleshoot and resolve issues in production and non-production environments Required Skills and Experience: 5+ years of professional experience in software engineering Strong programming expertise in Core Java (18/21) Hands-on experience with Apache Spark and distributed data processing Proven experience with Spring Boot and RESTful API development Solid understanding of microservices architecture and patterns Proficiency in cloud platforms, especially AWS (preferred) Experience with SQL/NoSQL databases and data lake/storage systems Familiarity with CI/CD tools and containerization (Docker/Kubernetes is a plus) What We Offer: - We offer a market-leading salary along with a comprehensive benefits package to support your well-being. - Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal well-being. - We invest in your career through continuous learning and internal growth opportunities. - Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. - We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives. About the Company: https://predigle.com/ https://www.espergroup.com/ Predigle, an EsperGroup company, focuses on building disruptive technology platforms to transform daily business operations. Predigle has expanded rapidly to offer various products and services. Predigle Intelligence (Pi) is a comprehensive portable AI platform that offers a low-code/no-code AI design solution for solving business problems.

Posted 4 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

JOB DESCRIPTION: The Fanatical Support for AWS team provides industry leading Fanatical Support™ to Rackspace customers as part of a global team. Rackspace is hiring AWS Cloud Engineers to deliver Fanatical Support with Amazon Web Services. Fanatical Support for AWS includes a wide range of services and features to help customers make the most out of their chosen hosting strategy. Using your deep technical expertise, you will help customers optimize their workloads by providing application focused assistance to build, deploy, integrate, scale and heal using native AWS and 3rd party tool-chains and automation oriented agile principles. Through both hands-on and consultative approaches, you will be responsible for supporting customers with tasks including provisioning and modifying Cloud environments, performing upgrades, and addressing day-to-day customer deployment issues via phone and ticket. At Rackspace we pride ourselves on our ability to deliver a fanatical experience - this means our support team blends technical expertise and strong customer-oriented professional skills. Being successful in this role requires: Working knowledge of Amazon Web Services Products & Services, Relational and NoSQL Databases, Caching, Object and Block Storage, Scaling, Load Balancing, CDNs, Terraform, Networking, etc. Excellent working knowledge of Windows or Linux operating systems – experience of supporting and troubleshooting issues and performance Intermediate understanding of central networking concepts: VLANs, layer 2/3 routing, access lists & load balancing Good understanding of design of native Cloud applications, Cloud application design patterns and practices Hands-on knowledge using CloudFormation and/or Terraform JOB REQUIREMENTS: Key Accountabilities Build, operate and support AWS Cloud environments Assist customers in the configuration of backup, patching and monitoring of servers and services Build customer solutions, leveraging automation and delivery mechanisms for efficiency and scalability Respond to customer support requests via tickets and phone calls within response time SLAs Ticket Queue Management and Ticket triaging – escalating to senior engineers when required Troubleshoot performance degradation or loss of service as time critical incidents as needed Drive strong customer satisfaction (NPS) through Fanatical Support Ownership of issues, including collaboration with other teams and escalation Support the success and development of others in the team Key Performance Indicators: Customer Satisfaction scores - NPS Speed to online – meeting required delivery times Performance indicators – ticket queues, response times Quality indicators – peer review, customer feedback PERSON SPECIFICATION: Technical achiever with a strong work ethic, creative, collaborative, team player A strong background in AWS and/or demonstrable hosting-specific technical skills: Compute and Networking Storage and Content Delivery Database Administration and Security Deployment and Management Application Services Analytics Mobile Services CloudFormation/Terraform
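Much of the day-to-day work described (provisioning, modifying and triaging AWS environments) is typically scripted against the AWS SDK. A minimal boto3 sketch, assuming credentials come from the standard AWS environment/profile chain; the region and the fields reported are illustrative choices:

```python
import boto3

def running_instances(region: str = "us-east-1") -> list[dict]:
    """List running EC2 instances so an engineer can triage a ticket."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    return [
        {
            "id": inst["InstanceId"],
            "type": inst["InstanceType"],
            "az": inst["Placement"]["AvailabilityZone"],
        }
        for res in reservations
        for inst in res["Instances"]
    ]

if __name__ == "__main__":
    for item in running_instances():
        print(item)
```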

Posted 4 days ago

Apply

7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Title: Manager – Senior ML Engineer (Full Stack) About Firstsource Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes. Job Summary: The Manager – Senior ML Engineer (Full Stack) will be responsible for leading the development and integration of Generative AI (GenAI) technologies, writing code modules, and managing full-stack development projects. The ideal candidate will have a strong background in Python and a proven track record in machine learning and full-stack development. Required Skills Strong proficiency in Python programming. Experience with data analysis and visualization libraries like Pandas, NumPy, Matplotlib, and Seaborn. Proven experience in machine learning and AI development. Experience with Generative AI (GenAI) development and integration. Full-stack development experience, including front-end and back-end technologies. Proficiency in web development frameworks such as Django or Flask. Knowledge of machine learning frameworks such as TensorFlow, Keras, PyTorch, or Scikit-learn. Experience with RESTful APIs and web services integration. Familiarity with SQL and NoSQL databases, such as PostgreSQL, MySQL, MongoDB, or Redis. Experience with cloud platforms like AWS, Azure, or Google Cloud. Knowledge of DevOps practices and tools like Docker, Kubernetes, Jenkins, and Git. Proficiency in writing unit tests and using debugging tools. Effective communication and interpersonal skills. Ability to work in a fast-paced, dynamic environment. Knowledge of software development best practices and methodologies. Key Responsibilities Lead the development and integration of Generative AI (GenAI) technologies to enhance our product offerings. Write, review, and maintain code modules, ensuring high-quality and efficient code. Oversee full-stack development projects, ensuring seamless integration and optimal performance. Collaborate with cross-functional teams to define project requirements, scope, and deliverables. Manage and mentor a team of developers and engineers, providing guidance and support to achieve project goals. Stay updated with the latest industry trends and technologies to drive innovation within the team. Ensure compliance with best practices in software development, security, and data privacy. Troubleshoot and resolve technical issues in a timely manner. Qualifications Bachelor’s degree in Computer Science or an Engineering degree Minimum of 7 years of experience in machine learning engineering or a similar role. Demonstrated experience in managing technology projects from inception to completion.
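A minimal sketch of the train-and-serve loop this role combines: a scikit-learn model exposed through a small Flask API. The iris dataset and the `/predict` route are placeholders chosen for illustration, not Firstsource's actual stack:

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a toy model at startup; a real system would load a versioned artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

app = Flask(__name__)

@app.post("/predict")
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = model.predict([features])[0]
    return jsonify({"class": int(prediction)})

if __name__ == "__main__":
    app.run(port=8000)
```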

Posted 4 days ago

Apply

10.0 years

0 Lacs

India

On-site

Experience - 10+ Years Skills - Java + React + Docker + Kubernetes + RDBMS + NOSQL + Cloud (Azure/AWS) JD: We are seeking a highly skilled and experienced Tech Lead (Full-Stack) to join our dynamic and innovative team. As a Tech Lead (Full-Stack), you will be responsible for designing, developing, and implementing software solutions that enhance our products and services. Your expertise in front-end frameworks like Angular, API development using Java and Spring Boot, and experience with MongoDB will be critical to the success of our projects. Are you an experienced Full Stack Engineer with a strong background in front-end frameworks, Java development, and microservices architecture? Are you passionate about leading agile Scrum teams, mentoring junior developers, and fostering a collaborative work environment? If so, we have an exciting opportunity for you! As a Tech Lead (Full-Stack) at CNHi, you will play a pivotal role in designing, developing, and maintaining our innovative software applications. Leveraging your expertise in Angular for front-end development, Java and Spring Boot for building APIs and microservices, and MongoDB for data management, you will contribute to the success of our projects. Your leadership skills and experience in guiding agile Scrum teams will be instrumental in driving efficient development processes and delivering exceptional solutions. Responsibilities: Lead agile Scrum teams, facilitating effective sprint planning, daily stand-ups, and retrospectives to achieve project objectives. Mentor and coach junior team members, fostering their professional growth and nurturing a collaborative work culture. Serve as a Scrum Master or Agile Coach, promoting agile principles and practices to optimize team productivity. Collaborate effectively with global stakeholders, understanding their requirements and providing valuable technical insights. Design and develop robust and user-friendly web applications using Angular and other modern front-end frameworks. Create scalable and reliable APIs and microservices using Java and Spring Boot, ensuring high performance and quality. Utilize MongoDB for efficient data storage and management, adhering to best practices. Demonstrate proficiency in Kubernetes, Docker, and container orchestration, enabling seamless deployment and scalability. Conduct thorough code reviews, ensuring adherence to best practices, design patterns, and SOLID principles. Identify and address technical challenges, proposing innovative solutions to improve application performance and reliability. Work closely with DevOps and infrastructure teams to streamline the deployment and monitoring processes. Stay updated with industry trends and emerging technologies, bringing new ideas and best practices to the team. Requirements: BTech Computer Science, MCA, or equivalent qualification. Demonstrated ability to lead agile Scrum teams effectively and mentor junior developers. Experience functioning as a Scrum Master or Agile Coach is highly desirable. Proven experience in developing web applications using Angular or similar front-end frameworks. Extensive hands-on experience in Java development and building APIs and microservices with Spring Boot. Solid understanding of MongoDB and database design principles. Proficiency in Kubernetes, Docker, and container orchestration for application deployment.
Excellent communication and interpersonal skills to work collaboratively with global stakeholders. Proactive problem-solving skills and a passion for delivering high-quality software solutions. Ability to adapt quickly to changing requirements and prioritize tasks effectively. NOTE: Staffing & Recruitment Companies are advised not to contact us.

Posted 4 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role Grade Level (for internal use): 10 The Role: Foreseer AI – Senior Engineer The Team: The Foreseer team delivers digital transformation solutions at EDO for information extraction from structured and semi-structured documents and websites. Foreseer is a human-in-the-loop platform that combines the latest AI/ML advances with a state-of-the-art UI for delivering multiple projects, all powered by a core distributed, cloud-native, auto-scalable framework. The team comprises Java and Python experts and ML engineers. Responsibilities Include Support and foster a quality-first, agile culture that is built on partnership, trust and sharing Design, develop and maintain functionalities to create new solutions on the platform. Learn and understand all aspects of the framework and the project deliverables. Be technically deep and able to write high quality code using industry best practices. Be responsible for implementation of new features and iterations of your project. Implement security measures and compliance standards to protect sensitive data and ensure adherence to industry regulations. Ensure the use of standards, governance and best practices in the industry to deliver high quality scalable solutions. Strategic thinker and influencer with demonstrated technical and business acumen and problem-solving skills. Experience & Qualifications BS or MS degree in Computer Science or Information Technology or equivalent. 6+ years of hands-on experience with Java, J2EE and related frameworks and technologies (Spring, RESTful services, Spring Boot, Spring JPA, Spring Security, MVC, etc.). 2+ years of experience designing and building microservices-based distributed systems in serverless environments (container platforms). 2+ years of experience with ActiveMQ, distributed streaming platforms, or other JMS providers. Proficient with data structures and algorithms. Experience with different database technologies (RDBMS, NoSQL). Experience in containerization, container management platforms, cloud platforms, CI/CD and deployments through CI/CD pipelines, and AWS services like S3, EKS, EC2, etc. Proficiency in the development environment, including IDE, web & application server, Git, Azure DevOps, unit-testing tools and defect management tools Nice To Have Skills Distributed systems programming. AI/ML solutions architecture. Knowledge of GenAI platforms and tech stacks. Hands-on experience with Elastic/Redis search. Hands-on experience in Python. Hands-on experience with the Vaadin Framework. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. 
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312747 Posted On: 2025-07-31 Location: Gurgaon, Haryana, India

Posted 4 days ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Role Description This is a full-time, on-site role located in Kolkata for a Java Fullstack Developer with 8+ years of experience. The Java Fullstack Developer will be responsible for designing and implementing web applications using Java technologies, handling front-end and back-end development, collaborating with cross-functional teams, and maintaining existing codebases. The role also includes tasks such as writing unit tests, troubleshooting and debugging applications, and ensuring performance optimization of the applications. Qualifications Strong experience in Java, Spring, and Hibernate Proficiency in front-end technologies such as HTML, CSS, JavaScript, Angular, or React Experience with SQL and NoSQL databases, and ORM frameworks Familiarity with DevOps practices and tools such as Docker, Jenkins, and Kubernetes Knowledge and experience in cloud platforms such as AWS, Azure, or GCP Strong problem-solving skills and the ability to troubleshoot and debug applications Excellent communication and collaboration skills Bachelor's degree in Computer Science, Engineering, or related field Experience in the media industry is a plus

Posted 4 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title: Data Engineering Lead Overall Years of Experience: 8 to 10 years Relevant Years of Experience: 4+ Data Engineering Lead The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Essential Roles and Responsibilities Must Have Skills Azure Data Lake Azure Synapse Analytics Azure Data Factory Azure Databricks Python (PySpark, NumPy, etc.) SQL ETL Data warehousing Azure DevOps Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, Spark Streaming Experience integrating with business intelligence tools such as Power BI Good To Have Skills Big Data technologies (e.g., Hadoop, Spark) Data security General Skills Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables Escalates dependencies and risks Works with most DevOps tools, with limited supervision Completion of assigned tasks on time and regular status reporting Should be able to train new team members Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable. Should be able to work with multicultural global teams, collaborating virtually as needed Should be able to build strong relationships with project stakeholders EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
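The ingest-transform-store pattern this role centres on can be sketched in PySpark; the ABFSS paths and column names below are placeholders for illustration, not a real lake layout:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

# Land raw JSON events from the lake's landing zone.
raw = spark.read.json("abfss://landing@datalake.dfs.core.windows.net/events/")

# Deduplicate and derive a partition column from the event timestamp.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Parquet keeps the curated zone columnar and cheap to scan.
cleaned.write.mode("append").partitionBy("event_date").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/events/"
)
```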

Posted 5 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking highly skilled and motivated AI Engineers with strong Python experience and familiarity with prompt engineering and LLM integrations to join our Innovations Team. The team is responsible for exploring emerging technologies, building proof-of-concept (PoC) applications, and delivering cutting-edge AI/ML solutions that drive strategic transformation and operational efficiency. About the Role As a core member of the Innovations Team, you will work on AI-powered products, rapid prototyping, and intelligent automation initiatives across domains such as mortgage tech, document intelligence, and generative AI. Responsibilities Design, develop, and deploy scalable AI/ML solutions and prototypes. Build data pipelines, clean datasets, and engineer features for training models. Apply deep learning, NLP, and classical ML techniques. Integrate AI models into backend services using Python (e.g., FastAPI, Flask). Collaborate with cross-functional teams (e.g., UI/UX, DevOps, product managers). Evaluate and experiment with emerging open-source models (e.g., LLaMA, Mistral, GPT). Stay current with advancements in AI/ML and suggest opportunities for innovation. Qualifications Educational Qualification: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field. Certifications in AI/ML or cloud platforms (Azure ML, TensorFlow Developer, etc.) are a plus. Required Skills Technical Skills: Programming Languages: Python (strong proficiency), experience with NumPy, Pandas, Scikit-learn. AI/ML Frameworks: TensorFlow, PyTorch, HuggingFace Transformers, OpenCV (nice to have). NLP & LLMs: Experience with language models, embeddings, fine-tuning, and vector search. Prompt Engineering: Experience designing and optimizing prompts for LLMs (e.g., GPT, Claude, LLaMA) for various tasks such as summarization, Q&A, document extraction, and multi-agent orchestration. Backend Development: FastAPI or Flask for model deployment and REST APIs. Data Handling: Experience in data preprocessing, feature engineering, and handling large datasets. Version Control: Git and GitHub. Database Experience: SQL and NoSQL databases; vector DBs like FAISS, ChromaDB, or Qdrant preferred. Nice to Have (Optional): Experience with Docker, Kubernetes, or cloud environments (Azure, AWS). Familiarity with LangChain, LlamaIndex, or multi-agent frameworks (CrewAI, AutoGen). Soft Skills: Strong problem-solving and analytical thinking. Eagerness to experiment and explore new technologies. Excellent communication and teamwork skills. Ability to work independently in a fast-paced, dynamic environment. Innovation mindset with a focus on rapid prototyping and proof-of-concepts. Experience Level: 3–7 years; work from office only (Chennai location)
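A small sketch of the prompt-engineering pattern the posting highlights: a reusable summarization prompt plus a stand-in client hook. `call_llm` is a hypothetical placeholder for whichever model client (OpenAI, HuggingFace, a local LLaMA) the team wires in:

```python
SUMMARY_PROMPT = """You are a document analyst.
Summarise the document in at most {max_words} words.
Return only the summary, with no preamble.

Document:
{document}
"""

def build_prompt(document: str, max_words: int = 60) -> str:
    """Render the summarization prompt for a single document."""
    return SUMMARY_PROMPT.format(document=document, max_words=max_words)

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real client call (OpenAI, HF pipeline, etc.).
    raise NotImplementedError("wire up your model client here")

if __name__ == "__main__":
    print(build_prompt("Quarterly revenue rose 12% on higher mortgage volume."))
```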

Posted 5 days ago

Apply