5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Job Summary
Leads projects for the design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access and retention of data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management.
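The pipeline-with-quality-gates responsibility above can be illustrated with a minimal, pure-Python sketch. All names here are hypothetical; a production pipeline would use Spark or an ETL/ELT tool rather than plain lists:

```python
# Hypothetical sketch of an extract -> quality gate -> transform flow.
# Bad rows are quarantined instead of silently loaded, which is the
# "monitor and troubleshoot data quality" idea in miniature.

def extract(rows):
    # Source rows arrive as dicts from a relational or event source.
    return list(rows)

def quality_check(rows, required=("id", "amount")):
    """Split rows into (good, quarantined) based on required fields."""
    good, bad = [], []
    for r in rows:
        (good if all(r.get(k) is not None for k in required) else bad).append(r)
    return good, bad

def transform(rows):
    # Normalize amounts to integer cents to avoid float drift downstream.
    return [{**r, "amount_cents": round(r["amount"] * 100)} for r in rows]

raw = [{"id": 1, "amount": 9.99}, {"id": None, "amount": 5.0}]
good, quarantined = quality_check(extract(raw))
loaded = transform(good)
print(len(loaded), len(quarantined))  # 1 1
```

In a real deployment the quarantined list would feed the monitoring and alerting mechanism the listing describes.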
Ensures the timeliness and success of critical analytics initiatives by using agile development methods such as DevOps, Scrum and Kanban. Coaches and develops less experienced team members.

Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms the data for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required.
Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
5-8 years of experience
Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
Background in processing and managing large data sets
Design and development for a big data platform using open-source and third-party tools
Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
SQL query language
Clustered compute cloud-based implementation experience
Experience developing applications requiring large file movement for a cloud-based environment, and other data extraction tools and methods from a variety of sources
Experience in building analytical solutions

Intermediate experience in the following is preferred:
Experience with IoT technology
Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure Data Lake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake. 5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs). 6) Take part in evaluation of new data tools and POCs, and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417809
Relocation Package: Yes
Posted 1 day ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Job Summary
Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the business and IT teams to understand the requirements and best leverage the technologies to enable agile data delivery at scale.

Key Responsibilities
Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access and retention of data for internal and external users. Develops reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development methods, such as DevOps, Scrum, Kanban and continuous improvement cycles, for data-driven applications.
Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms the data for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
4-5 years of experience. Relevant experience preferred, such as temporary student employment, internships, co-ops, or other extracurricular team activities.
Knowledge of the latest technologies in data engineering is highly preferred and includes:
Exposure to big data open-source tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
SQL query language
Clustered compute cloud-based implementation experience
Familiarity developing applications requiring large file movement for a cloud-based environment
Exposure to Agile software development
Exposure to building analytical solutions
Exposure to IoT technology

Qualifications
1) Work closely with the business Product Owner to understand product vision. 2) Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure Data Lake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Work under limited supervision to design, develop, test and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake. 5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs), with guidance from senior data engineers. 6) Take part in evaluation of new data tools and POCs, with guidance from senior data engineers. 7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. 8) Assist in resolving issues that compromise data accuracy and usability.

Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Intermediate expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417808
Relocation Package: Yes
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Presidio, Where Teamwork and Innovation Shape the Future

At Presidio, we’re at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses—and their customers—to achieve more through innovation, automation, and intelligent insights.

The Role
Presidio is looking for an Architect to design and implement complex systems and software architectures across multiple platforms. The ideal candidate will have extensive experience in systems architecture, software engineering, cloud technologies, and team leadership. You will be responsible for translating business requirements into scalable, maintainable technical solutions and guiding development teams through implementation.

Responsibilities Include
Design, plan, and manage cloud architectures leveraging AWS, Azure, and GCP, ensuring alignment with business objectives and industry best practices. Evaluate and recommend appropriate cloud services and emerging technologies to enhance system performance, scalability, and security. Lead the development and integration of software solutions using a variety of programming languages (Java, .NET, Python, Golang, etc.). Develop and maintain automated solutions for cloud provisioning, governance, and lifecycle management, utilizing Infrastructure as Code (IaC) tools such as Terraform and Ansible. Collaborate with cross-functional teams to gather requirements, translate business needs into technical specifications, and deliver robust cloud-native solutions. Guide and mentor development teams, enforcing architectural standards, coding best practices, and technical excellence. Provide expert consultation to internal and external stakeholders, offering recommendations on cloud migration, modernization, and optimization strategies.
Ensure compliance with security, regulatory, and cost management policies across cloud environments. Stay current with industry trends, emerging technologies, and best practices, proactively introducing innovations to the organization.

Required Skills And Professional Experience
10+ years of experience in software architecture, including significant experience with cloud infrastructure and hyperscaler platforms (AWS, Azure, GCP). Deep expertise in at least one hyperscaler (AWS, Azure, or GCP), with working knowledge of the others. Strong programming skills in multiple languages (Java, C#, Node, JavaScript, .NET, Python, Golang, etc.). Experience with services/microservices development and relational databases (Postgres, MySQL, Oracle, etc.). Expertise in open-source technologies and NoSQL/RDBMS such as Couchbase, Elasticsearch, RabbitMQ, MongoDB, Cassandra, Redis, etc. Excellent verbal and written communication skills. Knowledge of project management tools and Agile methodologies. Certification in AWS or Azure is preferred.

Your future at Presidio
Joining Presidio means stepping into a culture of trailblazers—thinkers, builders, and collaborators—who push the boundaries of what’s possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of Generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you’ll be part of a team that is shaping the future. Ready to innovate? Let’s redefine what’s next—together.

About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing.
Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio’s expert technical team develops custom applications, provides managed services, enables actionable data insights and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.

Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component of our business success.

Recruitment Agencies, Please Note: Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address, to Presidio employees, or by any other means. Presidio is not responsible for any fees related to unsolicited resumes/CVs.
Posted 1 day ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Software Engineer – Fresher

Job Location: Bangalore
Experience Required: 0–1 years

About the Role:
We are looking for enthusiastic and talented freshers to join our cnMaestro team. As a fresher, you will receive structured knowledge transfer and hands-on experience with cnMaestro modules/features, and work with modern technologies to build scalable and robust software systems.

Key Responsibilities:
Learn and contribute to the development and maintenance of cnMaestro applications. Work closely with senior developers and product teams to understand requirements. Write clean, efficient, and well-documented code with unit tests. Participate in code reviews, testing, and bug fixing. Stay updated with the latest programming trends and technologies.

Qualifications:
Bachelor’s degree in Computer Science, IT, Electronics, or related fields (BE/B.Tech/MCA). Strong understanding of at least one programming language: Python, NodeJS, Golang. Knowledge of Data Structures, Algorithms, and OOP concepts. Good problem-solving and analytical skills. Good communication and team collaboration skills. Project experience in software development.

Good to Have (Optional):
Knowledge of AWS and AI/ML agentic frameworks. Exposure to web technologies (HTML/CSS/JavaScript/Angular). Familiarity with databases (SQL/NoSQL/Cache DB/Analytics DB/Vector DB).
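As one concrete example of the data-structures-and-algorithms fundamentals such roles screen for, here is a classic iterative binary search (an illustrative sketch, not part of the listing):

```python
# Iterative binary search over a sorted list: O(log n) comparisons.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # index of the match
        if items[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1                   # not found

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```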
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
SDE‑2 Backend Engineer — OpsLyft

Location: Noida, India (On-site, full-time, 5 days/week)
Experience Required: 4+ years in backend development using Python and/or Go

About OpsLyft
OpsLyft builds cloud-native infrastructure tools to help engineering teams manage cloud systems at scale—leveraging AWS, Kubernetes, microservices, and real-time data systems.

Role Overview
You will design and deliver backend services and APIs, work with PostgreSQL, MongoDB, and streaming systems, and collaborate with product, frontend, DevOps, and data engineering teams. Your work will shape the architecture and technical direction of critical infrastructure systems.

Responsibilities
Build and maintain backend services in Python or Go. Architect and deploy AWS-based microservices. Manage relational and NoSQL databases. Contribute to data pipelines and event-driven architecture. Participate in code reviews and mentor junior engineers. Engage in system and architecture design.

What We’re Looking For
4+ years of backend engineering experience. Strong expertise in Python and/or Golang. Practical experience with AWS and container orchestration (Docker, Kubernetes). Proficiency with PostgreSQL and MongoDB. Ability to write clean, scalable, and maintainable code. Preferred: hands-on experience with streaming systems, Terraform, CI/CD pipelines, or observability tooling.

Application Instructions
Send your resume and a brief note about why you’re interested to hr@opslyft.com. The process usually includes a technical screen, a coding/design exercise, and a final conversation with leadership. Candidates typically hear back within 1–2 weeks.
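The event-driven architecture mentioned in the responsibilities can be sketched with a toy in-process publish/subscribe bus. All names here are hypothetical; a real system would sit on a broker such as Kafka or SQS rather than an in-memory dict:

```python
# Toy in-process event bus: producers publish to a topic, subscribers
# registered on that topic receive each payload. Illustrative only.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("cost.alert", seen.append)  # hypothetical topic name
bus.publish("cost.alert", {"service": "ec2", "usd": 1200})
print(seen)  # [{'service': 'ec2', 'usd': 1200}]
```

The decoupling shown here (publishers never reference subscribers) is what lets event-driven services scale and evolve independently.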
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Key Responsibilities:
Design and develop high-performance backend services using Java (18/21) and Spring Boot. Build scalable, distributed data pipelines using Apache Spark. Develop and maintain microservices-based architectures. Work on cloud-native deployments, preferably on AWS (EC2, S3, EMR, Lambda, etc.). Optimize data processing systems for performance, scalability, and reliability. Collaborate with data engineers, architects, and product managers to translate business requirements into technical solutions. Ensure code quality through unit testing, integration testing, and code reviews. Troubleshoot and resolve issues in production and non-production environments.

Required Skills and Experience:
5+ years of professional experience in software engineering. Strong programming expertise in Core Java (18/21). Hands-on experience with Apache Spark and distributed data processing. Proven experience with Spring Boot and RESTful API development. Solid understanding of microservices architecture and patterns. Proficiency in cloud platforms, especially AWS (preferred). Experience with SQL/NoSQL databases and data lake/storage systems. Familiarity with CI/CD tools and containerization (Docker/Kubernetes is a plus).

What We Offer:
- We offer a market-leading salary along with a comprehensive benefits package to support your well-being.
- Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal well-being.
- We invest in your career through continuous learning and internal growth opportunities.
- Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded.
- We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives.

About the Company:
https://predigle.com/
https://www.espergroup.com/
Predigle, an EsperGroup company, focuses on building disruptive technology platforms to transform daily business operations.
Predigle has expanded rapidly to offer various products and services. Predigle Intelligence (Pi) is a comprehensive portable AI platform that offers a low-code/no-code AI design solution for solving business problems.
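The Apache Spark work described in the responsibilities above follows a map/reduce shape: each partition is processed independently, then partial results are combined. A toy pure-Python stand-in (hypothetical data; real jobs would use the Spark RDD/DataFrame APIs) looks like:

```python
# Map/reduce over partitions, in plain Python for illustration.
# Each partition is mapped to a partial Counter, then reduced to totals.
from collections import Counter
from functools import reduce

# Hypothetical log lines, pre-split into three "partitions".
partitions = [["error warn error"], ["warn info"], ["error"]]

def map_partition(lines):
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

mapped = [map_partition(p) for p in partitions]           # map side
totals = reduce(lambda a, b: a + b, mapped, Counter())    # reduce side
print(totals["error"], totals["warn"])  # 3 2
```

In Spark the map side runs on executors in parallel and the reduce side is a shuffle/aggregation, but the algebra is the same.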
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
JOB DESCRIPTION:
The Fanatical Support for AWS team provides industry-leading Fanatical Support™ to Rackspace customers as part of a global team. Rackspace is hiring AWS Cloud Engineers to deliver Fanatical Support with Amazon Web Services. Fanatical Support for AWS includes a wide range of services and features to help customers make the most of their chosen hosting strategy. Using your deep technical expertise, you will help customers optimize their workloads by providing application-focused assistance to build, deploy, integrate, scale and heal using native AWS and 3rd-party tool-chains and automation-oriented agile principles. Through both hands-on and consultative approaches, you will be responsible for supporting customers with tasks including provisioning and modifying cloud environments, performing upgrades, and addressing day-to-day customer deployment issues via phone and ticket. At Rackspace we pride ourselves on our ability to deliver a fanatical experience - this means our support team blends technical expertise with strong customer-oriented professional skills.
Being successful in this role requires:
Working knowledge of Amazon Web Services products and services: relational and NoSQL databases, caching, object and block storage, scaling, load balancing, CDNs, Terraform, networking, etc. Excellent working knowledge of Windows or Linux operating systems - experience supporting and troubleshooting issues and performance. Intermediate understanding of central networking concepts: VLANs, layer 2/3 routing, access lists and load balancing. Good understanding of the design of native cloud applications, cloud application design patterns and practices. Hands-on knowledge using CloudFormation and/or Terraform.

JOB REQUIREMENTS:
Key Accountabilities
Build, operate and support AWS cloud environments. Assist customers in the configuration of backup, patching and monitoring of servers and services. Build customer solutions, leveraging automation and delivery mechanisms for efficiency and scalability. Respond to customer support requests via tickets and phone calls within response-time SLAs. Ticket queue management and ticket triaging - escalating to senior engineers when required. Troubleshoot performance degradation or loss of service as time-critical incidents as needed. Drive strong customer satisfaction (NPS) through Fanatical Support. Ownership of issues, including collaboration with other teams and escalation. Support the success and development of others in the team.

Key Performance Indicators:
Customer satisfaction scores - NPS. Speed to online - meeting required delivery times. Performance indicators - ticket queues, response times. Quality indicators - peer review, customer feedback.

PERSON SPECIFICATION:
Technical achiever with a strong work ethic; creative, collaborative, a team player. A strong background in AWS and/or demonstrable hosting-specific technical skills: compute and networking, storage and content delivery, database administration and security, deployment and management, application services, analytics, mobile services, CloudFormation/Terraform.
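One recurring pattern when automating against cloud APIs of the kind this role works with is retry with exponential backoff, since AWS services throttle under load. A stdlib-only sketch with hypothetical names (not an AWS SDK call):

```python
# Retry a flaky call with exponential backoff: wait base, 2*base,
# 4*base, ... between attempts, re-raising after the last one.
import time

def with_backoff(call, attempts=4, base=0.01):
    for i in range(attempts):
        try:
            return call()
        except RuntimeError:
            if i == attempts - 1:
                raise                      # out of attempts
            time.sleep(base * (2 ** i))    # 0.01s, 0.02s, 0.04s, ...

# Simulated cloud call that fails twice, then succeeds.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(with_backoff(flaky))  # ok
```

Real AWS SDKs build this in (with jitter added so retrying clients don't synchronize), but the shape is the same.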
Posted 1 day ago
7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Manager – Senior ML Engineer (Full Stack)

About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Summary:
The Manager – Senior ML Engineer (Full Stack) will be responsible for leading the development and integration of Generative AI (GenAI) technologies, writing code modules, and managing full-stack development projects. The ideal candidate will have a strong background in Python and a proven track record in machine learning and full-stack development.

Required Skills
Strong proficiency in Python programming. Experience with data analysis and visualization libraries like Pandas, NumPy, Matplotlib, and Seaborn. Proven experience in machine learning and AI development. Experience with Generative AI (GenAI) development and integration. Full-stack development experience, including front-end and back-end technologies. Proficiency in web development frameworks such as Django or Flask. Knowledge of machine learning frameworks such as TensorFlow, Keras, PyTorch, or Scikit-learn. Experience with RESTful APIs and web services integration. Familiarity with SQL and NoSQL databases, such as PostgreSQL, MySQL, MongoDB, or Redis. Experience with cloud platforms like AWS, Azure, or Google Cloud.
Knowledge of DevOps practices and tools like Docker, Kubernetes, Jenkins, and Git. Proficiency in writing unit tests and using debugging tools. Effective communication and interpersonal skills. Ability to work in a fast-paced, dynamic environment. Knowledge of software development best practices and methodologies.

Key Responsibilities
Lead the development and integration of Generative AI (GenAI) technologies to enhance our product offerings. Write, review, and maintain code modules, ensuring high-quality and efficient code. Oversee full-stack development projects, ensuring seamless integration and optimal performance. Collaborate with cross-functional teams to define project requirements, scope, and deliverables. Manage and mentor a team of developers and engineers, providing guidance and support to achieve project goals. Stay updated with the latest industry trends and technologies to drive innovation within the team. Ensure compliance with best practices in software development, security, and data privacy. Troubleshoot and resolve technical issues in a timely manner.

Qualifications
Bachelor’s degree in computer science or an engineering degree. Minimum of 7 years of experience in machine learning engineering or a similar role. Demonstrated experience in managing technology projects from inception to completion.
Posted 1 day ago
10.0 years
0 Lacs
India
On-site
Experience: 10+ years
Skills: Java, React, Docker, Kubernetes, RDBMS, NoSQL, Cloud (Azure/AWS)

JD: We are seeking a highly skilled and experienced Tech Lead (Full-Stack) to join our dynamic and innovative team. As a Tech Lead (Full-Stack), you will be responsible for designing, developing, and implementing software solutions that enhance our products and services. Your expertise in front-end frameworks like Angular, API development using Java and Spring Boot, and experience with MongoDB will be critical to the success of our projects.

Are you an experienced Full Stack Engineer with a strong background in front-end frameworks, Java development, and microservices architecture? Are you passionate about leading agile Scrum teams, mentoring junior developers, and fostering a collaborative work environment? If so, we have an exciting opportunity for you!

As a Tech Lead (Full-Stack) at CNHi, you will play a pivotal role in designing, developing, and maintaining our innovative software applications. Leveraging your expertise in Angular for front-end development, Java and Spring Boot for building APIs and microservices, and MongoDB for data management, you will contribute to the success of our projects. Your leadership skills and experience in guiding agile Scrum teams will be instrumental in driving efficient development processes and delivering exceptional solutions.

Responsibilities:
Lead agile Scrum teams, facilitating effective sprint planning, daily stand-ups, and retrospectives to achieve project objectives.
Mentor and coach junior team members, fostering their professional growth and nurturing a collaborative work culture.
Serve as a Scrum Master or Agile Coach, promoting agile principles and practices to optimize team productivity.
Collaborate effectively with global stakeholders, understanding their requirements and providing valuable technical insights.
Design and develop robust and user-friendly web applications using Angular and other modern front-end frameworks.
Create scalable and reliable APIs and microservices using Java and Spring Boot, ensuring high performance and quality.
Utilize MongoDB for efficient data storage and management, adhering to best practices.
Demonstrate proficiency in Kubernetes, Docker, and container orchestration, enabling seamless deployment and scalability.
Conduct thorough code reviews, ensuring adherence to best practices, design patterns, and SOLID principles.
Identify and address technical challenges, proposing innovative solutions to improve application performance and reliability.
Work closely with DevOps and infrastructure teams to streamline the deployment and monitoring processes.
Stay updated with industry trends and emerging technologies, bringing new ideas and best practices to the team.

Requirements:
BTech in Computer Science, MCA, or equivalent qualification.
Demonstrated ability to lead agile Scrum teams effectively and mentor junior developers.
Experience functioning as a Scrum Master or Agile Coach is highly desirable.
Proven experience in developing web applications using Angular or similar front-end frameworks.
Extensive hands-on experience in Java development and building APIs and microservices with Spring Boot.
Solid understanding of MongoDB and database design principles.
Proficiency in Kubernetes, Docker, and container orchestration for application deployment.
Strong knowledge of code reviews, design patterns, and SOLID principles.
Excellent communication and interpersonal skills to work collaboratively with global stakeholders.
Proactive problem-solving skills and a passion for delivering high-quality software solutions.
Ability to adapt quickly to changing requirements and prioritize tasks effectively.

NOTE: Staffing & recruitment companies are advised not to contact us.
Posted 1 day ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 10

The Role: Foreseer AI – Senior Engineer

The Team: The Foreseer team delivers digital transformation solutions at EDO for information extraction from structured and semi-structured documents and websites. Foreseer is a human-in-the-loop platform that combines the latest AI/ML advances with a state-of-the-art UI for delivering multiple projects, all powered by a core distributed, cloud-native, auto-scalable framework. The team comprises experts in the Java and Python languages and ML engineers.

Responsibilities Include
Support and foster a quality-first, agile culture that is built on partnership, trust and sharing.
Design, develop and maintain functionalities to create new solutions on the platform.
Learn and understand all aspects of the framework and the project deliverables.
Be technically deep and able to write high-quality code using industry best practices.
Be responsible for implementation of new features and iterations of your project.
Implement security measures and compliance standards to protect sensitive data and ensure adherence to industry regulations.
Ensure the use of standards, governance and best practices in the industry to deliver high-quality, scalable solutions.
Be a strategic thinker and influencer with demonstrated technical and business acumen and problem-solving skills.

Experience & Qualifications
BS or MS degree in Computer Science or Information Technology or equivalent.
6+ years of hands-on experience with Java, J2EE and related frameworks and technologies (Spring, RESTful services, Spring Boot, Spring JPA, Spring Security, MVC, etc.).
2+ years of experience designing and building microservices-based distributed systems in serverless environments (container platforms).
2+ years of experience with ActiveMQ, distributed streaming platforms or other related JMS providers.
Proficient with data structures and algorithms.
Experience with different database technologies (e.g., RDBMS, NoSQL).
Experience with containerization and container management platforms.
Experience with cloud platforms, CI/CD, deployments through CI/CD pipelines, and AWS services such as S3, EKS and EC2.
Proficiency in the development environment, including IDE, web & application server, Git, Azure DevOps, unit-testing tools and defect management tools.

Nice To Have Skills
Distributed systems programming.
AI/ML solutions architecture.
Knowledge of GenAI platforms and tech stacks.
Hands-on experience with Elastic/Redis search.
Hands-on experience in Python.
Hands-on experience with the Vaadin Framework.

What’s In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind.
Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent.
By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 312747
Posted On: 2025-07-31
Location: Gurgaon, Haryana, India
Posted 1 day ago
8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role Description
This is a full-time, on-site role located in Kolkata for a Java Fullstack Developer with 8+ years of experience. The Java Fullstack Developer will be responsible for designing and implementing web applications using Java technologies, front-end and back-end development, collaborating with cross-functional teams, and maintaining existing codebases. The role also includes tasks such as writing unit tests, troubleshooting and debugging applications, and ensuring performance optimization of the applications.

Qualifications
Strong experience in Java, Spring, and Hibernate
Proficiency in front-end technologies such as HTML, CSS, JavaScript, Angular, or React
Experience with database technologies such as SQL, NoSQL, and ORM
Familiarity with DevOps practices and tools such as Docker, Jenkins, and Kubernetes
Knowledge and experience in cloud platforms such as AWS, Azure, or GCP
Strong problem-solving skills and the ability to troubleshoot and debug applications
Excellent communication and collaboration skills
Bachelor's degree in Computer Science, Engineering, or related field
Experience in the media industry is a plus
Posted 1 day ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary
Design and implement scalable data lake architectures using Azure Data Lake services.
Develop and maintain data pipelines to ingest data from various sources.
Optimize data storage and retrieval processes for efficiency and performance.
Ensure data security and compliance with industry standards.
Collaborate with data scientists and analysts to facilitate data accessibility.
Monitor and troubleshoot data pipeline issues to ensure reliability.
Document data lake designs, processes, and best practices.
Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
Azure Data Lake
Azure Synapse Analytics
Azure Data Factory
Azure Databricks
Python (PySpark, NumPy, etc.)
SQL
ETL
Data warehousing
Azure DevOps
Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics and Spark Streaming
Experience in integration with business intelligence tools such as Power BI

Good To Have Skills
Big Data technologies (e.g., Hadoop, Spark)
Data security

General Skills
Experience with Agile and DevOps methodologies and the software development lifecycle.
Proactive and responsible for deliverables
Escalates dependencies and risks
Works with most DevOps tools with limited supervision
Completes assigned tasks on time and provides regular status reporting
Should be able to train new team members
Desirable: knowledge of cloud solutions such as Azure or AWS, with DevOps/Cloud certifications
Should be able to work with multicultural, global teams, and to work virtually
Should be able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
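Several of the listings above ask for hands-on ETL pipeline development. As a rough illustration of the extract-transform-load pattern they describe — a minimal sketch using only the Python standard library; the input data, field names, and cleaning rules are hypothetical examples, not any employer's specification:

```python
import csv
import io

# Hypothetical raw input standing in for an ingested source file.
RAW = """order_id,amount,country
1001,250.00,IN
1002,,US
1003,99.50,in
"""

def extract(text):
    """Extract: parse raw CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(records):
    """Transform: drop rows with missing amounts, normalize country codes."""
    cleaned = []
    for row in records:
        if not row["amount"]:
            continue  # illustrative data-quality rule: skip incomplete rows
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "country": row["country"].upper(),
        })
    return cleaned

def load(records):
    """Load: here just a summary dict; a real pipeline would write
    to a warehouse or data lake instead."""
    return {"rows": len(records), "total": sum(r["amount"] for r in records)}

summary = load(transform(extract(RAW)))
print(summary)
```

A production pipeline would add the monitoring and alerting the postings mention (row counts, rejected-record logs) around each stage.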
Posted 1 day ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking highly skilled and motivated AI Engineers with strong Python experience, familiar with prompt engineering and LLM integrations, to join our Innovations Team. The team is responsible for exploring emerging technologies, building proof-of-concept (PoC) applications, and delivering cutting-edge AI/ML solutions that drive strategic transformation and operational efficiency.

About the Role
As a core member of the Innovations Team, you will work on AI-powered products, rapid prototyping, and intelligent automation initiatives across domains such as mortgage tech, document intelligence, and generative AI.

Responsibilities
Design, develop, and deploy scalable AI/ML solutions and prototypes.
Build data pipelines, clean datasets, and engineer features for training models.
Apply deep learning, NLP, and classical ML techniques.
Integrate AI models into backend services using Python (e.g., FastAPI, Flask).
Collaborate with cross-functional teams (e.g., UI/UX, DevOps, product managers).
Evaluate and experiment with emerging open-source models (e.g., LLaMA, Mistral, GPT).
Stay current with advancements in AI/ML and suggest opportunities for innovation.

Qualifications
Educational Qualification: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field. Certifications in AI/ML or cloud platforms (Azure ML, TensorFlow Developer, etc.) are a plus.

Required Skills
Programming Languages: Python (strong proficiency), experience with NumPy, Pandas, Scikit-learn.
AI/ML Frameworks: TensorFlow, PyTorch, HuggingFace Transformers, OpenCV (nice to have).
NLP & LLMs: Experience with language models, embeddings, fine-tuning, and vector search.
Prompt Engineering: Experience designing and optimizing prompts for LLMs (e.g., GPT, Claude, LLaMA) for tasks such as summarization, Q&A, document extraction, and multi-agent orchestration.
Backend Development: FastAPI or Flask for model deployment and REST APIs.
Data Handling: Experience in data preprocessing, feature engineering, and handling large datasets.
Version Control: Git and GitHub.
Database Experience: SQL and NoSQL databases; vector DBs like FAISS, ChromaDB, or Qdrant preferred.

Nice to Have (Optional)
Experience with Docker, Kubernetes, or cloud environments (Azure, AWS).
Familiarity with LangChain, LlamaIndex, or multi-agent frameworks (CrewAI, AutoGen).

Soft Skills
Strong problem-solving and analytical thinking.
Eagerness to experiment and explore new technologies.
Excellent communication and teamwork skills.
Ability to work independently in a fast-paced, dynamic environment.
Innovation mindset with a focus on rapid prototyping and proof-of-concepts.

Experience Level: 3–7 years; work from office only (Chennai location)
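The prompt-engineering skill described above — designing reusable prompts for tasks like document extraction — can be sketched with nothing more than a template. This is a hedged, stdlib-only illustration: the template wording and field names are hypothetical, and no particular model API is assumed.

```python
from string import Template

# Hypothetical extraction prompt template; a real system would tune this
# wording against a specific LLM and send the result via that model's client.
EXTRACTION_PROMPT = Template(
    "You are a document-extraction assistant.\n"
    "From the text below, return JSON with exactly these keys: $fields.\n"
    "If a value is absent, use null. Do not invent values.\n\n"
    "Text:\n$document"
)

def build_prompt(document: str, fields: list[str]) -> str:
    """Fill the template with the document and the requested output schema."""
    return EXTRACTION_PROMPT.substitute(
        fields=", ".join(fields), document=document
    )

prompt = build_prompt("Loan #42 approved on 2024-05-01.", ["loan_id", "date"])
print(prompt)
```

Constraining the output schema and forbidding invented values, as in the template, is a common way to make extraction prompts easier to validate downstream.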
Posted 1 day ago
0 years
0 Lacs
India
On-site
The ideal candidate will be responsible for developing high-quality applications and for designing and implementing testable and scalable code.

Responsibilities:
Lead backend Python development for innovative healthcare technology solutions
Oversee a backend team to achieve product and platform goals in the B2B HealthTech domain
Design and implement scalable backend infrastructures with seamless API integration
Ensure availability on immediate or short notice for efficient onboarding and project ramp-up
Optimize existing backend systems based on real-time healthcare data requirements
Collaborate with cross-functional teams to ensure alignment between tech and business goals
Review and refine code for quality, scalability, and performance improvements

Ideal Candidate:
Experienced in building B2B software products using agile methodologies
Strong proficiency in Python, with a deep understanding of backend system architecture
Comfortable with fast-paced environments and quick onboarding cycles
Strong communicator who fosters a culture of innovation, ownership, and collaboration
Passionate about driving real-world healthcare impact through technology

Skills Required:
Primary: TypeScript, AWS, Python, RESTful APIs, Backend Architecture
Additional: SQL/NoSQL databases, Docker/Kubernetes (preferred)

Strongly Good to Have:
Prior experience in Data Engineering, especially in healthcare or real-time analytics
Familiarity with ETL pipelines, data lake/warehouse solutions, and stream processing frameworks (e.g., Apache Kafka, Spark, Airflow)
Understanding of data privacy, compliance (e.g., HIPAA), and secure data handling practices

Hiring Process
Profile Shortlisting
Tech Interview
Tech Interview
Culture Fit
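The RESTful-API work this role describes boils down to mapping HTTP method + path patterns onto handler functions. A minimal sketch of that dispatch idea, stdlib-only and framework-free; the endpoint path, patient store, and handler are hypothetical examples, not a real healthcare API:

```python
import json
import re

# Hypothetical in-memory store standing in for a database.
PATIENTS = {"p1": {"id": "p1", "name": "A. Sharma"}}
ROUTES = []

def route(method, pattern):
    """Register a handler for a method + path regex, as a framework would."""
    def register(fn):
        ROUTES.append((method, re.compile(f"^{pattern}$"), fn))
        return fn
    return register

@route("GET", r"/patients/(?P<pid>\w+)")
def get_patient(pid):
    record = PATIENTS.get(pid)
    return (200, record) if record else (404, {"error": "not found"})

def dispatch(method, path):
    """Match an incoming request against the registered routes."""
    for m, pat, fn in ROUTES:
        match = pat.match(path)
        if m == method and match:
            return fn(**match.groupdict())
    return 404, {"error": "no route"}

status, body = dispatch("GET", "/patients/p1")
print(status, json.dumps(body))
```

Real frameworks (Flask, FastAPI) implement exactly this registration-and-dispatch pattern, plus serialization, validation, and middleware on top.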
Posted 1 day ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary
Design and implement scalable data lake architectures using Azure Data Lake services.
Develop and maintain data pipelines to ingest data from various sources.
Optimize data storage and retrieval processes for efficiency and performance.
Ensure data security and compliance with industry standards.
Collaborate with data scientists and analysts to facilitate data accessibility.
Monitor and troubleshoot data pipeline issues to ensure reliability.
Document data lake designs, processes, and best practices.
Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
Azure Data Lake
Azure Synapse Analytics
Azure Data Factory
Azure Databricks
Python (PySpark, NumPy, etc.)
SQL
ETL
Data warehousing
Azure DevOps
Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics and Spark Streaming
Experience in integration with business intelligence tools such as Power BI

Good To Have Skills
Big Data technologies (e.g., Hadoop, Spark)
Data security

General Skills
Experience with Agile and DevOps methodologies and the software development lifecycle.
Proactive and responsible for deliverables
Escalates dependencies and risks
Works with most DevOps tools with limited supervision
Completes assigned tasks on time and provides regular status reporting
Should be able to train new team members
Desirable: knowledge of cloud solutions such as Azure or AWS, with DevOps/Cloud certifications
Should be able to work with multicultural, global teams, and to work virtually
Should be able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary
Design and implement scalable data lake architectures using Azure Data Lake services.
Develop and maintain data pipelines to ingest data from various sources.
Optimize data storage and retrieval processes for efficiency and performance.
Ensure data security and compliance with industry standards.
Collaborate with data scientists and analysts to facilitate data accessibility.
Monitor and troubleshoot data pipeline issues to ensure reliability.
Document data lake designs, processes, and best practices.
Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
Azure Data Lake
Azure Synapse Analytics
Azure Data Factory
Azure Databricks
Python (PySpark, NumPy, etc.)
SQL
ETL
Data warehousing
Azure DevOps
Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics and Spark Streaming
Experience in integration with business intelligence tools such as Power BI

Good To Have Skills
Big Data technologies (e.g., Hadoop, Spark)
Data security

General Skills
Experience with Agile and DevOps methodologies and the software development lifecycle.
Proactive and responsible for deliverables
Escalates dependencies and risks
Works with most DevOps tools with limited supervision
Completes assigned tasks on time and provides regular status reporting
Should be able to train new team members
Desirable: knowledge of cloud solutions such as Azure or AWS, with DevOps/Cloud certifications
Should be able to work with multicultural, global teams, and to work virtually
Should be able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Description
We are seeking a Senior Frontend Developer to help us develop and maintain outstanding web products. You will join the Dev & API team in the international Digital Customer Experience organization, which includes over 140 team members from around the world. We develop, support, and lead the digital customer experience at Danfoss. We combine business needs with IT to create digital solutions. Want to learn more about the Danfoss Digital Customer Experience Program (DCE)? Visit us here.

Job Responsibilities
Developing new web applications with React JS and TypeScript.
Supporting existing web products.
Ensuring that Danfoss web products deliver excellent performance, accessibility, and usability.
Diagnosing problems, inefficiencies, and weaknesses in solutions and working to improve them.
Writing high-quality code with the team, focusing on collaboration through pair programming, code reviews, and comprehensive documentation.

Why You’ll Love This Role
Impactful Projects: Avoid monotony! Contribute to three projects that are trailblazers at Danfoss with regard to the MACH principles:
Powersource.danfoss.com
Designcenter.danfoss.com
Semikron-danfoss.com
plus several legacy solutions. The three solutions above are built on a shared configurable shell application and use different configurable components on top to provide features specific to each solution.
Modern Tech Stack: Danfoss commits to continuous improvement. We adopt the latest technologies (N-1 approach) to ensure security, scalability, and performance.
Collaborative Culture: Work alongside dedicated developers in a global team that values innovation and knowledge-sharing.

Background & Skills
Proficiency in English, both verbal and written, is essential, with a minimum level of B2.
Over 3 years of experience required.
In-depth knowledge of modern TypeScript, JavaScript, CSS/SCSS, and HTML.
Strong understanding of React.js.
Experience with React.js ecosystem frameworks like Next.js and libraries such as Redux.
Familiarity with testing and test automation, including unit, integration, and end-to-end testing.
Knowledge of frontend tooling like Webpack or Vite.
Experience integrating and developing services using OpenAPI, REST, and GraphQL.
Familiarity with DevOps and CI/CD pipelines, including automated deployments to PaaS and IaaS infrastructure.
Knowledge of Web Components, Playwright, Node.js, C#, Kubernetes, SQL, and NoSQL databases is a plus.

At Danfoss, we believe that a diverse and inclusive workplace fosters creativity, innovation, and a broader perspective in decision-making. When you consider this job posting, do you feel like your profile is not a perfect match? Numerous studies have found that women and people of color are more likely to apply only when they meet all requirements listed in the job posting. Even if you do not check all the boxes, we encourage you to apply anyway. We are curious to find out how you can bring new insights to the role or to Danfoss as an organization.

Employee Benefits
We are excited to offer you the following benefits with your employment:
Bonus system
Paid vacation
Flexible working hours
Possibility to work remotely
Pension plan
Personal insurance
Communication package
Opportunity to join Employee Resource Groups
State-of-the-art virtual work environment
Employee Referral Program
This list does not promise or guarantee any particular benefit or specific action. Benefits may depend on country or contract specifics and are subject to change at any time without prior notice.

Danfoss – Engineering Tomorrow
At Danfoss, we are engineering solutions that allow the world to use resources in smarter ways - driving the sustainable transformation of tomorrow. No transformation has ever been started without a group of passionate, dedicated and empowered people.
We believe that innovation and great results are driven by the right mix of people with diverse backgrounds, personalities, skills, and perspectives, reflecting the world in which we do business. To make sure the mix of people works, we strive to create an inclusive work environment where people of all backgrounds are treated equally, respected, and valued for who they are. It is a strong priority within Danfoss to improve the health, working environment and safety of our employees. Following our founder’s mindset “action speaks louder than words”, we set ourselves ambitious targets to protect the environment by embarking on a plan to become CO2-neutral by 2030 at the latest. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or other protected category.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Python + AWS/DataBricks Developer
📍 Hyderabad (Work from Office)
📅 5+ years experience | Immediate joiners preferred

🔹 Must-have Skills:
Expert Python programming (3.7+)
Strong AWS (EC2, S3, Lambda, Glue, CloudFormation)
DataBricks platform experience
ETL pipeline development
SQL/NoSQL databases
PySpark/Pandas proficiency

🔹 Good-to-have:
AWS certifications
Terraform knowledge
Airflow experience

Interested candidates can share profiles to shruti.pandey@codeethics.in. Please mention the position you're applying for!

#Hiring #ReactJS #Python #AWS #DataBricks #HyderabadJobs #TechHiring #WFO
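The AWS Lambda skill in the list above centers on one convention: a handler function that receives an event dict and returns a JSON-serializable response. A minimal sketch under that convention — the bucket key and event contents are hypothetical, and boto3 calls are deliberately omitted so the sketch stays self-contained:

```python
import json

def handler(event, context=None):
    """Entry point in the Lambda handler convention: take an event dict,
    return a JSON-serializable response."""
    # S3 notification events carry a Records list; each record names the
    # object that triggered the function.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # A real function would fetch each object (e.g., with boto3) and
    # run the ETL transform here.
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

# Hypothetical S3 put-event payload for local exercise of the handler.
event = {"Records": [{"s3": {"object": {"key": "raw/orders.csv"}}}]}
print(handler(event))
```

Keeping the transform logic in plain functions called from the handler makes the same code reusable in a Databricks notebook or a Glue job.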
Posted 1 day ago
0.0 - 3.0 years
0 - 0 Lacs
Dum Dum, Kolkata, West Bengal
On-site
Job Title: Web Developer (Laravel/Python/React)
Job Type: Full-Time
Location: Motijheel, Dumdum, Kolkata, West Bengal
Experience Level: 3-7 Years
Salary: Competitive, based on experience

Job Overview
We are seeking a skilled and security-conscious Web Developer with hands-on experience in Laravel (PHP), Python (Django/Flask), and React.js to build and maintain robust, scalable, and secure web applications. The ideal candidate will work on developing a multi-level user access system and integrate enterprise-grade security features across all layers of the platform.

Key Responsibilities
Design, develop, and maintain web applications using Laravel, Python, and React.js
Build a multi-level architecture (e.g., Super Admin, Admin, User, API Consumers) with Role-Based Access Control (RBAC)
Develop and enforce high-level security protocols, including: CSRF, XSS and SQL injection protection; Two-Factor Authentication (2FA); JWT/OAuth2 authentication flows; secure file upload/download; SSL/TLS configuration and HTTPS enforcement; session management and secure cookies; audit trails and logging
Integrate RESTful APIs and manage real-time data with WebSockets if needed
Perform unit testing, integration testing, and penetration testing
Optimize applications for speed, scalability, and security
Ensure data encryption in transit and at rest
Regularly update libraries and dependencies to patch vulnerabilities
Collaborate with UI/UX designers to implement wireframes and design specs
Document code, workflows, APIs, and system architecture

Required Skills
Proficient in the Laravel framework
Proficient in Python (Flask or Django)
Strong frontend skills using React.js, Redux, Hooks
Experience in designing secure, scalable and modular applications
Strong knowledge of MySQL/PostgreSQL and MongoDB
Familiarity with CI/CD pipelines, Docker, and Git
Strong understanding of web security best practices
Familiarity with Linux-based servers (Ubuntu/CentOS)
Ability to write clean, maintainable, and well-documented code

Preferred Skills
Experience with payment gateway integrations
Working knowledge of cloud platforms (AWS, GCP, or Azure)
Experience with microservices architecture
Familiarity with GraphQL, Firebase, or NoSQL systems
Experience with message queues like RabbitMQ or Redis
DevOps knowledge: Nginx, Docker, Kubernetes is a plus

Security Compliance & Tools
OWASP Top 10 compliance
GDPR/ISO 27001 data handling practices
SAST/DAST scanning tools (SonarQube, OWASP ZAP)
GitHub code security policies
Web Application Firewall (WAF) integration
Penetration testing tools: Burp Suite, Nessus

Qualifications
Bachelor’s degree in Computer Science or related field
3–7 years of experience in full-stack development
Proven experience building secure multi-tiered web apps

Perks & Benefits
Performance bonuses
Opportunity to work on high-impact, secure enterprise projects

How to Apply
Email your resume and a portfolio of your recent projects to globechealth@gmail.com with the subject line “Web Developer – Laravel/Python/React”.

Job Type: Full-time
Pay: ₹10,000.00 - ₹25,000.00 per month
Ability to commute/relocate: Dum Dum, Kolkata, West Bengal: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Web development: 3 years (Preferred)
Language: Bengali (Preferred), Hindi (Preferred), English (Preferred)
Location: Dum Dum, Kolkata, West Bengal (Preferred)
Work Location: In person
Speak with the employer: +91 9432719713
Posted 1 day ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join our digital revolution in NatWest Digital X
In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description
Join us as a Principal Engineer
You’ll drive the development of software and tools to accomplish project and departmental objectives by converting functional and non-functional requirements into a suitable design
As well as managing the technical delivery of one or more software engineering teams, you’ll lead the wider participation in internal and industry-wide events, conferences and other activities
By leading the planning, specifying, development and deployment of high performance, robust and resilient systems, you’ll ensure they follow excellent architectural and engineering principles, and are fit for purpose
We're offering this role at vice president level

What you'll do
As a Principal Engineer, you’ll oversee the productivity of the software engineering teams, and you’ll be responsible for the consistent use of shared platform components and technologies. Leading engagements with senior stakeholders, you’ll explore and suggest appropriate technical solutions to achieve the required product features.
Other responsibilities will include:
Monitoring technical progress against plans, while safeguarding functionality, scalability and performance, and providing progress updates to stakeholders
Delivering software components to enable the delivery of platforms, applications and services for the organisation
Designing and developing high-volume, high-performance, high-availability applications, using proven frameworks and technologies, and designing reusable libraries and APIs for use across the organisation
Writing unit and integration tests within automated test environments to ensure code quality

The skills you'll need
We’re looking for someone with a background in software engineering, software design or database design and architecture, and significant experience developing software in an SOA or microservices paradigm. Along with a background in leading software development teams and introducing and executing technical strategies, you’ll have development experience in one or more programming languages.

We’ll also be looking for:
At least 10 years of experience with Java, Spring Boot, and microservices
Experience using industry-recognised frameworks and development tooling
An excellent understanding of how to implement programming best practice, especially around scalability, availability and performance
Experience of test-driven development alongside the use of automated test frameworks, as well as mocking, stubbing and unit testing tools
Experience of project working and commercial development processes, particularly Agile methodologies
Knowledge of working with code repositories, bug tracking tools and wikis
A background in designing or implementing APIs, and an in-depth knowledge of large-scale database and NoSQL design and optimisation
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Vice President, Product Management-Technical

Overview
Mastercard seeks to define a world beyond cash. To accelerate this mission, we are committed to building and scaling products as well as applications that transform payments of any type through consistent customer experiences. The Customer Connectivity Platforms Program (“Platform Services”) was created to provide the tools to support scalable, safe ways for our customers to interact with Mastercard through enterprise messaging services, API and event gateways and file transfer services.

As the Vice President of Product Management-Technical for Platform Services you will work with a global team to set the vision for this program alongside engineering counterparts; execute upon a clear roadmap driving incremental value through each release; and provide the necessary stakeholder management and customer engagement both internally and externally to ensure Mastercard is building platforms that scale to the needs of our business and customers. You have a curiosity to keep abreast of the latest technologies and have a proven track record of being a thought leader and influential advocate for building and scaling global platform products.
Job Description
Lead a global organization, defining the structures and operating mechanisms that optimize the skills and execution methodology of your team
Own the long-term view (3-5 years) and the complete platform portfolio, and make trade-off decisions within it
Work with other product teams to ensure strategies are aligned
Drive a data-driven and continuous-learning culture across your organization
Ensure team culture consistently demonstrates alignment with leadership principles; visibly tie department decisions to leadership principles and tenets
Develop a culture of test-and-learn experimentation and innovation to solve core business and end-customer needs in the platform product domain
Define and build your own scorecard to continuously monitor and analyze key performance indicators, and use this data to drive incremental improvements
Rigorously inspect the platform products and execution metrics of your PM-Ts, using your own tech depth and business acumen to assess risks and continuously raise the delivery bar
Manage escalations
Actively encourage the development of your managers and your technical staff while continuously raising the bar on performance, technical depth, product management results, and people management as you hire new PM-Ts
Engage directly with customers to help shape the product roadmap

Experiences
Experience and proficiency with cloud technologies (IaaS, PaaS, serverless technology, NoSQL databases), microservice design, near real-time, scalable, fault-tolerant platform design, API design, and distributed systems operations in a DevOps model
Demonstrated ability to operate with complete independence and autonomy
Proven track record of data-driven decision making and applying continuous improvement methodologies across teams
Demonstrated experience as a leader of leaders with the ability to develop talent (both managers and individual contributors)
Able to hold your own in debates with technical architects, engineers, and designers
Comprehensive experience in agile delivery methodologies
Can communicate to executives, peers, and staff with impact, eloquence, and authenticity
Demonstrated experience building organizational relationships, partnering with and influencing executive leadership while commanding the respect of individual engineers across the organization
Formalizes best practices into frameworks and evangelizes them with other Mastercard teams
Helps leadership craft responses for escalations of customer- and business-facing product issues
Has thrived and succeeded in delivering high-quality technology products/services in a high-growth environment where priorities shift rapidly
Proven ability to lead in a matrix environment
Strong interpersonal and talent management skills, including the ability to identify and develop product management talent
Able to understand and navigate complex regulatory requirements

Qualifications
Business acumen
Customer-first mindset
Excellent written and verbal communication skills
Business and/or software engineering degree preferred

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 1 day ago
50.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who we are:
Irdeto is the world leader in digital platform cybersecurity, empowering businesses to innovate for a secure, connected future. Building on over 50 years of expertise in security, Irdeto’s services and solutions protect revenue, enable growth and fight cybercrime in video entertainment, video games, and connected industries including transport, health and infrastructure. Irdeto is the security partner dedicated to empowering a secure world where people can connect with confidence. With teams and offices around the world, Irdeto’s greatest asset is its people - our diversity is celebrated through an inclusive workplace, where everyone has an equal opportunity to drive innovation and contribute to Irdeto's success.

The Role:
As a Lead Software Engineer you will be joining our Video Entertainment team, and will play a pivotal role in developing and enhancing our solutions and products. You'll work as part of a dynamic and cross-functional team to ensure the seamless delivery of high-quality deliverables. You will work on the latest technologies in the streaming industry, and your expertise will contribute to the innovation and enhancement of our solutions, ensuring our global customers have the best possible experience.

Your mission at Irdeto:
Develop and maintain software applications and services for our OTT platform, ensuring high performance, scalability, and reliability.
Collaborate with cross-functional teams, including product managers, designers, and other engineers, to design and implement new features and improvements.
Identify and address performance bottlenecks, security vulnerabilities, and system scalability issues.
Debug, troubleshoot, and resolve software defects and performance issues, ensuring a seamless user experience.
Mentor junior engineers and participate in knowledge sharing within the team.

How can you add value to the team?
Bachelor’s or master’s degree in Computer Science, Software Engineering, or a related field.
8+ years of experience in backend development with modern frameworks (Node.js, Go, or Java preferred)
Deep understanding of REST APIs, microservices, asynchronous processing, and scalable architectures
Experience with cloud platforms (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes)
Familiarity with AI/ML pipelines – either integrating ML models into the backend or building services to serve AI functionality
Hands-on experience with databases (SQL and NoSQL), caching, and pub/sub messaging systems (Kafka, RabbitMQ)
Strong grasp of security, performance, and reliability considerations in streaming systems
Excellent communication skills and a passion for collaborative problem-solving

What you can expect from us:
We invest in our talented employees and promote collaboration, creativity, and innovation while supporting health and well-being across our global workforce. In addition to competitive remuneration, we offer:
A multicultural and international environment where diversity is celebrated
Professional education opportunities and training programs
Innovation sabbaticals
Volunteer Day
State-of-the-art office spaces
Additional perks tailored to local offices (e.g., on-site gyms, fresh fruit, parking, yoga rooms, etc.)

Equal Opportunity at Irdeto
Irdeto is proud to be an equal opportunity employer. All decisions are based on qualifications and business needs, and we do not tolerate discrimination or harassment. We welcome applications from individuals with diverse abilities and provide accommodation during the hiring process upon request. If you’re excited about this role but don’t meet every qualification, we encourage you to apply. We believe diverse perspectives and experiences make our teams stronger. Welcome to Irdeto!
Posted 1 day ago
50.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who we are:
Irdeto is the world leader in digital platform cybersecurity, empowering businesses to innovate for a secure, connected future. Building on over 50 years of expertise in security, Irdeto’s services and solutions protect revenue, enable growth and fight cybercrime in video entertainment, video games, and connected industries including transport, health and infrastructure. Irdeto is the security partner dedicated to empowering a secure world where people can connect with confidence. With teams and offices around the world, Irdeto’s greatest asset is its people - our diversity is celebrated through an inclusive workplace, where everyone has an equal opportunity to drive innovation and contribute to Irdeto's success.

The Role:
As a Software Engineer you will be joining our Video Entertainment team and will play a pivotal role in developing and enhancing our solutions and products. You'll work as part of a dynamic and cross-functional team to ensure the seamless delivery of high-quality deliverables. You will work on the latest technologies in the streaming industry, and your expertise will contribute to the innovation and enhancement of our solutions, ensuring our global customers have the best possible experience.

Your mission at Irdeto:
Develop and maintain software applications and services for our OTT platform, ensuring high performance, scalability, and reliability.
Debug, troubleshoot, and resolve software defects and performance issues, ensuring a seamless user experience.
Write clean, efficient, and maintainable code, following coding standards and software development processes.
Stay up to date with industry trends and best practices and contribute to the continuous improvement of our software development processes.

How can you add value to the team?
Bachelor’s degree in Computer Science, Software Engineering, or a related field.
3+ years of experience in backend development with modern frameworks (Node.js, Go, TypeScript, or Java preferred)
Deep understanding of REST APIs, microservices, asynchronous processing, and scalable architectures
Experience with cloud platforms (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes)
Familiarity with AI/ML pipelines – either integrating ML models into the backend or building services to serve AI functionality
Hands-on experience with databases (SQL and NoSQL), caching, and pub/sub messaging systems (Kafka, RabbitMQ)
Strong grasp of security, performance, and reliability considerations in streaming systems
Excellent communication skills and a passion for collaborative problem-solving

What you can expect from us:
We invest in our talented employees and promote collaboration, creativity, and innovation while supporting health and well-being across our global workforce. In addition to competitive remuneration, we offer:
A multicultural and international environment where diversity is celebrated
Professional education opportunities and training programs
Innovation sabbaticals
Volunteer Day
State-of-the-art office spaces
Additional perks tailored to local offices (e.g., on-site gyms, fresh fruit, parking, yoga rooms, etc.)

Equal Opportunity at Irdeto
Irdeto is proud to be an equal opportunity employer. All decisions are based on qualifications and business needs, and we do not tolerate discrimination or harassment. We welcome applications from individuals with diverse abilities and provide accommodation during the hiring process upon request. If you’re excited about this role but don’t meet every qualification, we encourage you to apply. We believe diverse perspectives and experiences make our teams stronger. Welcome to Irdeto!
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description - External
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility:
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Qualifications - External
Required Qualifications:
B.E/B.Tech/M.Tech/MCA
Ability to address upcoming deliverables/work orders/tickets independently, with hands-on experience in the technologies below:
Solid hands-on experience with Java 17 and above
Solid hands-on experience with Spring, Spring Boot, Hibernate, JSF and ReactJS
Hands-on experience with web services (REST/microservices/APIs)
Hands-on experience with Unix scripting
Hands-on experience with RDBMS databases such as Oracle, SQL and DB2
Hands-on experience with a NoSQL database, preferably MongoDB
Hands-on experience with Kafka messaging services
Hands-on experience with Eclipse or STS
Hands-on experience with JBoss and WAS
Hands-on experience with cloud (preferably GCP)
Hands-on experience with DevOps
Ability to work with GitHub Actions/a DevOps model
Solid analytical, debugging and performance tuning skills
Ability to interact with the business, and hence a good communication skill set

Preferred Qualifications:
Knowledge of Grafana and Elastic APM
Knowledge of Cucumber
Knowledge of Kubernetes
Posted 1 day ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Data Software Engineer
Location: Chennai and Coimbatore
Mode: Hybrid
Interview: Walk-in

5-12 years of experience in Big Data and data-related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS and Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Good understanding of Big Data querying tools, such as Hive and Impala
Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP and files
Good understanding of SQL queries, joins, stored procedures and relational schemas
Experience with NoSQL databases such as HBase, Cassandra and MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with Azure Databricks
Ability to lead a team efficiently
Experience designing and implementing Big Data solutions
Practitioner of Agile methodology
Posted 1 day ago