
169 Ontologies Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of USD 19.7 billion, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans Digital Engineering, Cloud Services, AI and Data Analytics, Enterprise Applications (SAP, Oracle, Salesforce), IT Infrastructure, and Business Process Outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub alongside the U.S. headquarters.

Job Title: Advanced Unstructured Data Management (Data Architecture, Data Governance, Data Quality)
Location: Pan India (Hybrid)
Experience: 12+ years
Job Type: Contract to hire
Notice Period: Immediate joiners only (please do not apply if your notice period is more than 15 days)

Mandatory Skills / Job Description:
- Support the creation of advanced data management guidelines (data architecture, data governance, data quality) for managing and governing unstructured data at the enterprise level, e.g. defining the target operating model (key stakeholders, roles and responsibilities, etc.)
- Drive adoption of unstructured data management guidelines across business units and global functions
- Clearly articulate the value of unstructured data governance to key stakeholders such as enterprise data owners, domain data architects, and the AI office
- Drive the identification of functional requirements across business functions for designing the technology stack that governs unstructured data at the enterprise level
- Drive the enterprise guidelines on building a semantic layer (ontologies and knowledge graphs) for managing unstructured data
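The semantic layer the role describes, ontologies and knowledge graphs over unstructured documents, is often backed by a triple store. A minimal sketch is below; all document, concept, and policy identifiers are invented for illustration and do not come from any real enterprise ontology.

```python
# Minimal in-memory triple store: the core data structure behind a
# semantic layer linking unstructured documents to governed concepts.
from collections import defaultdict


class TripleStore:
    """Stores (subject, predicate, object) facts and answers simple lookups."""

    def __init__(self):
        self.triples = set()
        self.by_predicate = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_predicate[predicate].add((subject, obj))

    def objects(self, subject, predicate):
        """All objects linked to `subject` via `predicate`."""
        return {o for s, o in self.by_predicate[predicate] if s == subject}


# Link a document to business concepts and a concept to a policy
# (all names hypothetical).
store = TripleStore()
store.add("doc:contract-42", "mentions", "concept:DataRetention")
store.add("doc:contract-42", "ownedBy", "org:LegalDept")
store.add("concept:DataRetention", "governedBy", "policy:GDPR-Art5")

print(store.objects("doc:contract-42", "mentions"))
```

Real deployments would use an RDF store or graph database rather than an in-memory set, but the governance questions (who owns this document, which policy applies) reduce to exactly these lookups.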

Posted 1 month ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

Remote

About Skypoint
Skypoint is a HITRUST r2-certified data unification and agentic AI platform that accelerates productivity and efficiency for healthcare organizations. Our platform empowers healthcare providers, payers, and senior care organizations to unify siloed data, model industry-specific ontologies, and deploy AI agents that automate workflows and enhance decision-making. Founded in 2020 in Portland, Oregon, Skypoint has grown to over 75 employees and serves more than 100 customers. We are proud to be ranked #26 on Deloitte's 2024 Technology Fast 500™ list, recognizing the fastest-growing tech companies across North America, driven by our exceptional revenue growth over the past three years.

Location: Global Technology Park, Marathahalli Outer Ring Road, Bellandur, Bengaluru, Karnataka; 5 days/week (no hybrid or remote)

Responsibilities:
- Collaborate with stakeholders to identify challenges and deliver tailored DevOps solutions.
- Design and implement DevOps architectures, roadmaps, and plans in alignment with the Azure Well-Architected Framework.
- Establish and manage Azure governance through Azure Policies, Azure Active Directory (AAD), and Azure RBAC.
- Build and maintain CI/CD pipelines using Azure DevOps (YAML or classic) for fully automated deployments.
- Automate cloud resource provisioning and management using Infrastructure as Code tools such as ARM, Bicep, and Terraform.
- Assess existing infrastructure and applications, provide optimization recommendations, and generate audit reports.
- Lead containerization initiatives, including Kubernetes-based architectures and deployment strategies.
- Ensure compliance with security, performance, and cost-efficiency standards.
- Stay current on emerging technologies, including DevOps, SecOps, and AI tools, to drive continuous improvement.
- Communicate technical concepts effectively to technical and non-technical stakeholders alike.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8-10 years of hands-on experience in DevOps, Site Reliability Engineering, or a related role.
- Proven expertise in deploying and managing Azure cloud environments.
- Strong proficiency in Azure resource management, cost optimization, and monitoring.
- Deep knowledge of CI/CD tools and Infrastructure as Code (Terraform proficiency is a must).
- Extensive experience with Kubernetes and containerized workload management.
- Familiarity with the Azure Well-Architected Framework for building secure and cost-effective solutions.
- Exceptional analytical, troubleshooting, and problem-solving skills.
- Excellent communication and leadership abilities.
- Experience as a foundational engineer in a startup is a plus.

Certifications:
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure DevOps Engineer Expert

Preferred Background:
- Experience working in healthcare technology, clinical data systems, or regulatory-compliant SaaS environments.
- Passion for building intelligent systems that have a real-world impact on healthcare outcomes.

Life at Skypoint
Life at Skypoint is vibrant and forward-thinking, focused on harnessing the power of AI and advanced technologies to innovate and solve real-world challenges. Our culture thrives on creativity, strategic thinking, and a commitment to excellence, offering a collaborative environment where every contribution is valued. We are dedicated to fostering personal and professional growth, ensuring team members have opportunities for advancement through continuous training and a flexible work-life balance. Skypoint offers competitive benefits, including comprehensive health insurance and retirement plans.
What We Offer:
- Competitive compensation with stock options
- Comprehensive health benefits, including OPD and gym reimbursements and mental wellness support
- Onsite opportunity
- Continuous learning and career growth opportunities

Join us to be part of a dynamic team that's shaping the future with groundbreaking solutions in AI and technology, all while enjoying a supportive and inclusive workplace.
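The governance responsibilities above (Azure Policies, audit reports) often start as a simple pre-deployment check in the CI/CD pipeline. A hedged sketch: the required tags and the naming regex below are assumptions for illustration, not an official Azure policy.

```python
# Hypothetical pre-deployment governance check of the kind a CI/CD
# pipeline might run before applying Infrastructure-as-Code: enforce
# required tags and a lowercase, hyphenated naming convention.
import re

REQUIRED_TAGS = {"environment", "owner", "cost-center"}  # assumed policy
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]{2,40}$")     # assumed convention


def validate_resource(resource: dict) -> list[str]:
    """Return human-readable policy violations (empty list if compliant)."""
    errors = []
    if not NAME_PATTERN.match(resource.get("name", "")):
        errors.append(f"name {resource.get('name')!r} violates naming convention")
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        errors.append(f"missing required tags: {sorted(missing)}")
    return errors


resource = {"name": "skypoint-prod-aks",
            "tags": {"environment": "prod", "owner": "platform"}}
print(validate_resource(resource))  # flags the missing cost-center tag
```

In practice this logic would live in Azure Policy definitions or a Terraform validation step; the Python version just makes the rule explicit and testable.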

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Responsibilities:
- Design, develop, and implement solutions for a wide range of NLP use cases involving classification, extraction, and search on unstructured text data.
- Create and maintain state-of-the-art, scalable NLP solutions in Python/Java/Scala for multiple business problems. This involves:
  - Choosing the most appropriate NLP technique(s) based on business needs and available data
  - Performing data exploration and innovative feature engineering
  - Training and tuning a variety of NLP models and solutions, including regular expressions, traditional NLP models, and SOTA transformer-based models
  - Augmenting models by integrating domain-specific ontologies and/or external databases
  - Reporting and monitoring the solution outcome
- Work with document-oriented databases such as MongoDB.
- Collaborate with the ML engineering team to deploy NLP solutions in production, both on-premise and in the cloud.
- Interact with clients and internal business teams to assess solution feasibility and to design and develop solutions.
- Be open to working across different domains: Insurance, Healthcare, Financial Services, etc.

Required Skills:
- Experience (including graduate school) training machine learning models and applying and developing text mining and NLP techniques.
- Exposure to OCR and computer vision; experience extracting content from documents is preferred.
- Experience (including graduate school) with Natural Language Processing techniques is required.
- Hands-on experience with NLP tools such as Stanford CoreNLP, NLTK, spaCy, Gensim, TextBlob, etc.
- Familiarity with document clustering in supervised and unsupervised scenarios.
- Expertise in at least two state-of-the-art NLP techniques such as BERT, GPT, XLNet, etc.
- Applied experience with machine learning algorithms in Python.
- Organized, self-motivated, disciplined, and detail-oriented.
- Production-level coding experience in Python is required.
- Ability to read recent ML research papers and adapt those models to solve real-world problems.
- Experience with any deep learning framework, including TensorFlow, Caffe, MXNet, Torch, Theano.
- Experience with optimization on GPUs (a plus).
- Hands-on experience with cloud technologies on AWS/Microsoft Azure is preferred.
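The posting's simplest tier, regular expressions augmented with a domain ontology, can be sketched in a few lines. The ontology entries and the sample claim text below are invented for illustration; real systems would map to a maintained vocabulary such as a clinical code set.

```python
# Regex extraction plus a tiny gazetteer-style "ontology": surface forms
# mapped to canonical domain concepts (all entries hypothetical).
import re

ONTOLOGY = {
    "mri": "Procedure/MagneticResonanceImaging",
    "x-ray": "Procedure/Radiography",
    "copay": "Billing/Copayment",
}

MONEY = re.compile(r"\$\d+(?:\.\d{2})?")  # dollar amounts like $40.00


def extract(text: str) -> dict:
    """Return canonical concepts and money amounts found in the text."""
    tokens = re.findall(r"[a-z][a-z-]*", text.lower())
    concepts = sorted({ONTOLOGY[t] for t in tokens if t in ONTOLOGY})
    return {"concepts": concepts, "amounts": MONEY.findall(text)}


claim = "Patient had an MRI and an X-Ray; copay of $40.00 collected."
print(extract(claim))
```

Transformer models would replace the gazetteer for ambiguous mentions, but this dictionary-plus-regex tier remains a common, auditable baseline in insurance and healthcare pipelines.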

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
· 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
· Team management: must have experience mentoring and managing large teams (20 to 30 people) on complex engineering programs, as well as hiring and nurturing talent in Palantir Foundry.
· Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
· At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
· At least 3 years of experience with Foundry services:
  · Data engineering with Contour and Fusion
  · Dashboarding and report development using Quiver (or Reports)
  · Application development using Workshop
  · Exposure to Map and Vertex is a plus
  · Palantir AIP experience is a plus
· Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
· Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
· Hands-on experience working on and building Ontology (especially demonstrable experience building semantic relationships).
· Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
· Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary.
· Experience in MLOps is a plus.
· Experience developing and managing scalable architecture and working with large data sets.
· Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
· Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
· A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
· Experience developing GenAI applications is a plus.

Mandatory skill sets:
· At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
· At least 3 years of experience with Foundry services.

Preferred skill sets: Palantir Foundry
Years of experience required: 4 to 7 years (3+ years relevant)
Education qualification: Bachelor's degree in computer science, data science, or any other engineering discipline. Master's degree is a plus.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Science
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
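The graph analysis the PwC role mentions (libraries like Spark GraphX or NetworkX over ontology objects) boils down to traversals over typed edges. A pure-Python sketch is below; the object and edge names are illustrative and are not Palantir Foundry APIs.

```python
# Breadth-first search over a small directed graph of ontology objects,
# the kind of traversal NetworkX or Spark GraphX performs at scale.
from collections import defaultdict, deque

# Hypothetical ontology edges: (source object, relation, target object).
edges = [
    ("Customer:A", "placed", "Order:1"),
    ("Order:1", "contains", "Product:X"),
    ("Product:X", "suppliedBy", "Supplier:S"),
    ("Customer:B", "placed", "Order:2"),
]

graph = defaultdict(list)
for src, _rel, dst in edges:
    graph[src].append(dst)


def reachable(start: str) -> set[str]:
    """All objects reachable from `start` by following directed edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


print(sorted(reachable("Customer:A")))
```

Impact analysis questions ("if this supplier's data changes, which customers are affected?") are this same traversal run on the reversed edges.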

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Sr. Specialist, Product Experience Design (UX Designer / Researcher)

Summary
We are looking for an experienced User Experience/Interaction Designer passionate about the possibilities of data and data-driven experience design, with good knowledge and experience of User Interface Design, to join our rapidly growing, highly innovative Data and Services Product Development team at our development center in Pune. The design function works as part of wider cross-functional teams that form organically around data product ideas, collectively contributing to the rapid iteration of Prototypes and MVPs (Minimum Viable Products), mainly in the B2B and B2B2C arenas. Each design iteration is exposed to end-users at each cycle of our rapid design-test-redesign process to ensure optimum user traction. Prototype and MVP outputs include dashboards, widgets, chatbots, web applications, mobile applications, VUI voice interfaces, digital assistants, and more, all in service of solving complex business challenges and with an ever-increasing emphasis on Artificial Intelligence (AI) and Machine Learning (ML).
Role:
- Design Proof-of-Concepts and Minimum Viable Data Products (MVPs) using human-centric design principles and rapid prototyping (Lean UX and Design Thinking)
- Participate in "Design Thinking" workshops, collaborating cross-functionally with industry verticals, regional leads, and end-users to ensure optimal digital data products
- Explore the "art of the possible" in the era of Big Data, Artificial Intelligence (AI), and Machine Learning (ML), whilst always maintaining regulatory compliance
- Leverage existing (and contribute net new) design patterns to Mastercard's design pattern library
- Help define the next generation of data and data-driven products and their visualization, and through doing so, help shape the future of Mastercard and its growth
- Create a cohesive and compelling visual language across diverse form factors: web, mobile, and Internet of Things
- Work on future-state conceptual designs, driving experimentation that improves the quality of product design overall
- Work closely with key partners from brand & marketing, alongside the broader user experience team, to drive delightful and highly usable transactional experiences leveraging the broader visual language for Mastercard
- Work closely with our technology team to define implementation standards for our products, leveraging modern presentation-layer practices such as adaptive/responsive web and current and forward-thinking technologies
- Ensure that designs deliver an appropriate balance of business objectives and user engagement, in close partnership with product managers
- Liaise with regional and country teams to ensure that designs reflect the diversity of needs of a global user base
- Drive VXD design across a broader ecosystem, where experiences will require tailoring to meet the needs of both Mastercard customers and end consumers

Interaction Design: Required Experience / Knowledge / Skills (Core)
- Overall 4-6 years of career experience
- Experience developing user archetypes, personas, and user journeys to inform product design decisions
- Experience in rapid prototyping (Lean UX and Design Thinking)
- Experience articulating elegant and engaging experiences using sketches, storyboards, information architecture blueprints, and prototypes
- Experience implementing creative, usable, and compelling visual mockups and prototypes
- Experience working with complex information architectures
- Experience designing experiences across multiple media
- Experience using prototyping/wireframing tools such as Figma, Sketch, Adobe Experience Design, Adobe Illustrator, etc.
- An understanding of complex information architectures for digital applications

Additional Experience / Knowledge / Skills:
- Experience articulating elegant and engaging visual experiences using sketches, storyboards, and prototypes
- Highly proficient in creating usable, compelling, and elegant visual mockups and prototypes
- Experience in visual design across multiple media and form factors: web, mobile, Internet of Things (IoT)
- Extensive, demonstrable experience using leading VXD software packages such as Adobe Creative Cloud/Suite (Photoshop, InDesign, Illustrator, Experience Design, etc.), Serif DrawPlus, CorelDRAW Graphics Suite/PaintShop Pro, ArtRage, Xara, or equivalent
- An understanding of complex information architectures for visual representation
- Any experience in motion graphic design is a strong plus
- Any experience in 3D modeling is also a strong plus

Research:
- Experience consuming UX research
- Experience leading "Design Thinking" workshops with customers to identify requirements and ideate on potential product solutions
- Ability to identify best-in-class user experience through competitor analysis
- Experience with the iterative "design-test-redesign" methodology to collect real user feedback and incorporate it back into the design

Candidate:
- Prior experience working in a world-beating UX team
- Experience working/multi-tasking in an extremely fast-paced, startup-like environment
- Experience in client-facing engagements, preferably leading them
- Empathetic champion of the user: passionate about the detail of great usability, interaction design, and aesthetics to give the best possible UX
- Passionate about the possibilities of data-driven experiences: so-called "emergent ontologies" (patterns in the data) driving UX and UI
- Passionate about the potential of data, Artificial Intelligence (AI), and Machine Learning (ML)
- Defensible point of view on Adaptive vs. Responsive vs. Liquid/Fluid UI
- Defensible point of view on gamification
- Interest in VUIs (Voice User Interfaces)
- Demonstrable knowledge of UX and interaction design heuristics and best practices, e.g. Lean UX, Mobile First, etc.
- Demonstrable knowledge of ergonomic and usability best practices
- Bachelor's or Master's degree in Design for Interactive Media, or equivalent experience
- World-beating portfolio covering multiple form factors: desktop, tablet, mobile, wearable, other
- Demonstrable commitment to learning: an insatiable drive to discover and evaluate new concepts and technologies to maximize design possibility
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
R-250977

Posted 1 month ago

Apply

175.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

How we serve our customers is constantly evolving and is a challenge we gladly accept. Whether you're finding new ways to prevent identity fraud or enabling customers to start a new business, you can work with one of the most valuable data sets in the world to identify insights and actions that can have a meaningful impact on our customers and our business. And, with opportunities to learn from leaders who have defined the course of our industry, you can grow your career and define your own path. Find your place in risk and analytics on #TeamAmex.

The Platforms and Capabilities team within Global Risk and Compliance (GRC) is responsible for building and implementing leading-edge platforms and solutions for risk management. Our vision is to "provide best-in-class Platforms and Capabilities that enable the risk management framework in GRC and across the Company and empower colleagues to excel at risk management activities." American Express is on a mission to evolve risk management across all risk domains and stripes (Enterprise Risk, Operational Risk, Compliance Risk, Privacy Risk, etc.), and a key part of this is the technology solutions and platforms. We are seeking a Director, Digital Product Management, Integrated Risk Management to lead this multi-year effort.
This is a newly created role, and the Director will be responsible for a suite of solutions within the Integrated Risk Management (IRM) platform.

Responsibilities:
- Develop and drive the strategic vision for owned modules/components within the Integrated Risk Management (IRM) platform(s), in line with AXP's core risk management vision; connect the vision to that of the respective risk domains
- Establish a multi-year roadmap for execution and implementation
- Partner with stakeholders across AXP on the vision, roadmap, planning, and execution. Business partners will include risk management organizations across AXP: BU-level Control Management teams (1LOD), risk domain teams in the Independent Risk Management organization (2LOD), Internal Audit (3LOD), Technology, and many others
- Track and manage execution of the multi-year initiative: prioritize and sequence deliverables, host agile ceremonies, manage risks and issues, report status to senior leaders, etc.
- Manage the transition from the existing platform(s) to the new IRM platform(s)
- Manage overall platform governance across various AXP functions, including prioritization, requirements, and any conflicts that arise
- Manage roll-out plans and adoption with various AXP functions, including organizational change management
- As it relates to the end-to-end architecture vision, manage the definition and execution of integrations with various risk and enterprise applications
- Build, lead, and develop a diverse team of high-performing Risk Management and Product professionals executing against highly complex and critical projects and governance activities
- Nurture and mentor talent across the team

Qualifications:
- A bachelor's degree in computer science, engineering, information systems, or a related field. An advanced degree (M.S. or Ph.D.) in computer science, engineering, information systems, management technology, or an MBA is preferred
- Experience leading implementation and ongoing support of the ServiceNow Governance Integrated Risk Management (IRM) platform is desired
- 7+ years of Product Management (or equivalent) experience; must have experience with large platform implementations from ideation to rollout
- Strong background in the Product discipline: business case creation, roadmaps, prioritization, etc.
- Ability to translate business requirements into technical platform capabilities, roadmaps, solution architectures, and data domains
- Experience in the following areas: definition and design of business, functional, and technical requirements; system selection and implementation support; Systems Development Lifecycle (SDLC); Quality Assurance and testing (QA); program/project management and implementation planning (PMO)
- Good understanding of key risk frameworks such as Risk and Control Self-Assessment (RCSA), risk tolerance and appetite management, control monitoring and testing, risk and performance metrics, issue management, regulatory change management, automated workflows, reporting, etc.
- Experience in at least two risk domains such as Operational Risk, Consumer Compliance, IT/IS Risk, Privacy Risk, Third Party Risk, Conduct Risk, etc.
- Experience with system and application architecture, data integration, and analytics
- Strong foundation in establishing data models (taxonomies and ontologies) for risk management
- Strong communication skills, both verbal and written, at all levels of the organization, effectively leveraging storytelling to drive understanding and alignment
- Demonstrated ability to think critically and challenge the status quo
- Experience as a people leader with the ability to lead global teams
- Proven success working in a matrix environment

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 month ago

Apply

175.0 years

8 - 10 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

How we serve our customers is constantly evolving and is a challenge we gladly accept. Whether you're finding new ways to prevent identity fraud or enabling customers to start a new business, you can work with one of the most valuable data sets in the world to identify insights and actions that can have a meaningful impact on our customers and our business. And, with opportunities to learn from leaders who have defined the course of our industry, you can grow your career and define your own path. Find your place in risk and analytics on #TeamAmex.

The Platforms and Capabilities team within Global Risk and Compliance (GRC) is responsible for building and implementing leading-edge platforms and solutions for risk management. Our vision is to "provide best-in-class Platforms and Capabilities that enable the risk management framework in GRC and across the Company and empower colleagues to excel at risk management activities." American Express is on a mission to evolve risk management across all risk domains and stripes (Enterprise Risk, Operational Risk, Compliance Risk, Privacy Risk, etc.), and a key part of this is the technology solutions and platforms. We are seeking a Director, Digital Product Management, Integrated Risk Management to lead this multi-year effort.
This is a newly created role, and the Director will be responsible for suite of solutions within the within Integrated Risk Management (IRM) platform. Responsibilities: Develop and drive the strategic vision for owned modules/component within Integrated Risk Management (IRM) platform(s) which is line with the AXP’s core risk management vision. Connect the vision to that of their respective risk domains Establish a multi-year roadmap for execution and implementation Partner with stakeholders across AXP for the vision, roadmap, planning and execution. Business partners will include risk management organizations across AXP – BU-level Control Management teams (1LOD), risk domain teams in the Independent Risk Management organization (2LOD) and Internal Audit (3LOD), Technology and many others Track and manage execution of multiyear initiative – prioritize and sequence deliverables, host agile ceremonies, manage risks and issues, report status to senior leaders, etc. Manage transition from the existing platform(s) to the new IRM platform(s) Manage the overall platform governance across various AXP functions including prioritization, requirements, and any conflicts that arise Manage the roll-out plans and adoption with various AXP functions including organizational change management As it relates to the end-to-end architecture vision, manage the definition and execution of integrations with various risk and enterprise applications Build, lead, and develop a diverse team of high-performing Risk Management and Product professionals executing against highly complex and critical projects and governance activities Nurture and mentor talent across the team. Qualifications A bachelor's degree in computer science, engineering, information systems, or a related field. An advanced degree (M.S. or Ph.D.) 
in computer science, engineering, information systems, management technology, or an MBA is preferred. Experience leading implementation and ongoing support of the ServiceNow Governance, Risk, and Compliance (GRC)/Integrated Risk Management (IRM) platform is desired. 7+ years of Product Management (or equivalent) experience. Must have experience in large platform implementations from ideation to rollout. Strong background in the Product discipline – business case creation, roadmaps, prioritization, etc. Ability to translate business requirements into technical platform capabilities, roadmaps, solution architectures, and data domains. Experience in the following areas: definition and design of business, functional, and technical requirements; system selection and implementation support; Systems Development Lifecycle (SDLC); Quality Assurance and testing (QA); program/project management and implementation planning (PMO). Good understanding of key risk frameworks such as Risk and Control Self-Assessment (RCSA), risk tolerance and appetite management, control monitoring and testing, risk and performance metrics, issue management, regulatory change management, automated workflows, reporting, etc. Experience in at least two risk domains such as Operational Risk, Consumer Compliance, IT/IS Risk, Privacy Risk, Third Party Risk, Conduct Risk, etc. Experience with system and application architecture, data integration, and analytics. Strong foundation in establishing data models (taxonomies and ontologies) for risk management.
Strong communication skills, both verbal and written, at all levels of the organization, effectively leveraging storytelling to drive understanding & alignment. Demonstrated ability to think critically and challenge the status quo. Experience as a people leader with the ability to lead global teams. Proven success working in a matrix environment. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company: Indian / Global Engineering & Manufacturing Organization Key Skills: ETL/ELT, RDF, OWL, SPARQL, Neo4j, AWS Neptune, ArangoDB, Python, SQL, Cypher, Semantic Modeling, Cloud Data Pipelines, Data Quality, Knowledge Graph, Graph Query Optimization, Semantic Search. Roles and Responsibilities: Design and build advanced data pipelines for integrating structured and unstructured data into graph models. Develop and maintain semantic models using RDF, OWL, and SPARQL. Implement and optimize data pipelines on cloud platforms such as AWS, Azure, or GCP. Model real-world relationships through ontologies and hierarchical graph data structures. Work with graph databases such as Neo4j, AWS Neptune, ArangoDB for knowledge graph development. Collaborate with cross-functional teams including AI/ML and business analysts to support semantic search and analytics. Ensure data quality, security, and compliance throughout the pipeline lifecycle. Monitor, debug, and enhance performance of graph queries and data transformation workflows. Create clear documentation and communicate technical concepts to non-technical stakeholders. Participate in global team meetings and knowledge-sharing sessions to align on data standards and architectural practices. Experience Requirement: 3-8 years of hands-on experience in ETL/ELT engineering and data integration. Experience working with graph databases such as Neo4j, AWS Neptune, or ArangoDB. Proven experience implementing knowledge graphs, including semantic modeling using RDF, OWL, and SPARQL. Strong Python and SQL programming skills, with proficiency in Cypher or other graph query languages. Experience designing and deploying pipelines on cloud platforms (AWS preferred). Track record of resolving complex data quality issues and optimizing pipeline performance. Previous collaboration with data scientists and product teams to implement graph-based analytics or semantic search features. Education: Any Graduation.
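For readers unfamiliar with the semantic-modeling stack this posting names (RDF, OWL, SPARQL), the core idea is data stored as subject-predicate-object triples and queried by pattern. Below is a minimal plain-Python sketch of that idea with invented data; real work would use a library such as rdflib or a triple store, not this toy:

```python
# RDF-style triples: (subject, predicate, object). All names are illustrative.
triples = [
    ("pump_101", "type", "Equipment"),
    ("pump_101", "located_in", "plant_A"),
    ("sensor_7", "attached_to", "pump_101"),
    ("sensor_7", "type", "Sensor"),
]

def match(pattern, store):
    """Return triples matching a (s, p, o) pattern; None acts as a variable,
    analogous to a SPARQL basic graph pattern like ?s type Equipment."""
    s, p, o = pattern
    return [
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "Which subjects are of type Equipment?"
equipment = [s for s, _, _ in match((None, "type", "Equipment"), triples)]
print(equipment)
```

A SPARQL engine does essentially this over a much larger store, with joins across multiple patterns.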

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

On-site

hackajob is collaborating with American Express to connect them with exceptional tech professionals for this role. American Express is on a mission to evolve risk management across all risk domains and stripes (Enterprise Risk, Operational Risk, Compliance Risk, Privacy Risk, etc.). A key part of this effort is the technology solutions and platforms that support it. We are seeking a Director, Digital Product Management, Integrated Risk Management to lead this multi-year effort. This is a newly created role, and the Director will be responsible for a suite of solutions within the Integrated Risk Management (IRM) platform. Responsibilities Develop and drive the strategic vision for owned modules/components within the Integrated Risk Management (IRM) platform(s) in line with AXP’s core risk management vision. Connect the vision to that of the respective risk domains. Establish a multi-year roadmap for execution and implementation. Partner with stakeholders across AXP on the vision, roadmap, planning, and execution. Business partners will include risk management organizations across AXP - BU-level Control Management teams (1LOD), risk domain teams in the Independent Risk Management organization (2LOD), Internal Audit (3LOD), Technology, and many others. Track and manage execution of the multiyear initiative - prioritize and sequence deliverables, host agile ceremonies, manage risks and issues, report status to senior leaders, etc.
Manage the transition from the existing platform(s) to the new IRM platform(s). Manage overall platform governance across various AXP functions, including prioritization, requirements, and any conflicts that arise. Manage roll-out plans and adoption with various AXP functions, including organizational change management. As it relates to the end-to-end architecture vision, manage the definition and execution of integrations with various risk and enterprise applications. Build, lead, and develop a diverse team of high-performing Risk Management and Product professionals executing against highly complex and critical projects and governance activities. Nurture and mentor talent across the team. Qualifications A bachelor's degree in computer science, engineering, information systems, or a related field. An advanced degree (M.S. or Ph.D.) in computer science, engineering, information systems, management technology, or an MBA is preferred. Experience leading implementation and ongoing support of at least one industry-leading Governance, Risk, and Compliance (GRC)/Integrated Risk Management (IRM) platform such as Diligent, ServiceNow, IBM OpenPages, MetricStream, Archer, AuditBoard, etc. is desired. 7+ years of Product Management (or equivalent) experience. Must have experience in large platform implementations from ideation to rollout. Strong background in the Product discipline - business case creation, roadmaps, prioritization, etc.
Ability to translate business requirements into technical platform capabilities, roadmaps, solution architectures, and data domains. Experience in the following areas: definition and design of business, functional, and technical requirements; system selection and implementation support; Systems Development Lifecycle (SDLC); Quality Assurance and testing (QA); program/project management and implementation planning (PMO). Good understanding of key risk frameworks such as Risk and Control Self-Assessment (RCSA), risk tolerance and appetite management, control monitoring and testing, risk and performance metrics, issue management, regulatory change management, automated workflows, reporting, etc. Experience in at least two risk domains such as Operational Risk, Consumer Compliance, IT/IS Risk, Privacy Risk, Third Party Risk, Conduct Risk, etc. Experience with system and application architecture, data integration, and analytics. Strong foundation in establishing data models (taxonomies and ontologies) for risk management. Strong communication skills, both verbal and written, at all levels of the organization, effectively leveraging storytelling to drive understanding & alignment. Demonstrated ability to think critically and challenge the status quo. Experience as a people leader with the ability to lead global teams. Proven success working in a matrix environment.

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

Note: Please do not apply if your salary expectations are higher than the provided salary range or your experience is less than 3 years. If you have experience in the travel industry and have worked on hotel, car rental, or ferry booking before, then we can negotiate the package. Company Description Our company has been promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com, www.ferriesingreece.com, etc. Through the websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, tours, etc. Role Description We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes. Major Responsibilities & Job Requirements include: • Develop and implement NLP/LLM models. • Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques. • Develop and implement AI models using Python, TensorFlow, and PyTorch. • Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models). • Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems. • Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows. • Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources. • Familiarity with cloud AI solutions, such as IBM, Azure, Google & AWS.
• Work on natural language processing (NLP) tasks and build large language model (LLM) solutions for various applications. • Design and maintain SQL databases for storing and retrieving data efficiently. • Utilize machine learning and deep learning techniques to build predictive models. • Collaborate with cross-functional teams to integrate AI solutions into existing systems. • Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and Big Data solutions. • Write clean, maintainable, and efficient code when required. • Handle large datasets and perform big data analysis to extract valuable insights. • Fine-tune pre-trained LLMs on specific types of data and ensure optimal performance. • Proficiency in cloud services from Amazon AWS. • Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER. • Develop similarity functions and matching algorithms to align candidate skills with job requirements. • Experience with microservices, Flask, FastAPI, Node.js. • Expertise in Spark, PySpark for big data processing. • Knowledge of advanced techniques such as SVD/PCA, LSTM, NeuralProphet. • Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline. • Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements. Qualifications: • Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. • In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others. • Experience with database technologies and vector representations of data. • Familiarity with similarity functions and distance metrics used in matching algorithms. • Ability to design and implement custom ontologies and classification models. • Excellent problem-solving skills and attention to detail. • Strong communication and collaboration skills.
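The "similarity functions and matching algorithms" bullet above typically reduces to comparing vector representations of a job posting and a CV. Here is a hedged, self-contained sketch using bag-of-words counts and cosine similarity; production systems would use learned embeddings such as Word2Vec or BERT, and all data below is invented:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy skill "vectors" for a job and two candidate CVs.
job = Counter("python sql spark nlp".split())
cv_1 = Counter("python nlp pytorch".split())
cv_2 = Counter("java spring hibernate".split())

print(cosine(job, cv_1) > cosine(job, cv_2))  # the NLP candidate scores higher
```

Swapping the `Counter` vectors for embedding vectors leaves the ranking logic unchanged, which is why cosine similarity appears so often in matching pipelines.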

Posted 1 month ago

Apply


8.0 years

0 Lacs

Hyderābād

On-site

India - Hyderabad JOB ID: R-216648 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Engineering Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do Let’s do this. Let’s change the world. In this vital role you will manage and oversee the development of robust data architectures, frameworks, and data product solutions, while mentoring and guiding a small team of data engineers. You will be responsible for leading the development, implementation, and management of enterprise-level data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering and build enterprise-scale data frameworks and expert data engineers. This role will collaborate closely with counterparts in the US and EU.
You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient. Roles & Responsibilities: Architect and implement scalable, high-performance modern data engineering solutions (applications) that include data analysis, data ingestion, storage, data transformation (data pipelines), and analytics. Evaluate new trends in the data engineering area and build rapid prototypes. Build data solution architectures and frameworks to accelerate data engineering processes. Build frameworks to improve re-usability and reduce the development time and cost of data management & governance. Integrate AI into data engineering practices to bring efficiency through automation. Build best practices in the data engineering capability and ensure their adoption across the product teams. Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals. Lead and motivate a high-performing data engineering team to deliver exceptional results. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices. Collaborate with counterparts in the US and EU and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet the requirements. Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives. Drive adoption of the data and analytics solutions by partnering with the business stakeholders and functional IT teams in rolling out change management, trainings, communications, etc.
Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship. Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities. Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives and demonstrate leadership by contributing to strategic discussions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications. Basic Qualifications: Master’s degree and 8 to 10 years of computer science and engineering preferred, other Engineering fields will be considered OR Bachelor’s degree and 12 to 14 years of computer science and engineering preferred, other Engineering fields will be considered OR Diploma and 16 to 18 years of computer science and engineering preferred, other Engineering fields will be considered 10+ years of experience in Data Engineering, working in COE development or product building 5+ years of experience in leading enterprise-scale data engineering solution development. Experience building enterprise-scale data lake and data fabric solutions on the cloud leveraging modern approaches like Data Mesh Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies.
Hands-on experience using Databricks, Snowflake, PySpark, Python, SQL Proven ability to lead and develop high-performing data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. Preferred Qualifications: Experience in integrating AI with data engineering and building AI-ready data lakes Prior experience in data modeling, especially star-schema modeling concepts. Familiarity with ontologies, information modeling, and graph databases. Experience working with agile development methodologies such as Scaled Agile. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications SAFe for Teams certification (preferred) Databricks certifications AWS cloud certification Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Azure Data Engineer with Palantir Foundry Expertise Location: Noida/Gurgaon/Hyderabad/Bangalore/Pune Experience Level: 7+ years (Data Engineering), 2+ years (Palantir Foundry) Role Overview: We are looking for a highly skilled Azure Data Engineer with hands-on expertise in Palantir Foundry to support critical data integration and application development initiatives. The ideal candidate will have a strong foundation in Python, SQL, PySpark, and Azure services, along with proven experience in working across data pipelines, ontologies, and security configurations within the Palantir ecosystem. This role requires both technical acumen and strong communication skills to engage with cross-functional stakeholders, especially in the Oil & Gas engineering context. Key Responsibilities: Azure Data Engineering: Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, SQL, and PySpark. Ensure data quality, integrity, and governance in Azure-based data platforms. Collaborate with Product Managers and Engineering teams to support business needs using data-driven insights. Palantir Foundry Engineering: Data Integration: Build and manage pipelines; perform Python-based transformations; integrate varied source systems using code, repositories, and connections. Model Integration: Work with business logic, templated analyses, and report models to operationalize analytics. Ontology Management: Define object types, relationships, permissions, object views, and custom functions. Application Development: Build and manage Foundry applications using Workshop, Writeback, Advanced Actions, and interface customization. Security & Governance: Implement data foundation principles; manage access control and restricted views, and ensure data protection compliance. Perform ingestion, transformation, and validation within Palantir and maintain seamless integration with Azure services.
Mandatory Technical Skills: Strong proficiency in Python, SQL, and PySpark Expert in Azure Databricks, Azure Data Factory, Azure Data Lake Palantir Foundry hands-on experience, with the ability to demonstrate skills during interviews Palantir-specific capabilities: Foundry Certifications: Data Engineering & Foundational Pipeline Builder, Ontology Manager, Object Explorer Mesa language (Palantir's proprietary language) Time Series Data handling Working knowledge of Equipment & Sensor data in the Oil & Gas domain Soft Skills: Strong communication and interpersonal skills Ability to work independently and drive conversations with Product Managers and Engineers Comfortable acting as a voice of authority in cross-functional technical discussions Proven ability to operate and support complex data platforms in a production environment Nice to Have: Experience working with AI/ML models integrated in Foundry Exposure to AIP (Azure Information Protection) or related security tools Experience in Operating & Support functions across hybrid Azure-Palantir environments
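The "Time Series Data handling" skill above usually means bucketing raw sensor readings into fixed windows before analysis. A minimal plain-Python illustration with invented readings follows; it is not Foundry's actual API, only the underlying aggregation pattern such pipelines implement:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sensor readings: (ISO timestamp, sensor_id, value).
readings = [
    ("2024-01-01T00:05:00", "s1", 10.0),
    ("2024-01-01T00:40:00", "s1", 14.0),
    ("2024-01-01T01:10:00", "s1", 12.0),
]

def hourly_mean(rows):
    """Average sensor values per (sensor, hour) bucket."""
    buckets = defaultdict(list)
    for ts, sensor, value in rows:
        # Truncate the timestamp to the start of its hour.
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0)
        buckets[(sensor, hour.isoformat())].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

print(hourly_mean(readings))
```

In practice the same windowed-mean logic is expressed in PySpark or a pipeline tool, but the bucketing idea is identical.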

Posted 1 month ago

Apply


1.0 years

0 Lacs

Calcutta

On-site

About iMerit: iMerit (https://imerit.net) is a multinational company that delivers annotation, classification, and content moderation data to power the AI, Machine Learning, and data operation strategies of many of the leading AI organizations in the world. Our work encompasses a client’s journey from exploratory R&D to proof of concept to mission-critical, production-ready solutions. We leverage advanced tools, machine learning algorithms, and workflow best practices to clean, enrich, and annotate large volumes of unstructured data and unlock hidden value. In our human-empowered computing model, technology solves for throughput, while our managed workforce teams (across delivery centers in India, Bhutan, and the US) solve for accuracy through their deep expertise in Computer Vision, Natural Language Processing, and Content Services, and across verticals such as Autonomous Vehicles, Healthcare, Finance, Geospatial technologies, and many more. iMerit also creates inclusive and diverse employment in the digital IT sector - around 80% of our workforce are sourced from various impact communities and >50% are women. Role/Experience/Education: This is a 12-month full-time contract position, annotating or labeling medical terms from different medical documents and clinical encounters to produce a dataset for machine learning purposes. Requires a degree in nursing, pharmacy, social work, or medicine. One year of clinical experience is preferred; freshers may also apply. Experience with medical billing and/or transcription of prescriptions/reports/other relevant medical documents is a plus. Passion for improving lives through healthcare and a great work ethic. Experience in reading clinical notes, extracting meaningful pieces of clinical information, and coding medical terms to different medical ontologies (SNOMED, LOINC, RxNorm). Strong ability to understand the medical history of any patient. Excellent English reading comprehension and communication skills.
Computer Literacy Ability to work night shifts Okay to work from the office Benefits: Good Compensation Exposure to working with innovative companies in healthcare & AI Drop Facility Job Type: Contractual / Temporary Contract length: 12 months Pay: From ₹20,000.00 per month Application Question(s): Are you okay to work in night shift for 5 days in a week from office? Education: Bachelor's (Preferred) Experience: total work: 1 year (Preferred) Work Location: In person

Posted 1 month ago

Apply

5.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Title: Conversational AI Engineer / NLP & LLM Integration Specialist
Experience Required: 2–5 years

About the Role
We are seeking a highly motivated Conversational AI Engineer to join our team in building intelligent, AI-powered dialogue systems, chatbots, and document-based Q&A tools. This role requires a deep understanding of NLP and LLM (Large Language Model) integration, and the ability to build AI pipelines that convert unstructured data into meaningful responses. The ideal candidate should have strong experience with Python, ML libraries, and API-driven architectures. This is a hands-on, research-to-deployment role: you will play a key part in shaping the product's intelligence and language capabilities, from designing conversation flows to building semantic search systems and custom logic engines.

Key Responsibilities
- Design and implement AI-driven chatbots and Q&A systems using NLP and LLMs.
- Perform prompt engineering for LLMs such as OpenAI, Claude, or open-source models.
- Handle Excel/CSV/JSON data parsing to structure domain-specific knowledge bases.
- Build and integrate information retrieval and semantic search features.
- Apply intent classification, category detection, and dialogue management techniques.
- Deploy and optimize backend services that serve AI responses through APIs.
- Collaborate closely with designers and product teams to build domain-aware, user-friendly conversations.
- (Optional) Integrate or build astrology-based models or rule-based prediction engines.

Required Skills
- Strong experience in Natural Language Processing (NLP).
- Familiarity with LLM integration, prompt engineering, and RAG architectures.
- Practical knowledge of machine learning / deep learning.
- Strong programming skills in Python.
- Experience working with structured and unstructured data (CSV, JSON, Excel).
- Familiarity with conversational AI tools (e.g., Rasa, Dialogflow, or custom-built systems).
- Understanding of information retrieval, semantic search, and vector embeddings.
- Ability to write clean, production-ready code and deploy services using Flask/FastAPI.
- Solid understanding of user intent classification and conversational flow design.

Preferred Qualifications
- Experience building chatbots, virtual assistants, or AI-driven user interfaces.
- Prior exposure to astrology or rule-based prediction engines (a plus, but not required).
- Familiarity with knowledge graphs or domain-specific ontologies.
- Hands-on experience with vector stores (e.g., Pinecone, FAISS, Weaviate) and embedding techniques.
- Ability to work independently and communicate ideas clearly and effectively.

Location: Mohali, Punjab
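The semantic search skills this role lists come down to embedding text as vectors and ranking documents by similarity. A minimal sketch in plain Python (the toy vectors and document names are purely illustrative; in practice the embeddings would come from a model such as a sentence-transformer or an embeddings API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" keyed by document name (illustrative values only).
docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "account login":  [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # ['refund policy']
```

A real pipeline would swap the dictionary for a vector store such as FAISS or Pinecone, which perform the same ranking at scale with approximate nearest-neighbor indexes.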

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Role: AI Engineer
Employment Type: Full-Time, Permanent
Location: On-site, Bengaluru
Vacancies: Multiple Positions

Company Overview
EmployAbility.AI is an AI-driven career enablement platform dedicated to transforming how individuals and organizations navigate the future of work. We are committed to democratizing access to meaningful employment by using advanced artificial intelligence, real-time labor market data, and intelligent career pathways to bridge the gap between skills and opportunities. Our platform empowers job seekers with personalized career insights, learning recommendations, and job-matching tools while enabling organizations to make smarter hiring and workforce development decisions. By aligning talent capabilities with market demand, we help create a more inclusive, adaptive, and future-ready workforce. At EmployAbility.AI, we're not just building software; we're building solutions that make employability equitable, data-driven, and scalable.

Job Role: AI Engineer
As an AI Engineer at EmployAbility.AI, you will be at the forefront of building intelligent systems that power the platform's core functionality, from LLM-based recommendations to intelligent search and contextual assistants. You will develop and deploy state-of-the-art AI models and pipelines using LLMs, LangChain, and Retrieval-Augmented Generation (RAG) to deliver impactful, real-world solutions. You will work closely with a cross-functional team of developers, data scientists, and product managers to create scalable, production-ready AI features that enhance user experiences and drive measurable value across industries and regions.

Key Responsibilities
- Design and develop AI-driven features including conversational agents, recommendation engines, and smart search using LLMs.
- Build and integrate LangChain-based applications that leverage RAG pipelines for improved reasoning and contextual understanding.
- Fine-tune, evaluate, and optimize transformer models (BERT, GPT, LLaMA, etc.) for domain-specific use cases.
- Work with unstructured and semi-structured data (e.g., resumes, job descriptions, labor market datasets).
- Develop embedding-based search using tools like FAISS, Pinecone, or Weaviate.
- Collaborate with backend and frontend teams to integrate AI services via scalable APIs.
- Perform data preprocessing, feature engineering, and model evaluation.
- Monitor the performance of deployed models and iterate based on feedback and metrics.
- Participate in prompt engineering, experiment tracking, and continuous optimization of AI systems.
- Stay updated on the latest trends in AI/ML and contribute to internal knowledge sharing.

Education Requirements
B.Tech/B.E, M.Tech, MCA, M.Sc, MS, or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Core Technical Skills – AI Engineering

Large Language Models & NLP
- Experience with LLMs and transformer-based architectures (e.g., GPT, BERT, LLaMA).
- Hands-on with the LangChain framework and RAG (Retrieval-Augmented Generation) workflows.
- Proficiency in prompt engineering, embedding models, and semantic search.
- Experience using Hugging Face Transformers, the OpenAI API, or open-source equivalents.

Vector Stores & Knowledge Retrieval
- Experience with FAISS, Pinecone, or Weaviate for similarity search.
- Implementation of document chunking, embedding pipelines, and vector indexing.

ML/AI Development
- Strong skills in Python and ML libraries (PyTorch, TensorFlow, Scikit-learn).
- Familiarity with NLP tasks such as named entity recognition, text classification, and summarization.
- Experience with API development and deploying AI models into production environments.

Tooling & Development Practices
- Version control with Git; collaborative workflows via GitHub.
- Experiment tracking with MLflow, Weights & Biases, or equivalent.
- API testing tools (Postman, Swagger) and JSON schema validation.
- Use of Jupyter notebooks for experimentation and prototyping.

Deployment & DevOps (Basic Understanding)
- Containerization using Docker; basic orchestration knowledge is a plus.
- Cloud environments: familiarity with AWS, GCP, or Azure.
- CI/CD workflows (GitHub Actions, Jenkins).
- Monitoring tools for model performance and error tracking (Sentry, Prometheus, etc.).

Soft Skills & Work Habits
- Strong problem-solving and analytical thinking.
- Ability to work cross-functionally with technical and non-technical teams.
- Clear and concise communication of complex AI concepts.
- Team collaboration and willingness to mentor peers or juniors.
- Agile/Scrum practices using tools like Jira, Trello, and Confluence.

Bonus Skills (Good to Have)
- TypeScript or JavaScript for frontend or integration work.
- Knowledge of GraphQL, chatbot development, or multi-modal AI.
- Familiarity with AutoML, RLHF, or explainable AI.
- Experience with knowledge graphs, ontologies, or custom taxonomies.
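The "document chunking, embedding pipelines, and vector indexing" work this role describes typically begins with splitting documents into overlapping chunks before embedding them. A minimal sketch, assuming a simple character-based split (production RAG pipelines usually chunk by tokens or sentences instead; the sizes here are illustrative):

```python
def chunk_text(text, size=50, overlap=10):
    """Split text into overlapping character chunks for embedding.

    The overlap keeps context that straddles a chunk boundary
    retrievable from at least one chunk.
    """
    assert 0 <= overlap < size
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

doc = "x" * 120
print([len(c) for c in chunk_text(doc)])  # [50, 50, 40]
```

Each chunk would then be passed to an embedding model and written to a vector index (FAISS, Pinecone, Weaviate) alongside a pointer back to the source document.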

Posted 1 month ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description

Role Proficiency: Leverage expertise in a technology area (e.g., Java, Microsoft technologies, or mainframe/legacy) to design system architecture.

Outcomes
- Deliver technically sound projects across one or multiple customers, within the guidelines of the customer and UST standards and norms, based on user stories.
- Guide and review technical delivery by internal teams at the program level.
- Resolve architecture issues; deliver and own the architecture of application solutions spanning multiple technologies for high-revenue projects, complex projects, and large strategic maintenance projects.
- Architect frameworks and tools relevant to the program.

Measures of Outcomes
- Business development (number of proposals contributed to); revenue contribution (CoE)
- Delivery efficiency (Delivery)
- Audit reviews on reuse of technology
- Number of processes/frameworks/tools reused
- Number of trainings, webinars, blogs, interviews
- Number of white papers/document assets published/working prototypes
- Number of reusable frameworks/tools/artifacts created
- Technology certifications
- Customer feedback on overall technical quality (zero technology-related escalations)
- Number of reviews and audits
- Domain certifications (e.g., LOMA) (Delivery)

Outputs Expected

Knowledge Management & Capability Development:
- Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles.
- Conduct and facilitate knowledge-sharing and learning sessions across the team.
- Gain industry-standard certifications on technology in the area of expertise.
- Support technical skill building (including hiring and training) for the team based on inputs from the Project Manager/RTEs.
- Mentor new team members in technical areas.
- Gain and cultivate domain expertise to provide the best, optimized solution to the customer (Delivery).
- Create architecture onboarding/KT documents for the program.

Requirement Gathering and Analysis:
- Work with customers/business owners and other teams to collect, analyze, and understand the requirements, including NFRs; define NFRs.
- Analyze gaps and trade-offs based on the current system context and industry practice while clarifying the requirements with the customer.
- Define the systems and sub-systems that make up the program.

People Management:
- Set goals and manage the performance of technical specialists/team engineers.
- Provide career guidance and mentor technical specialists.

Alliance Management:
- Identify alliance partners based on an understanding of service offerings and client requirements specific to the program.
- In collaboration with Architect 2/3, create a compelling business case around the offerings.
- Conduct beta testing of the offerings and assess relevance to the program.

Technology Consulting:
- In collaboration with the Solution Architects, analyze the application and technology landscape, processes, and tools to arrive at the architecture options that best fit the client program.
- Analyze costs vs. benefits of solution options.
- Support Solution Architects in creating a technology/architecture roadmap for the client.
- Define the architectural strategy for the program.

Innovation and Thought Leadership:
- Participate in internal and external forums (seminars, paper presentations, etc.) to showcase UST capabilities and insights under the guidance of senior team members.
- Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency.
- Identify business opportunities to create reusable components/accelerators, and reuse existing components and best practices.
- Support the patent filing process for the IP assets created (applicable to some CoEs).

Sales Support and Project Estimation:
- Support the development of RFPs and collaterals for proposals from a technical architecture, estimation, and risk perspective.
- Conduct demos, and arrange demos based on client profiles if required.
- Anchor proposal development with cross-linkages across multiple competency units to arrive at a coherent solution proposal, focusing on unique value propositions and clear differentiators.

Architecture Solution Definition & Design:
- Develop and enhance the architecture (application/technical/infrastructure, as applicable) to meet functional and non-functional requirements, aligned to industry best practices.
- Carry out program design (including data modeling, application design, infrastructure design, and team structure) and capacity sizing, working with the Program Release Train Engineer to meet the requirements and SLAs of the target state and in-transition states as applicable.
- Identify proof-of-concept (POC) testing needs and conduct POCs as applicable.
- Identify the need for accelerators or frameworks and develop them as applicable, specific to the engagement.
- Identify key technical metrics to measure SLA/requirements compliance.
- Define, adopt, and create the required documentation on standards and guidelines.
- Contribute to the library of reusable frameworks, tools, and artefacts.

Project Management Support:
- Assist the PM/Scrum Master/Program Manager in identifying technical risks and developing mitigation strategies.

Stakeholder Management:
- Monitor the concerns of internal stakeholders (such as Product Managers and RTEs) and external stakeholders (such as client architects) on architecture aspects; follow through on commitments and achieve timely resolution of issues.
- Conduct initiatives to meet client expectations.
- Work to expand your professional network in the client organization at the team and program levels.

Business and Technical Research:
- Analyze market trends, client requirements, and secondary research; identify new ideas and provide inputs to Solution Architects for developing blueprints/PoCs.

Brand Building:
- Organize and participate in events (webinars, boot camps, seminars, conferences, client conferences) to showcase capability.
- Identify opportunities with Solution Architects to cross-solutionize across projects and programs within or outside the BU.

Asset Development and Governance:
- Understand the plan for asset development; design assets or artefacts per the plan (if required).
- Conduct pilot runs/reviews to validate the assets/artefacts as feasible.
- Track utilization of reusable assets/architecture components/blueprints across the organization.
- Create case studies of programs/projects, working with internal stakeholders.

Skill Examples
- Use domain and industry knowledge to understand business requirements, create POCs to meet those requirements, and contextualize the solution to the industry under guidance.
- Create architecture, interact with SMEs at various stages of development, translate business requirements to system requirements, and perform impact analysis of changes in requirements.
- Use technology knowledge to create POCs and (reusable) assets under the guidance of the specialist; apply best practices in your own area of work and understand the IT strategy for the project; create white papers under guidance; help with performance troubleshooting and other complex troubleshooting; define, decide, and defend the technology choices made; review solutions under guidance.
- Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST; provide inputs to the specialist on creating a technology roadmap for the client; research new products, trends, and best practices.
- Use knowledge of architecture concepts and principles to create architecture catering to functional and non-functional requirements under the guidance of the specialist; re-engineer existing architecture solutions under guidance; provide training on architecture best practices under guidance.
- Use knowledge of design patterns, tools, and principles to create high-level designs for the given requirements; independently evaluate multiple design options and choose the appropriate option for the best possible trade-offs; conduct knowledge sessions to enhance the team's design capabilities; review the low-level/high-level designs created by specialists for efficiency (hardware and memory consumption, memory leaks, etc.) and maintainability.
- Use knowledge of software development processes, tools, and techniques to identify and assess incremental improvements to the development process, methodology, and tools; take technical responsibility for all stages in the software development process; write optimal code with a clear understanding of memory leakage and its impact; implement global programming and development standards and guidelines to develop points of view and new technological ideas.
- Use knowledge of project management and Agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST; identify risks and mitigation strategies.
- Use knowledge of the project governance framework to support the development of communication protocols, escalation matrices, and reporting mechanisms for small/medium projects/programs as defined within UST.
- Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders.
- Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules/small projects with detailed requirements or user stories in place.
- Use knowledge of knowledge management tools and techniques to leverage existing material and reusable assets in the knowledge repository; independently create and update knowledge artefacts; create and track project-specific KT plans; provide training to others; write white papers/blogs at the internal level; write technical documents/user-understanding documents at the end of the project.
- Use knowledge of technical standards, documentation, and templates to create documentation appropriate for project needs, as well as for reusable assets, best practices, and case studies.
- Use knowledge of requirement gathering and analysis to support the creation of requirements documents or user stories and high-level process maps; identify gaps on the basis of the business process; analyze responses to clarification questions to produce design documents; create/review estimates and solutions at the project/program level; create/review design artefacts; update resourcing and schedules based on identified impacted areas; create designs specifically for the non-functional requirements.
- Use knowledge of solution structuring to carve out a simple solution/POC for a customer based on their needs, and review the proposal for completeness.

Knowledge Examples
- Domain/industry knowledge: Basic knowledge of standard business processes within the relevant industry vertical and the customer's business domain.
- Technology knowledge: Working knowledge of more than one technology area related to your own area of work (e.g., Java/JEE 5+, Microsoft technologies, or mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability; consideration of low-level details such as data structures, algorithms, APIs, libraries, and best practices for one technology stack, plus configuration parameters for successful deployment and for high performance within that stack.
- Technology trends: Working knowledge of technology trends related to one technology stack, and awareness of trends related to at least two technologies.
- Architecture concepts and principles: Working knowledge of standard architectural principles, models, patterns (e.g., SOA, N-tier, EDA), and perspectives (e.g., TOGAF, Zachman); integration architecture, including input and output components; existing integration methodologies and topologies; source and external system non-functional requirements; data architecture; deployment architecture; architecture governance.
- Design patterns, tools, and principles: Specialized knowledge of design patterns, design principles, practices, and design tools; knowledge of documenting designs using tools like EA.
- Software development processes, tools, and techniques: Thorough knowledge of the end-to-end SDLC process (Agile and traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.).
- Project management tools and techniques: Working knowledge of the project management process (project scoping, requirements management, change management, risk management, quality assurance, disaster management, etc.) and tools (MS Excel, MPP, client-specific timesheets, capacity-planning tools, etc.).
- Project management: Working knowledge of the project governance framework and RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics.
- Estimation and resource planning: Working knowledge of estimation and resource-planning techniques (e.g., the TCP estimation model) and UST-specific estimation templates.
- Knowledge management: Working knowledge of industry knowledge management tools (such as portals and wikis) and UST and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT).
- Technical standards, documentation, and templates: Working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications).
- Requirement gathering and analysis: Working knowledge of requirements gathering for functional and non-functional requirements, analysis tools (functional flow diagrams, activity diagrams, blueprints, storyboards), techniques (business analysis, process mapping, etc.), and requirements management tools (e.g., MS Excel), plus basic knowledge of functional requirements gathering; specifically, identify architectural concerns and document them as part of IT requirements, including NFRs.
- Solution structuring: Working knowledge of service offerings and products.

Additional Comments

Job Description: ServiceNow Solution Architect

Basic Programming Skills and Concepts:
- Understanding of programming fundamentals and JavaScript:
  - Data types, control structures, functions, and modules
  - Object-oriented programming (OOP) concepts (classes, objects, inheritance, polymorphism)
  - Syntax and semantics
  - DOM manipulation and events
  - Asynchronous programming (callbacks, promises, async/await)
  - Error handling and debugging
- Familiarity with HTML, CSS, and XML

Web Development / Front-End Skills:
- Understanding of web development concepts such as responsive web design (RWD), single-page applications (SPAs) and client-side rendering, web performance optimization (WPO), and accessibility
- Understanding of front-end frameworks and libraries:
  - React, Angular, or Vue.js
  - Bootstrap or other CSS frameworks

ServiceNow-Specific Skills:
- In-depth knowledge of ServiceNow development:
  - ServiceNow architecture and data model
  - Table and field structures
  - Business rules, workflows, and approvals
  - Scripting (JavaScript, Groovy, etc.)
  - Integration with external systems (REST, SOAP, etc.)
- Experience with ServiceNow development tools:
  - Studio (application development, scripting, and debugging)
  - Update Sets (managing and deploying changes)
  - ServiceNow APIs (REST, SOAP, etc.)
- Expertise with at least three ServiceNow modules and applications, such as ITSM, ITAM, ITOM, CSM, FSM, HRSD, or SPM

Solution Architect-Specific Skills:
- Architectural leadership: Lead the design and implementation of complex ServiceNow solutions, including custom applications and integrations; develop architectural blueprints, roadmaps, and solution designs to meet business requirements.
- Technical strategy: Align ServiceNow solutions with organizational goals and strategic initiatives; identify emerging trends in ServiceNow and related technologies to drive innovation.
- Process optimization: Design and implement process improvements and best practices to optimize the use of ServiceNow; drive standardization and consistency across ServiceNow implementations.
- Stakeholder collaboration: Work closely with business leaders, IT teams, and stakeholders to gather requirements and translate them into technical solutions; provide technical leadership and guidance to development teams and ensure alignment with architectural standards.
- Governance and compliance: Ensure solutions comply with organizational policies, standards, and regulations; establish governance frameworks for managing ServiceNow development and deployment.

Database Skills:
- Understanding of database concepts
- Proficiency in database management systems
- Experience with database design and development

Information Architecture Skills:
- Understanding of information architecture principles: taxonomy and ontology; content modeling and metadata; user experience (UX) and user interface (UI) design.
- Experience with information architecture tools and techniques: creating and managing taxonomies and ontologies; designing and implementing content models and metadata schemas; developing and maintaining information architecture documentation and standards.
- Familiarity with information architecture frameworks and methodologies: Zachman Framework; TOGAF; Information Architecture Institute (IAI) standards.

Additional Technical Requirements:
- Experience with Agile development methodologies (Scrum or Kanban)
- Familiarity with version control systems (Git)
- Knowledge of cloud-based technologies (AWS or Azure)
- Understanding of DevOps practices (continuous integration and continuous deployment)
- Experience with IT service management frameworks (ITIL)
- Familiarity with security best practices and compliance regulations
- Certification in ServiceNow development or administration

Nice to Have:
- Experience with other programming languages, such as Java or Python
- Knowledge of machine learning and artificial intelligence concepts
- Familiarity with containerization and orchestration

Soft Skills:
- Ability to troubleshoot and resolve complex technical issues
- Ability to work effectively with both technical and non-technical stakeholders
- Ability to work in a fast-paced environment and adapt to changing priorities and requirements
- Commitment to delivering high-quality solutions

Skills: ServiceNow, JavaScript, REST, SOAP
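The taxonomy work described under the information architecture skills above can be pictured as a tree of terms where each term points to a broader parent. A minimal sketch in plain Python (the category names are purely illustrative; a real implementation would live in a metadata management tool or a CMDB-backed structure):

```python
# Minimal taxonomy: each term maps to its broader parent (None = root).
# The category names here are illustrative only.
taxonomy = {
    "laptop": "computer",
    "computer": "electronics",
    "phone": "electronics",
    "electronics": None,
}

def ancestors(term):
    """Walk up the parent chain, returning broader terms in order."""
    path = []
    parent = taxonomy.get(term)
    while parent is not None:
        path.append(parent)
        parent = taxonomy.get(parent)
    return path

print(ancestors("laptop"))  # ['computer', 'electronics']
```

Ancestor lookups like this are what let a content model tag an item once ("laptop") yet have it surface under every broader category in search and navigation.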

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Mohali, Punjab

On-site

Job Title: Conversational AI Engineer / NLP & LLM Integration Specialist
Experience Required: 2–4 years

About the Role
We are seeking a highly motivated Conversational AI Engineer to join our team in building intelligent, AI-powered dialogue systems, chatbots, and document-based Q&A tools. This role requires a deep understanding of NLP and LLM (Large Language Model) integration, and the ability to build AI pipelines that convert unstructured data into meaningful responses. The ideal candidate should have strong experience with Python, ML libraries, and API-driven architectures. This is a hands-on, research-to-deployment role: you will play a key part in shaping the product's intelligence and language capabilities, from designing conversation flows to building semantic search systems and custom logic engines.

Key Responsibilities
- Design and implement AI-driven chatbots and Q&A systems using NLP and LLMs.
- Perform prompt engineering for LLMs such as OpenAI, Claude, or open-source models.
- Handle Excel/CSV/JSON data parsing to structure domain-specific knowledge bases.
- Build and integrate information retrieval and semantic search features.
- Apply intent classification, category detection, and dialogue management techniques.
- Deploy and optimize backend services that serve AI responses through APIs.
- Collaborate closely with designers and product teams to build domain-aware, user-friendly conversations.
- (Optional) Integrate or build astrology-based models or rule-based prediction engines.

Required Skills
- Strong experience in Natural Language Processing (NLP).
- Familiarity with LLM integration, prompt engineering, and RAG architectures.
- Practical knowledge of machine learning / deep learning.
- Strong programming skills in Python.
- Experience working with structured and unstructured data (CSV, JSON, Excel).
- Familiarity with conversational AI tools (e.g., Rasa, Dialogflow, or custom-built systems).
- Understanding of information retrieval, semantic search, and vector embeddings.
- Ability to write clean, production-ready code and deploy services using Flask/FastAPI.
- Solid understanding of user intent classification and conversational flow design.

Preferred Qualifications
- Experience building chatbots, virtual assistants, or AI-driven user interfaces.
- Prior exposure to astrology or rule-based prediction engines (a plus, but not required).
- Familiarity with knowledge graphs or domain-specific ontologies.
- Hands-on experience with vector stores (e.g., Pinecone, FAISS, Weaviate) and embedding techniques.
- Ability to work independently and communicate ideas clearly and effectively.

Job Type: Full-time
Pay: ₹15,000.00 - ₹40,000.00 per month
Benefits: Paid sick time
Schedule: Day shift
Experience: AI: 1 year (Preferred)
Location: Mohali, Punjab (Preferred)
Work Location: In person
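Intent classification, which this role lists in both the responsibilities and the required skills, often starts as a simple keyword-overlap baseline before graduating to an ML classifier or an LLM prompt. A minimal sketch (the intent names and keyword sets are purely illustrative):

```python
import re

# Illustrative intents and trigger keywords; a production system would
# train a classifier or prompt an LLM rather than match keywords.
INTENT_KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "order_status": {"order", "tracking", "shipped", "delivery"},
    "refund": {"refund", "return", "money"},
}

def classify_intent(utterance):
    """Return the intent whose keyword set best overlaps the utterance."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify_intent("Where is my order?"))  # order_status
```

The "fallback" branch matters for dialogue management: an unclassified utterance is typically routed to a clarification prompt rather than answered blindly.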

Posted 1 month ago

Apply

5.0 years

4 - 5 Lacs

Hyderabad

On-site

Lead Knowledge Engineer
Hyderabad, India | Data Management | 311636

Job Description
About The Role: Grade Level (for internal use): 11

The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation, interconnected data intelligence organization.

The Team: The Knowledge Engineering team within Data Strategy and Governance helps lead fundamental organizational and operational change, driving our linked data, open data, and data governance strategy both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights, and analytics products powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor, creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility, with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities:
- Develop, implement, and continually enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery.
- Design and prototype data/software engineering solutions that scale the construction, maintenance, and consumption of semantic artefacts and the interconnected data layer for various application contexts.
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met.
- Influence the strategic semantic vision, roadmap, and next-generation architecture.
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains.
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment.
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools.

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives, in order to effectively persuade and influence.
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL), and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores). Advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred.
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environments; skilled in all phases from conceptualization to optimization.
- Programming skills in a mainstream programming language (Python, Java, JavaScript); experience with cloud services (AWS, Google Cloud, Azure) is a great bonus.
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management).

S&P Global Enterprise Data Organization is a unified, cross-divisional team focused on transforming S&P Global's data assets. We streamline processes and enhance collaboration by integrating diverse datasets with advanced technologies, ensuring efficient data governance and management.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We’re more than 35,000 strong worldwide, so we’re able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world’s leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you (and your career) need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories - United States of America), DTMGOP103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 311636
Posted On: 2025-05-14
Location: Hyderabad, Telangana, India
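The semantic web stack this listing asks for (RDF, SPARQL, knowledge graphs) rests on one simple model: every fact is a subject-predicate-object triple, and queries are pattern matches over those triples. A minimal, illustrative sketch of that idea in plain Python; the entities and relations below are invented examples, and a real system would use a library such as RDFLib and a proper triple store:

```python
# The RDF triple model, illustrated: facts are (subject, predicate, object)
# tuples, queried by pattern matching. All names here are made-up examples.
TRIPLES = {
    ("ex:BrentCrude", "rdf:type", "ex:Commodity"),
    ("ex:BrentCrude", "ex:tradedOn", "ex:ICE"),
    ("ex:HenryHubGas", "rdf:type", "ex:Commodity"),
    ("ex:Commodity", "rdfs:subClassOf", "ex:Asset"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    similar in spirit to a SPARQL basic graph pattern."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?s WHERE { ?s rdf:type ex:Commodity }
commodities = sorted(t[0] for t in match(p="rdf:type", o="ex:Commodity"))
print(commodities)  # ['ex:BrentCrude', 'ex:HenryHubGas']
```

The wildcard-pattern query is the core of SPARQL evaluation; ontology languages like RDFS and OWL then add inference on top (e.g., deriving that Brent crude is an `ex:Asset` via the subclass triple).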

Posted 1 month ago

Apply


0 Lacs

Noida, Uttar Pradesh, India

On-site

Lead Financial Data Advisor

What Makes Us, Us
Join some of the most innovative thinkers in FinTech as we lead the evolution of financial technology. If you are an innovative, curious, collaborative person who embraces challenges and wants to grow, learn, and pursue outcomes with our prestigious financial clients, say Hello to SimCorp! At its foundation, SimCorp is guided by our values: caring, customer success-driven, collaborative, curious, and courageous. Our people-centered organization focuses on skills development, relationship building, and client success. We take pride in cultivating an environment where all team members can grow, feel heard, valued, and empowered. If you like what we’re saying, keep reading!

WHY THIS ROLE IS IMPORTANT TO US
SimCorp DMS is a complete data management solution for pricing and reference data, provided as a Managed Service. The SimCorp DMS Advisory and change management service is a team of industry and data management experts who consistently monitor for any industry, regulatory, and vendor feed changes that may affect our clients’ business operations. The Advisory Service regularly informs clients of these changes and how best to adapt. As a Senior Data Advisor, you will be responsible for client onboarding, monitoring vendor changes, maintaining the standard data model, and orchestrating change management activities for SimCorp DMS clients by liaising with internal service teams from IT and Operations. You will report directly to the head of the Data Advisory team.

What You Will Be Responsible For
• Create and maintain data standards (lookups, mappings, conventions, and rules), data models, and data ontologies (how data objects relate to each other).
• Maintain metadata documentation covering data models, data interpretation, data lineage, and utilization along the key processes, and make it available to internal and external stakeholders.
• Monitor, track, and assess all vendor-related changes that impact SimCorp DMS and Enterprise Data Manager clients.
• Facilitate Change & Control Board (CCB) governance with internal service teams to prioritize and deliver standard and client change requests.
• Design and implement a robust client onboarding framework for seamless DMS client onboarding.
• Serve as the advisor and quality gateway regarding data and change integrations related to data.
• Support the design and maintenance of overall data test concepts, and conduct regular data tests to assure data and service quality.
• Support DMS Operations on various process improvement initiatives.
• Advise clients on data needs and data usage for new requirements.
• Build and maintain in-depth know-how on industry and regulatory changes within data management. This is a client-centric role that involves regular interaction with our valued customers.
• Continuously develop a comprehensive understanding of the latest industry and regulatory trends in the data management space, and promote best practices within the SimCorp community to benefit our clients.

What We Value
Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important. We expect you to be good at several of the following, and to be able to (and interested in) learning the rest.
• A relevant bachelor’s degree or equivalent in finance
• Relevant experience in change management with data vendor feeds (Bloomberg, IDC, Reuters, etc.)
• Good knowledge of financial products and markets, including derivatives, structured instruments, etc.
• Experience in requirements engineering, data modelling, and a full SDLC approach
• A structured approach, attention to detail, a service mindset, and solid organizational and interpersonal skills
• Process management, control, and design experience are beneficial but not mandatory
• Relevant experience in Data Management and Business Analysis
• Strong knowledge of various investment/trade lifecycle activities
• The ability to collaborate easily with various departments to ensure client success
• Clear verbal and written communication to engage and inform various stakeholders

Benefits
At SimCorp, we believe in rewarding and supporting our employees. We offer an attractive salary package, including a robust bonus scheme, comprehensive healthcare (medical insurance, pension plans, and free transportation), and a major emphasis on work-life balance with flexible work hours and a hybrid work model. We are proud to be recognized as a "Great Place to Work" and are committed to encouraging individual growth with personalized development plans and opportunities for career advancement.

NEXT STEPS
Please send us your application in English via our career site as soon as possible; we process incoming applications continually. Please note that only applications sent through our system will be processed. At SimCorp, we recognize that bias can unintentionally occur in the recruitment process. To uphold fairness and equal opportunities for all applicants, we kindly ask you to exclude personal data such as a photo, age, or any non-professional information from your application. Thank you for aiding us in our endeavor to mitigate bias in our recruitment process. For any questions, you are welcome to contact Swati Pal (Swati.pal@Simcorp.com), Talent Acquisition Partner, via email. If you are interested in being a part of SimCorp but are not sure this role is suitable, submit your CV anyway. SimCorp is on an exciting growth journey, and our Talent Acquisition Team is ready to help you discover the right role for you. The approximate time to consider your CV is three weeks. We are eager to continually improve our talent acquisition process and make everyone’s experience positive and valuable.
Therefore, during the process we will ask you to provide your feedback, which is highly appreciated.

Who We Are
For over 50 years, we have worked closely with investment and asset managers to become the world’s leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds. SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients. SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.
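The data standards work this role describes (lookups, mappings, conventions, and rules) often reduces to maintaining controlled vocabularies and validating incoming vendor values against them, with unmapped values routed to data stewards. A small hypothetical sketch; the vendor names and codes are invented for illustration, not taken from any real Bloomberg, IDC, or Reuters feed:

```python
# Hypothetical lookup table mapping vendor-specific instrument-type
# codes onto one internal standard. All codes are invented examples.
VENDOR_TO_STANDARD = {
    ("vendorA", "EQTY"): "Equity",
    ("vendorA", "CORP"): "CorporateBond",
    ("vendorB", "EQ"):   "Equity",
}

def standardize(vendor, code):
    """Map a vendor code to the internal standard value, or flag it
    for data-steward review when no mapping rule exists yet."""
    try:
        return VENDOR_TO_STANDARD[(vendor, code)]
    except KeyError:
        return f"UNMAPPED:{vendor}:{code}"

print(standardize("vendorB", "EQ"))    # Equity
print(standardize("vendorB", "CNVB"))  # UNMAPPED:vendorB:CNVB
```

Surfacing unmapped values explicitly, rather than silently dropping them, is what makes a lookup like this auditable when a vendor adds a new code to its feed.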

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

Note: Please do not apply if your salary expectations are higher than the provided salary range or your experience is less than 3 years. If you have experience in the travel industry and have worked on hotel, car rental, or ferry booking before, then we can negotiate the package.

Company Description
Our company has been promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com and www.ferriesingreece.com. Through the websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, tours, etc.

Role Description
We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes.

Major Responsibilities & Job Requirements include:
• Develop and implement NLP/LLM models.
• Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques.
• Develop and implement AI models using Python, TensorFlow, and PyTorch.
• Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models).
• Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems.
• Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows.
• Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources.
• Familiarity with cloud AI solutions, such as IBM, Azure, Google & AWS.
• Work on natural language processing (NLP) tasks and create large language models (LLMs) for various applications.
• Design and maintain SQL databases for storing and retrieving data efficiently.
• Utilize machine learning and deep learning techniques to build predictive models.
• Collaborate with cross-functional teams to integrate AI solutions into existing systems.
• Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and Big Data solutions.
• Write clean, maintainable, and efficient code when required.
• Handle large datasets and perform big data analysis to extract valuable insights.
• Fine-tune pre-trained LLMs on specific types of data and ensure optimal performance.
• Proficiency in cloud services from Amazon AWS.
• Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER.
• Develop similarity functions and matching algorithms to align candidate skills with job requirements.
• Experience with microservices, Flask, FastAPI, and Node.js.
• Expertise in Spark and PySpark for big data processing.
• Knowledge of advanced techniques such as SVD/PCA, LSTM, and NeuralProphet.
• Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline.
• Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements.

Qualifications:
• Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
• In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others.
• Experience with database technologies and vector representation of data.
• Familiarity with similarity functions and distance metrics used in matching algorithms.
• Ability to design and implement custom ontologies and classification models.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
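The "similarity functions and matching algorithms to align candidate skills with job requirements" bullet can be illustrated with the simplest possible version: bag-of-words term counts compared by cosine similarity. A stdlib-only sketch with made-up CV and job texts; a production matcher would use embeddings such as Word2Vec or BERT, as the listing itself notes:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vectorize(text: str) -> Counter:
    """Naive whitespace tokenization into a term-count vector."""
    return Counter(text.lower().split())

# Invented example texts for illustration only.
job     = vectorize("python nlp machine learning ocr")
cv_good = vectorize("experienced in python nlp and ocr pipelines")
cv_weak = vectorize("java backend microservices")

print(cosine(job, cv_good))  # positive: shared terms python, nlp, ocr
print(cosine(job, cv_weak))  # 0.0: no terms in common
```

Ranking candidates by this score against a job vector is the skeleton of a matching algorithm; swapping the term-count vectors for dense embedding vectors changes the representation but not the cosine-ranking logic.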

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI Engineer – Generative AI

Job Summary
At EXL, we are developing solutions for some of the most exciting and challenging AI and NLP business problems. If you want to push the existing AI (NLP) knowledge boundaries, then we have an excellent opportunity for you. We are looking for AI Engineers who can develop and deploy state-of-the-art, scalable, and optimized AI systems, particularly in Generative AI driven by Large Language Models. As an AI Engineer, you are expected to have a keen interest in AI (NLP) and Machine Learning and to stay current with the latest developments in this rapidly changing field. The major duty is to develop and package Generative AI solutions for business problems using Large Language Models and related technologies (vector DBs, embedders, memory, prompt managers, response managers, etc.) in production.

Job Responsibilities
• Analyze text-based data and apply data science techniques to extract meaningful insights.
• Design, develop, and implement LLM-based solutions for a wide range of use cases involving tasks like classification, extraction, summarization, conversation, search, and generation on unstructured text data.
• Create and maintain state-of-the-art, scalable AI solutions in Python/Java/Scala for multiple business problems. This involves:
  - Choosing the most appropriate NLP model based on business needs and available data
  - Performing data exploration and innovative feature engineering
  - Fine-tuning a variety of NLP models, including LLMs
  - Augmenting models by integrating domain-specific ontologies and/or external databases
  - Reporting and monitoring the solution outcome
• Work experience with document-oriented databases such as MongoDB, vector DBs, etc.
• Collaborate with the Engineering team to deploy AI (NLP) solutions in production; ability to package AI solutions as APIs.
• Experience with developing and deploying AI systems on clouds like AWS or Azure.
• Interact with clients and internal business teams to assess solution feasibility as well as design and develop solutions.
• Open to working across different domains: Insurance, Healthcare, Financial Services, etc.

Education & Qualifications
Number of years of work experience: 2-4 years

Required Skills
• Experience (including graduate school) in training and deploying NLP models, LLMs, and related technologies.
• Expertise in transformer-based state-of-the-art NLP techniques.
• Experience with tools in the generative AI development stack, such as LangChain, LlamaIndex, Chroma, etc.
• Applied experience of machine learning algorithms using Python/Java/Scala.
• Organized, self-motivated, disciplined, and detail-oriented.
• Production-level coding experience in Python/Java is required.
• Ability to stay up to date with the latest advancements in Generative AI and Large Language Model research.
• Experience with one or more deep learning frameworks, including PyTorch, TensorFlow, Caffe, MXNet.
• Experience with using cloud technologies on AWS/Microsoft Azure is a plus.

What We Offer
EXL Digital offers an exciting, fast-paced, and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class AI/ML and NLP consultants. You do not just spend your time crunching data, doing research, and coding; you develop yourself as a seasoned data science expert with applied industry knowledge. You can expect to learn many aspects of the businesses that our clients engage in.
We provide you an opportunity to work on a variety of use cases across different industries: Insurance, Healthcare, Banking & Finance, and many more. Projects drive you to go beyond a lab setup and make a real impact on live use cases through production deployments. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth. AI/ML and NLP require different skill sets at different levels within the organization. At EXL Digital, we invest heavily in training you in all aspects, including AI/ML/NLP and cloud tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor.
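The Generative AI stack this listing describes (vector DBs, embedders, prompt managers) follows one core pattern: embed documents, retrieve the ones closest to the query, and pack them into the LLM prompt. A toy retrieval-augmented-generation sketch; word-overlap scoring stands in for a real embedding model and vector DB, and all document text is invented for illustration:

```python
# Toy RAG retrieval step: score documents against the query and build
# a grounded prompt. A real pipeline would use an embedding model and
# a vector DB (e.g., Chroma) instead of word overlap.
DOCS = [
    "The policy covers fire damage up to the insured amount.",
    "Claims must be filed within 30 days of the incident.",
    "Premiums are due on the first of every month.",
]

def score(query, doc):
    """Crude relevance: number of words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, k=2):
    """Return the k highest-scoring documents for the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    """Assemble retrieved context plus the question into an LLM prompt."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When must claims be filed?"))
```

The prompt string would then be sent to an LLM; grounding the answer in retrieved context is what distinguishes RAG from asking the model directly.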

Posted 1 month ago

Apply
