5.0 years
3 - 8 Lacs
Hyderābād
Remote
DESCRIPTION
The role is based in Munich, Germany (this is not a remote opportunity). We offer immigration and relocation support.

The vision of the Ontology Product Knowledge Team is to provide a standardized, semantically rich, easily discoverable, extensible, and universally applicable body of product knowledge that can be consistently utilized across customer shopping experiences, selling partner listing experiences, and internal enrichment of product data. We aim to make product knowledge compelling, easy to use, and feature rich. Our work to build comprehensive product knowledge allows us to semantically understand a customer's intent, whether that is a shopping mission or a seller offering products. We strive to make these experiences more intuitive for all customers.

As an Ontologist, you work on a global team of knowledge builders to deliver world-class, intuitive, and comprehensive taxonomy and ontology models to optimize product discovery for Amazon web and mobile experiences. You collaborate with business partners and engineering teams to deliver knowledge-based solutions to enable product discoverability for customers. In this role, you will directly impact the customer experience as well as the company's product knowledge foundation.
Tasks and Responsibilities:
• Develop logical, semantically rich, and extensible data models for Amazon's extensive product catalog
• Ensure our ontologies provide comprehensive domain coverage and are available for both human and machine ingestion and inference
• Create new schema using generative artificial intelligence (generative AI) models
• Analyze website metrics and product discovery behaviors to make data-driven decisions on optimizing our knowledge graph data models globally
• Expand and refine data retrieval techniques to utilize our extensive knowledge graph
• Contribute to team goal setting and future state vision
• Drive and coordinate cross-functional projects with a broad range of merchandisers, engineers, designers, and other groups, which may include architecting new data solutions
• Develop team operational excellence programs, data quality initiatives, and process simplifications
• Evangelize ontology and semantic technologies within and across teams at Amazon
• Develop and refine data governance and processes used by global Ontologists
• Mentor and influence peers

Inclusive Team Culture: Our team has a global presence: we celebrate diverse cultures and backgrounds within our team and our customer base. We are committed to furthering our culture of inclusion, offering continuous access to internal affinity groups as well as highlighting diversity programs.

Work/Life Harmony: Our team believes that striking the right balance between work and your outside life is key. Our work is not removed from everyday life, but instead is influenced by it. We offer flexibility in working hours and will work with you to facilitate your own balance between your work and personal life.

Career Growth: Our team cares about your career growth, from your initial company introduction and training sessions, to continuous support throughout your entire career at Amazon. We recognize each team member as an individual, and we will build on your skills to help you grow.
We have a broad mix of experience levels and tenures, and we are building an environment that celebrates knowledge sharing.

Perks: You will have the opportunity to support CX used by millions of customers daily and to work with data at a scale very few companies can offer. We have offices around the globe, and you will have the opportunity to be considered for global placement. You'll receive on-the-job training and group development opportunities.

BASIC QUALIFICATIONS
• Degree in Library Science, Information Systems, Linguistics, or equivalent professional experience
• 5+ years of relevant work experience in ontology and/or taxonomy roles
• Proven skills in data retrieval and data research techniques
• Ability to quickly understand complex processes and communicate them in simple language
• Experience creating and communicating technical requirements to engineering teams
• Ability to communicate to senior leadership (Director and VP levels)
• Experience with generative AI (e.g. creating prompts)
• Knowledge of Semantic Web technologies (RDF(S), OWL), query languages (SPARQL), and validation/reasoning standards (SHACL, SPIN)
• Knowledge of open-source and commercial ontology engineering editors (e.g. Protege, TopQuadrant products, PoolParty)
• Detail-oriented problem solver who is able to work in a fast-changing environment and manage ambiguity
• Proven track record of strong communication and interpersonal skills
• Proficient English language skills

PREFERRED QUALIFICATIONS
• Master's degree in Library Science, Information Systems, Linguistics, or other relevant fields
• Experience building ontologies in the e-commerce and semantic search spaces
• Experience working with schema-level constructs (e.g. higher-level classes, punning, property inheritance)
• Proficiency in SQL, SPARQL
• Familiarity with the software engineering life cycle
• Familiarity with ontology manipulation programming libraries
• Exposure to data science and/or machine learning, including graph embeddings

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
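The core of the work described above is subclass reasoning over a taxonomy: inferring that a product belongs to every category above it. A minimal sketch of that inference in plain Python (the class names are hypothetical; a real system would express this in RDFS/OWL and query it with SPARQL):

```python
# Transitive subclass inference over a toy product taxonomy.
# Class names are illustrative, not Amazon's actual ontology.
SUBCLASS_OF = {
    "RunningShoe": "Shoe",
    "Shoe": "Footwear",
    "Footwear": "Product",
    "Laptop": "Electronics",
    "Electronics": "Product",
}

def ancestors(cls):
    """Return all superclasses of cls, following subclass links transitively."""
    seen = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        seen.append(cls)
    return seen

def is_a(cls, ancestor):
    """True if cls is the ancestor itself or a (transitive) subclass of it."""
    return cls == ancestor or ancestor in ancestors(cls)

print(ancestors("RunningShoe"))        # ['Shoe', 'Footwear', 'Product']
print(is_a("RunningShoe", "Product"))  # True
```

This is the same inference an OWL reasoner performs over `rdfs:subClassOf` chains, which is what lets a shopping query for "footwear" surface running shoes without explicit tagging.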
Posted 2 hours ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About RocketFrog.ai: RocketFrog.ai is an AI Studio for Business, engineering competitive advantage through cutting-edge AI solutions in Healthcare, Pharma, BFSI, Hi-Tech, and Consumer Services. From Agentic AI and deep learning models to full-stack AI-first product development, we help enterprises translate innovation into measurable business impact. 🚀 Ready to take a Rocket Leap with Science?

Role Overview: We are seeking a structured, analytical, and forward-thinking AI Product Analyst with 3–5 years of experience in the software industry. This role is ideal for individuals who excel at making sense of complexity: organizing fragmented information, designing schemas, and building structured representations to power Agentic AI products. You'll work at the intersection of knowledge modeling, intelligent workflow design, and prompt engineering to drive next-gen AI solutions.

Key Responsibilities:
• Break down complex and ambiguous business contexts into organized frameworks using schemas, taxonomies, or mind maps.
• Lead the creation of meta-models and domain ontologies to support structured AI understanding and reasoning.
• Apply principles of Description Logic to represent domain knowledge, support inference, and enhance explainability.
• Collaborate with domain experts and technical teams to convert functional needs into machine-readable structures.
• Contribute to prompt engineering and LLM-based system design, enabling generative and reasoning capabilities.
• Translate business objectives into AI-ready workflows, use cases, and knowledge blueprints.
• Build process maps, concept hierarchies, and user interaction flows using tools like Figma and Miro.
• Bridge business and engineering efforts across the SDLC, ensuring alignment on structured information use.
• Drive user alignment through compelling narratives in PowerPoint or dashboard formats.
• Monitor deployed AI solutions using well-defined KPIs and structured user feedback loops to recommend enhancements.
Required Skills & Expertise:
• Prompt Engineering & AI Fluency (Very Important): Hands-on experience in designing effective prompts and working with LLMs for reasoning or automation tasks.
• Meta-Modelling & Schema Design (Mandatory): Strong experience creating information schemas, ontologies, and taxonomies to support AI and system design. Familiarity with Description Logic or semantic web standards (OWL, RDF, SHACL) is highly desirable.
• Information Structuring (Very Important): Proven ability to organize unstructured information into structured, navigable forms like mind maps or decision trees.
• Business Process Mapping: Comfortable using swimlane diagrams, BPMN, or similar tools to document and optimize enterprise workflows.
• Software Industry Experience (Compulsory): Sound knowledge of Agile methodologies and the software development lifecycle.
• Communication & Storytelling: Exceptional communication skills with experience crafting narratives for stakeholders using PowerPoint and visual tools.
• Tools & Collaboration Platforms: Proficient in Miro, Figma, JIRA, Confluence; familiarity with Excel and data visualization tools is a plus.
• Analytical Thinking: Strong conceptual thinking and problem-solving abilities with a bias toward structured, AI-enabled decision-making.
• Domain Knowledge (Preferred): Background or experience in domains like Rail, Shipping, Logistics, BFSI, or Healthcare is a plus.

Key Stakeholders
• CxOs & Business Leaders – Strategic decision-makers and data consumers
• Process Owners – Operational stakeholders and subject matter experts
• Technology Teams – Engineering, AI, and product delivery teams

Qualifications
• Bachelor's or Master's degree from IIT, IIM, or other Tier-1 institutions
• 3 to 5 years of experience as a Business Analyst, Product Analyst, or Product Manager in the software industry
• Proven success in knowledge modeling, process optimization, or AI-led transformation initiatives
• Immediate joiners preferred

Why Join RocketFrog.ai?
• Shape the future of Agentic AI and knowledge-driven enterprise systems
• Work on real-world, high-impact AI projects across multiple industries
• Be part of a deeply technical and innovation-driven team of AI scientists and engineers
• Contribute to the next frontier of symbolic + generative AI integration
• Thrive in a culture that values clarity, curiosity, and continuous learning
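"Converting functional needs into machine-readable structures," as the responsibilities above put it, often starts with exactly this: a declared schema plus a validator. A minimal sketch in plain Python (the entity and field names are hypothetical; production systems would typically use SHACL shapes or JSON Schema):

```python
# Turn a functional need ("every Claim must reference a patient, carry an
# ICD code, and have a numeric amount") into a machine-readable schema,
# then check records against it. Field names are illustrative only.
SCHEMA = {
    "Claim": {"patient_id": str, "icd_code": str, "amount": float},
}

def validate(entity_type, record):
    """Return a list of schema violations for a record of entity_type."""
    errors = []
    for field, ftype in SCHEMA[entity_type].items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}: expected {ftype.__name__}")
    return errors

print(validate("Claim", {"patient_id": "P1", "icd_code": "E11.9", "amount": 120.0}))  # []
print(validate("Claim", {"patient_id": "P1"}))  # two violations
```

The same pattern scales up: the schema becomes the shared contract between domain experts (who author it) and the AI systems that consume it.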
Posted 2 hours ago
7.0 years
0 Lacs
India
Remote
Role: Neo4j Engineer
Overall IT Experience: 7+ years
Relevant experience: Graph Databases 4+ years, Neo4j 2+ years
Location: Remote

Company Description
Bluetick Consultants is a technology-driven firm that supports hiring remote developers, building technology products, and enabling end-to-end digital transformation. With previous experience in top technology companies such as Amazon, Microsoft, and Craftsvilla, we understand the needs of our clients and provide customized solutions. Our team has expertise in emerging technologies, backend and frontend development, cloud development, and mobile technologies. We prioritize staying up-to-date with the latest technological advances to create a long-term impact and grow together with our clients.

Key Responsibilities
• Graph Database Architecture: Design and implement Neo4j graph database schemas optimized for fund administration data relationships and AI-powered queries
• Knowledge Graph Development: Build comprehensive knowledge graphs connecting entities like funds, investors, companies, transactions, legal documents, and market data
• Graph-AI Integration: Integrate Neo4j with AI/ML pipelines, particularly for enhanced RAG (Retrieval-Augmented Generation) systems and semantic search capabilities
• Complex Relationship Modeling: Model intricate relationships between Limited Partners, General Partners, fund structures, investment flows, and regulatory requirements
• Query Optimization: Develop high-performance Cypher queries for real-time analytics, relationship discovery, and pattern recognition
• Data Pipeline Integration: Build ETL processes to populate and maintain graph databases from various data sources including FundPanel.io, legal documents, and external market data using domain-specific ontologies
• Graph Analytics: Implement graph algorithms for fraud detection, risk assessment, relationship scoring, and investment opportunity identification
• Performance Tuning: Optimize graph database performance for concurrent users and complex analytical queries
• Documentation & Standards: Establish graph modelling standards, query optimization guidelines, and comprehensive technical documentation

Key Use Cases You'll Enable
• Semantic Search Enhancement: Create knowledge graphs that improve AI search accuracy by understanding entity relationships and context
• Investment Network Analysis: Map complex relationships between investors, funds, portfolio companies, and market segments
• Compliance Graph Modelling: Model regulatory relationships and fund terms to support automated auditing and compliance validation
• Customer Relationship Intelligence: Build relationship graphs for customer relations monitoring and expansion opportunity identification
• Predictive Modelling Support: Provide graph-based features for investment prediction and risk assessment models
• Document Relationship Mapping: Connect legal documents, contracts, and agreements through entity and relationship extraction

Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, or related field
• 7+ years of overall IT experience
• 4+ years of experience with graph databases, with 2+ years specifically in Neo4j
• Strong background in data modelling, particularly for complex relationship structures
• Experience with financial services data and regulatory requirements preferred
• Proven experience integrating graph databases with AI/ML systems
• Understanding of knowledge graph concepts and semantic technologies
• Experience with high-volume, production-scale graph database implementations

Technology Skills
• Graph Databases: Neo4j (primary), Cypher query language, APOC procedures, Neo4j Graph Data Science library
• Programming: Python, Java, or Scala for graph data processing and integration
• AI Integration: Experience with graph-enhanced RAG systems, vector embeddings in graph context, GraphRAG implementations
• Data Processing: ETL pipelines, data transformation, real-time data streaming (Kafka, Apache Spark)
• Cloud Platforms: Neo4j Aura, Azure integration, containerized deployments
• APIs: Neo4j drivers, REST APIs, GraphQL integration
• Analytics: Graph algorithms (PageRank, community detection, shortest path, centrality measures)
• Monitoring: Neo4j monitoring tools, performance profiling, query optimization
• Integration: Elasticsearch integration, vector database connections, multi-modal data handling

Specific Technical Requirements
• Knowledge Graph Construction: Entity resolution, relationship extraction, ontology modelling
• Cypher Expertise: Advanced Cypher queries, stored procedures, custom functions
• Scalability: Clustering, sharding, horizontal scaling strategies
• Security: Graph-level security, role-based access control, data encryption
• Version Control: Graph schema versioning, migration strategies
• Backup & Recovery: Graph database backup strategies, disaster recovery planning

Industry Context Understanding
• Fund Administration: Understanding of fund structures, capital calls, distributions, and investor relationships
• Financial Compliance: Knowledge of regulatory requirements and audit trails in financial services
• Investment Workflows: Understanding of due diligence processes, portfolio management, and investor reporting
• Legal Document Structures: Familiarity with LPA documents, subscription agreements, and fund formation documents

Collaboration Requirements
• AI/ML Team: Work closely with GenAI engineers to optimize graph-based AI applications
• Data Architecture Team: Collaborate on overall data architecture and integration strategies
• Backend Developers: Integrate graph databases with application APIs and microservices
• DevOps Team: Ensure proper deployment, monitoring, and maintenance of graph database infrastructure
• Business Stakeholders: Translate business requirements into effective graph models and queries

Performance Expectations
• Query Performance: Ensure sub-second response times for standard relationship queries
• Scalability: Support 100k+ users with concurrent access to graph data
• Accuracy: Maintain data consistency and relationship integrity across complex fund structures
• Availability: Ensure 99.9% uptime for critical graph database services
• Integration Efficiency: Seamless integration with existing FundPanel.io systems and new AI services

This role offers the opportunity to work at the intersection of advanced graph technology and artificial intelligence, creating innovative solutions that will transform how fund administrators understand and leverage their data relationships.
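The posting above leans on graph algorithms such as PageRank over investor/fund networks. A toy, self-contained sketch of the idea in plain Python (entity names are hypothetical; in the role itself this would be a single procedure call in the Neo4j Graph Data Science library over a persisted graph):

```python
# Toy PageRank over a small LP -> fund -> portfolio-company graph.
# Node names are illustrative; dangling nodes redistribute rank evenly.
EDGES = [
    ("LP_A", "Fund1"), ("LP_B", "Fund1"), ("LP_B", "Fund2"),
    ("Fund1", "CompanyX"), ("Fund2", "CompanyX"), ("Fund2", "CompanyY"),
]

def pagerank(edges, damping=0.85, iters=50):
    nodes = {n for e in edges for n in e}
    out = {n: [t for s, t in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)  # dangling: spread evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

ranks = pagerank(EDGES)
print(max(ranks, key=ranks.get))  # CompanyX: it receives flow from both funds
```

Centrality scores like these are the "relationship scoring" inputs the posting mentions for risk assessment and opportunity identification.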
Posted 1 day ago
8.0 years
20 - 40 Lacs
India
On-site
Role: Senior Graph Data Engineer (Neo4j & AI Knowledge Graphs)
Experience: 8+ years
Type: Contract

We're hiring a Graph Data Engineer to design and implement advanced Neo4j-powered knowledge graph systems for our next-gen AI platform. You'll work at the intersection of data engineering, AI/ML, and financial services, helping build the graph infrastructure that powers semantic search, investment intelligence, and automated compliance for venture capital and private equity clients. This role is ideal for engineers who are passionate about graph data modeling, Neo4j performance, and enabling AI-enhanced analytics through structured relationships.

What You'll Do
• Design Knowledge Graphs: Build and maintain Neo4j graph schemas modeling complex fund administration relationships: investors, funds, companies, transactions, legal docs, etc.
• Graph-AI Integration: Work with GenAI teams to power RAG systems, semantic search, and graph-enhanced NLP pipelines.
• ETL & Data Pipelines: Develop scalable ingestion pipelines from sources like FundPanel.io, legal documents, and external market feeds using Python, Spark, or Kafka.
• Optimize Graph Performance: Craft high-performance Cypher queries, leverage APOC procedures, and tune for real-time analytics.
• Graph Algorithms & Analytics: Implement algorithms for fraud detection, relationship scoring, compliance, and investment pattern analysis.
• Secure & Scalable Deployment: Implement clustering, backups, and role-based access on Neo4j Aura or containerized environments.
• Collaborate Deeply: Partner with AI/ML, DevOps, data architects, and business stakeholders to translate use cases into scalable graph solutions.

What You Bring
• 7+ years in software/data engineering; 2+ years in Neo4j and Cypher.
• Strong experience in graph modeling, knowledge graphs, and ontologies.
• Proficiency in Python, Java, or Scala for graph integrations.
• Experience with graph algorithms (PageRank, community detection, etc.).
• Hands-on with ETL pipelines, Kafka/Spark, and real-time data ingestion.
• Cloud-native experience (Neo4j Aura, Azure, Docker/K8s).
• Familiarity with fund structures, LP/GP models, or financial/legal data a plus.
• Strong understanding of AI/ML pipelines, especially graph-RAG and embeddings.

Use Cases You'll Help Build
• AI Semantic Search over fund documents and investment entities.
• Investment Network Analysis for GPs, LPs, and portfolio companies.
• Compliance Graphs modeling fund terms and regulatory checks.
• Document Graphs linking LPAs, contracts, and agreements.
• Predictive Investment Models enhanced by graph relationships.

Skills: java, machine learning, spark, apache spark, neo4j aura, ai, azure, cloud-native technologies, data, ai/ml pipelines, scala, python, cypher, graphs, ai knowledge graphs, graph data modeling, apoc procedures, semantic search, etl pipelines, data engineering, neo4j, etl, cypher query, pipelines, graph schema, kafka, kafka streams, graph algorithms
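Community detection, listed above alongside PageRank, clusters entities that are densely connected to each other. Its simplest deterministic form is connected components; a pure-Python sketch with hypothetical entity names (real deployments would use Louvain or label propagation via the Neo4j Graph Data Science library):

```python
# Connected components via union-find with path halving: the simplest
# notion of "communities" in an investment network. Names are illustrative.
def connected_components(edges):
    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving keeps trees shallow
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components
    groups = {}
    for n in list(parent):
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

EDGES = [("LP_A", "Fund1"), ("Fund1", "CoX"), ("LP_B", "Fund2"), ("Fund2", "CoY")]
print(connected_components(EDGES))  # two clusters of related entities
```

In a compliance or fraud context, an unexpected merge of two previously separate clusters is often the signal worth investigating.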
Posted 1 day ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
• 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably with PySpark and hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
• Team management: must have experience mentoring and managing large teams (20 to 30 people) for complex engineering programs, including hiring and nurturing talent in Palantir Foundry.
• Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
• At least 3 years of hands-on experience building and managing ontologies on Palantir Foundry.
• At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus. Palantir AIP experience is a plus.
• Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
• Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure), using managed services or containerized deployments for data pipelines, is necessary.
• Hands-on experience working and building on Ontology (especially demonstrable experience building semantic relationships).
• Proficiency in SQL, Python, and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
• Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary. Experience in MLOps is a plus.
• Experience developing and managing scalable architecture, and working experience managing large data sets.
• Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
• Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
• A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
• Experience developing GenAI applications is a plus.

Mandatory Skill Sets
• At least 3 years of hands-on experience building and managing ontologies on Palantir Foundry
• At least 3 years of experience with Foundry services

Preferred Skill Sets
Palantir Foundry

Years of Experience Required
4 to 7 years (3+ years relevant)

Education Qualification
Bachelor's degree in computer science, data science, or another engineering discipline. Master's degree is a plus.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Science
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
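The Foundry responsibilities above repeatedly call out data quality checks inside ELT/ETL pipelines. A minimal stand-in for that gate in plain Python (column names and thresholds are hypothetical; in Foundry this would typically be a PySpark transform with expectations attached):

```python
# Sketch of a data-quality gate a pipeline runs before publishing a
# dataset: each named check counts failing rows. Columns are illustrative.
ROWS = [
    {"order_id": "O1", "amount": 120.5, "country": "IN"},
    {"order_id": "O2", "amount": -3.0,  "country": "IN"},   # bad: negative amount
    {"order_id": None, "amount": 40.0,  "country": "US"},   # bad: null key
]

CHECKS = {
    "order_id_not_null": lambda r: r["order_id"] is not None,
    "amount_non_negative": lambda r: r["amount"] >= 0,
}

def run_checks(rows, checks):
    """Count failures per check; a pipeline would block publishing on any failure."""
    return {name: sum(not ok(r) for r in rows) for name, ok in checks.items()}

print(run_checks(ROWS, CHECKS))  # {'order_id_not_null': 1, 'amount_non_negative': 1}
```

Expressing checks as named predicates keeps the quality contract declarative, so the same definitions can drive both pipeline gating and monitoring dashboards.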
Posted 2 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role
We are seeking a highly experienced and strategic AI/ML Architect to lead the design, development, and deployment of scalable artificial intelligence and machine learning solutions. As a core member of our technical leadership team, you will play a pivotal role in building intelligent systems that drive innovation and transform digital healthcare delivery across our AI-driven telemedicine platform.

Key Responsibilities
• Architect AI/ML systems that support key business goals, from real-time diagnosis and predictive analytics to natural language conversations and recommendation engines.
• Design and oversee machine learning pipelines, model training, validation, deployment, and performance monitoring.
• Guide selection of ML frameworks (e.g., TensorFlow, PyTorch) and ensure proper MLOps practices (CI/CD, model versioning, reproducibility, drift detection).
• Collaborate with cross-functional teams (data engineers, product managers, UI/UX, backend developers) to integrate AI into real-time applications and APIs.
• Build and maintain scalable AI infrastructure, including data ingestion, storage, and processing layers in cloud environments (AWS, GCP, or Azure).
• Lead research and experimentation on generative AI, NLP, computer vision, and deep learning techniques relevant to healthcare use cases.
• Define data strategies, governance, and model explainability/ethics frameworks to ensure compliance with regulatory standards like HIPAA.
• Mentor and lead a growing team of ML engineers and data scientists.

Qualifications
Must-Have:
• Bachelor's or Master's degree in Computer Science, AI, Data Science, or related field (PhD preferred).
• 7+ years of experience in AI/ML development, with at least 2 years in a lead or architect role.
• Proven experience designing and deploying production-grade ML systems at scale.
• Strong grasp of ML algorithms, deep learning, NLP, computer vision, and generative AI.
• Expertise in Python, ML libraries (TensorFlow, PyTorch, Scikit-learn), and MLOps tools (MLflow, Kubeflow, SageMaker, etc.).
• Familiarity with data engineering pipelines (Airflow, Spark, Kafka) and cloud platforms.
• Strong communication and collaboration skills.

Preferred:
• Experience with healthcare data standards (FHIR, HL7) and medical ontologies (SNOMED CT, ICD).
• Familiarity with AI ethics, fairness, and interpretability frameworks.
• Startup or early-stage product development experience.
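Drift detection, one of the MLOps practices named above, is often implemented as a Population Stability Index (PSI) check between a feature's training-time distribution and live traffic. A sketch with illustrative bin proportions; the 0.2 alert threshold is a common convention, not a value from this posting:

```python
# Population Stability Index between two pre-binned distributions.
# PSI near 0 means no drift; > 0.2 is a widely used retraining trigger.
import math

def psi(expected, actual):
    """PSI over per-bin proportions; each list should sum to 1."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.50, 0.25]   # feature's bin proportions at training time
stable   = [0.24, 0.51, 0.25]   # live traffic, essentially unchanged
shifted  = [0.05, 0.40, 0.55]   # live traffic after a population shift

print(psi(baseline, stable))         # near 0: no meaningful drift
print(psi(baseline, shifted) > 0.2)  # True: flag the model for retraining
```

Running this per feature on a schedule, and alerting when any PSI crosses the threshold, is a simple but serviceable drift monitor.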
Posted 2 days ago
12.0 - 15.0 years
0 Lacs
Vishakhapatnam, Andhra Pradesh, India
On-site
We are seeking an enthusiastic professional to manage a team of healthcare professionals normalizing a large volume of healthcare data into standard large medical ontologies. The appropriate candidate will have experience managing large teams; defining, measuring, and leading teams toward successful achievement of Key Performance Indicators (KPIs); and working collaboratively with clients. Experience in large-scale healthcare data operations and services will be extremely valuable.

Role
Full-time position as a Project Manager for medical data projects, including working with different medical data types to produce datasets for machine learning purposes.

Responsibilities
• Ensure that all projects are delivered on time, within scope, and within budget
• Coordinate internal resources and third parties/vendors for the flawless execution of projects
• Develop a detailed project plan to track progress
• Report and escalate to management as needed
• Manage the relationship with the client and all stakeholders
• Perform risk management to minimize project liabilities
• Create and maintain comprehensive project documentation

Experience/Education
• Minimum 12 to 15 years of scribing/transcription/coding experience in medical documentation
• Experience in medical transcription proofreading and scribing
• Experience reviewing the summary of the physician-patient encounter and the clinical content of the conversation captured by team members
• Experience in multiple-specialty documentation
• Knowledge of medical terminology, AHDI guidelines, and procedures
• Understanding of patient history and diagnosis, prescription writing, and medical abbreviations
• Clinical education or training is considered a plus (e.g. Pharmacy, Nursing, Medicine; medical transcription or scribing certification)

Skills
• Strong ability to understand medical concepts
• Good listening and comprehension skills for medical audio recordings
• Excellent English reading comprehension and communication skills
• Computer literacy
• Passion for improving lives through healthcare and a great work ethic
• Flexibility to work night shifts

Benefits:
• Strong compensation
• Exposure to working with innovative companies in healthcare & AI
• Growth and leadership opportunities
• Collaborative, international teamwork

About iMerit: iMerit is a well-funded, rapidly expanding global leader in data services. iMerit's dedicated Medical Division works with the world's largest pharmaceutical companies, medical device manufacturers, and hospital networks to supply the data that powers advances in Artificial Intelligence. At iMerit, we have successfully delivered services powering cutting-edge technologies such as digital radiology, digital pathology, clinical decision support, and autonomous robotic surgery.
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary
Sr. Specialist, Product Experience Design (UX Designer / Researcher)

Summary
We are looking for an experienced User Experience/Interaction Designer passionate about the possibilities of data and data-driven experience design, with good knowledge and experience of user interface design, to join our rapidly growing, highly innovative Data and Services product development team at our development center in Pune. The design function works as part of wider cross-functional teams that form organically around data product ideas, collectively contributing to the rapid iteration of prototypes and MVPs (Minimum Viable Products), mainly in the B2B and B2B2C arenas. Each design iteration is exposed to end-users at each cycle of our rapid design-test-redesign process, to ensure optimum user traction. Prototype and MVP outputs include dashboards, widgets, chatbots, web applications, mobile applications, VUI voice interfaces, digital assistants, etc., all in service of solving complex business challenges and with an ever-increasing emphasis on Artificial Intelligence (AI) and Machine Learning (ML). Please make sure your resume includes a link to your portfolio that is accessible and not restricted.
Role
Design Proof-of-Concepts and Minimum Viable Data Products (MVPs) using human-centric design principles and rapid prototyping (Lean UX and Design Thinking)
Participate in "Design Thinking" workshops, collaborating cross-functionally with industry verticals, regional leads and end users to ensure optimal digital data products
Explore the “art of the possible” in the era of Big Data, Artificial Intelligence (AI) and Machine Learning (ML), whilst always maintaining regulatory compliance
Leverage existing, and contribute net-new, design patterns to Mastercard’s design pattern library
Help define the next generation of data and data-driven products, and through doing so, help shape the future of Mastercard and its growth
Create a cohesive and compelling visual language across diverse form factors: web, mobile and Internet of Things
Work on future-state conceptual designs, driving experimentation that improves the quality of product design overall
Work closely with key partners from brand & marketing, alongside the broader user experience team, to drive delightful and highly usable transactional experiences leveraging the broader visual language for Mastercard
Work closely with our technology team to define implementation standards for our products, leveraging modern presentation-layer practices such as adaptive/responsive web and current and forward-thinking technologies
Ensure that designs deliver an appropriate balance of business objectives and user engagement, in close partnership with product managers
Liaise with regional and country teams to ensure that designs reflect the diversity of needs of a global user base
Drive VXD design across a broader ecosystem, where experiences will require tailoring to meet the needs of both Mastercard customers and end consumers
Help define the next generation of data and data-driven products and their visualization, and through doing so, help shape the future of Mastercard and its growth
Interaction Design
Required Experience / Knowledge / Skills (Core)
Overall 5-7 years of career experience
Experience developing user archetypes, personas and user journeys to inform product design decisions
Experience in rapid prototyping (Lean UX and Design Thinking)
Experience articulating elegant and engaging experiences using sketches, storyboards, information architecture blueprints and prototypes
Experience implementing creative, usable and compelling visual mockups and prototypes
Experience working with complex information architectures
Experience designing experiences across multiple media
Experience using prototyping/wireframing tools such as Figma, Sketch, Adobe Experience Design, Adobe Illustrator, etc.
An understanding of complex information architectures for digital applications
Additional Experience / Knowledge / Skills
Experience articulating elegant and engaging visual experiences using sketches, storyboards, and within prototypes
Highly proficient in the creation of usable, compelling and elegant visual mockups and prototypes
Experience of visual design across multiple media and form factors: web, mobile, Internet of Things (IoT)
Extensive and demonstrable experience using leading VXD software packages such as Adobe Creative Cloud / Suite (Photoshop, InDesign, Illustrator, Experience Design, etc.), Serif DrawPlus, Corel DRAW Graphics Suite / PaintShop Pro, ArtRage, Xara or equivalent
An understanding of complex information architectures for visual representation
Any experience in motion graphic design is a strong plus
Any experience in 3D modeling is also a strong plus
Research
Experience consuming UX research
Experience leading “Design Thinking” workshops with customers to identify requirements and ideate on potential product solutions
Ability to identify best-in-class user experience through competitor analysis
Experience of the iterative “design-test-redesign” methodology, collecting real user feedback to incorporate back into the design
Candidate
Prior experience working in a world-beating UX team
Experience working/multi-tasking in an extremely fast-paced, startup-like environment
Experience in client-facing engagements, preferably leading them
Empathetic champion of the user, passionate about the detail of great usability, interaction design and aesthetics to give the best possible UX
Passionate about the possibility of data-driven experiences, so-called “emergent ontologies” (patterns in the data) driving UX and UI
Passionate about the potential of data, Artificial Intelligence (AI) and Machine Learning (ML)
Defensible point of view on Adaptive vs. Responsive vs. Liquid/Fluid UI
Defensible point of view on gamification
Interest in VUIs (Voice User Interfaces)
Demonstrable knowledge of UX and Interaction Design heuristics and best practices: e.g. Lean UX, Mobile First, etc.
Demonstrable knowledge of ergonomic and usability best practices
Bachelor's or Master's Degree in Design for Interactive Media, or equivalent experience
World-beating portfolio covering multiple form factors: desktop, tablet, mobile, wearable, other
Demonstrable commitment to learning: an insatiable drive to discover and evaluate new concepts and technologies to maximize design possibility
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title And Summary
Senior Specialist, Product Experience Design
Senior Specialist - Customer Experience Designer
Job Description Summary
We are looking for an experienced User Experience/Interaction Designer who is passionate about the possibilities of data and data-driven experience design, with strong knowledge and experience of User Interface Design, to join our rapidly growing, highly innovative data and services Product Development team at our development center in Pune. The design function works as part of wider cross-functional teams that form organically around data product ideas, collectively contributing to the rapid iteration of Prototypes and MVPs (Minimum Viable Products), mainly in the B2B and B2B2C arenas. Each design iteration is exposed to end users at every cycle of our rapid design-test-redesign process to ensure optimum user traction. Prototype and MVP outputs include dashboards, widgets, chatbots, web applications, mobile applications, VUI voice interfaces, digital assistants, and more, all in service of solving complex business challenges, with an ever-increasing emphasis on Artificial Intelligence (AI) and Machine Learning (ML).
Role
Design Proof-of-Concepts and Minimum Viable Data Products (MVPs) using human-centric design principles and rapid prototyping (Lean UX and Design Thinking)
Participate in "Design Thinking" workshops, collaborating cross-functionally with industry verticals, regional leads and end users to ensure optimal digital data products
Explore the “art of the possible” in the era of Big Data, Artificial Intelligence (AI) and Machine Learning (ML), whilst always maintaining regulatory compliance
Leverage existing, and contribute net-new, design patterns to Mastercard’s design pattern library
Help define the next generation of data and data-driven products, and through doing so, help shape the future of Mastercard and its growth
Create a cohesive and compelling visual language across diverse form factors: web, mobile and Internet of Things
Work on future-state conceptual designs, driving experimentation that improves the quality of product design overall
Work closely with key partners from brand & marketing, alongside the broader user experience team, to drive delightful and highly usable transactional experiences leveraging the broader visual language for Mastercard
Work closely with our technology team to define implementation standards for our products, leveraging modern presentation-layer practices such as adaptive/responsive web and current and forward-thinking technologies
Ensure that designs deliver an appropriate balance of business objectives and user engagement, in close partnership with product managers
Liaise with regional and country teams to ensure that designs reflect the diversity of needs of a global user base
Drive VXD design across a broader ecosystem, where experiences will require tailoring to meet the needs of both Mastercard customers and end consumers
Help define the next generation of data and data-driven products and their visualization, and through doing so, help shape the future of Mastercard and its growth
Interaction Design
Required Experience / Knowledge / Skills (Core)
Experience developing user archetypes, personas and user journeys to inform product design decisions
Experience in rapid prototyping (Lean UX and Design Thinking)
Experience articulating elegant and engaging experiences using sketches, storyboards, information architecture blueprints and prototypes
Experience implementing creative, usable and compelling visual mockups and prototypes
Experience working with complex information architectures
Experience designing experiences across multiple media
Experience using prototyping/wireframing tools such as Figma, Sketch, Adobe Experience Design, Adobe Illustrator, etc.
An understanding of complex information architectures for digital applications
Visual Design
Additional Experience / Knowledge / Skills
Experience articulating elegant and engaging visual experiences using sketches, storyboards, and within prototypes
Highly proficient in the creation of usable, compelling and elegant visual mockups and prototypes
Experience of visual design across multiple media and form factors: web, mobile, Internet of Things (IoT)
Extensive and demonstrable experience using leading VXD software packages such as Adobe Creative Cloud / Suite (Photoshop, InDesign, Illustrator, Experience Design, etc.), Serif DrawPlus, Corel DRAW Graphics Suite / PaintShop Pro, ArtRage, Xara or equivalent
An understanding of complex information architectures for visual representation
Any experience in motion graphic design is a strong plus
Any experience in 3D modeling is also a strong plus
Research
Experience consuming UX research
Experience leading “Design Thinking” workshops with customers to identify requirements and ideate on potential product solutions
Ability to identify best-in-class user experience through competitor analysis
Experience of the iterative “design-test-redesign” methodology, collecting real user feedback to incorporate back into the design
Candidate
Prior experience working in a world-beating UX team
Experience working/multi-tasking in an extremely fast-paced, startup-like environment
Experience in client-facing engagements, preferably leading them
Empathetic champion of the user, passionate about the detail of great usability, interaction design and aesthetics to give the best possible UX
Passionate about the possibility of data-driven experiences, so-called “emergent ontologies” (patterns in the data) driving UX and UI
Passionate about the potential of data, Artificial Intelligence (AI) and Machine Learning (ML)
Defensible point of view on Adaptive vs. Responsive vs. Liquid/Fluid UI
Defensible point of view on gamification
Interest in VUIs (Voice User Interfaces)
Demonstrable knowledge of UX and Interaction Design heuristics and best practices: e.g. Lean UX, Mobile First, etc.
Demonstrable knowledge of ergonomic and usability best practices
Bachelor's or Master's Degree in Design for Interactive Media, or equivalent experience
World-beating portfolio covering multiple form factors: desktop, tablet, mobile, wearable, other
Demonstrable commitment to learning: an insatiable drive to discover and evaluate new concepts and technologies to maximize design possibility
#AI3
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 3 days ago
0.0 - 3.0 years
12 - 24 Lacs
Chennai, Tamil Nadu
On-site
We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you’ll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.
Key Responsibilities:
Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction.
Design and build Knowledge Graphs by structuring information from unstructured or semi-structured text.
Apply Prompt Engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios.
Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks.
Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from the graph database.
Collaborate with teams to translate business problems into structured use cases for model development.
Understand custom ontologies and entity schemas for the corresponding domain.
Work with graph databases such as Neo4j and query them using Cypher or SPARQL.
Evaluate and track performance using both standard metrics and graph-based KPIs.
Required Skills & Qualifications:
Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and OpenAI APIs.
Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing.
Practical experience working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning.
Hands-on experience with graph database design and knowledge graph generation.
Proficient in Python API development (Flask/FastAPI) for serving models and utilities.
Strong background in data preprocessing, text normalization, and annotation frameworks.
Understanding of LLM orchestration with tools like LangChain or workflow automation.
Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).
Nice to Have:
Experience using LLMs for information extraction, summarization, or question answering over knowledge bases.
Exposure to Graph Embeddings, GNNs, or semantic web technologies (RDF, OWL).
Experience with cloud-based model deployment (AWS/GCP/Azure).
Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,400,000.00 per year
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Natural Language Processing (NLP): 3 years (Preferred)
Language: English & Tamil (Preferred)
Location: Chennai, Tamil Nadu (Preferred)
Work Location: In person
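For readers unfamiliar with the relation-extraction-to-knowledge-graph pipeline this role describes, here is a deliberately simplified, stdlib-only Python sketch. In practice the role calls for LLM- or spaCy-based extraction feeding a real graph database such as Neo4j; below, a toy regex extractor and an in-memory triple store (with a lookup loosely analogous to a Cypher MATCH) stand in for both. All names here (TripleStore, extract_triples, the patterns) are illustrative, not part of any actual stack.

```python
import re
from collections import defaultdict

class TripleStore:
    """Minimal in-memory triple store standing in for a graph database."""

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))
        self.by_subject[subj].add((rel, obj))

    def query(self, subj):
        # Loosely analogous to Cypher: MATCH (s {name: subj})-[r]->(o) RETURN r, o
        return sorted(self.by_subject[subj])

# Toy pattern-based relation extractor ("X is a Y", "X treats Y").
# Real pipelines would use an NER model plus a learned relation classifier.
PATTERNS = [
    (re.compile(r"(\w[\w ]*?) is a (\w[\w ]*)"), "IS_A"),
    (re.compile(r"(\w[\w ]*?) treats (\w[\w ]*)"), "TREATS"),
]

def extract_triples(text):
    triples = []
    for sentence in text.split("."):
        for pattern, rel in PATTERNS:
            m = pattern.search(sentence.strip())
            if m:
                triples.append((m.group(1).strip(), rel, m.group(2).strip()))
    return triples

store = TripleStore()
for t in extract_triples("Aspirin is a drug. Aspirin treats headache."):
    store.add(*t)

print(store.query("Aspirin"))  # [('IS_A', 'drug'), ('TREATS', 'headache')]
```

The same shape scales up: swap the regex extractor for an LLM or spaCy pipeline, and swap TripleStore for Neo4j writes plus Cypher reads behind a FastAPI endpoint.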
Posted 3 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
[Role Name: IS Architecture]
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology & Innovation
Role GCF: 06A
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of Data Modelers.
Roles & Responsibilities:
Provide oversight to data modeling team members
Develop and maintain conceptual, logical, and physical data models to support business needs
Establish and enforce data standards, governance policies, and best practices
Design and manage metadata structures to enhance information retrieval and usability
Maintain comprehensive documentation of the architecture, including principles, standards, and models
Evaluate and recommend technologies and tools that best fit the solution requirements
Evaluate emerging technologies and assess their potential impact
Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency
Basic Qualifications and Experience: [GCF Level 6A]
Doctorate degree and 8 years of experience in Computer Science, IT or related field OR
Master’s degree with 12-15 years of experience in Computer Science, IT or related field OR
Bachelor’s degree with 14-17 years of experience in Computer Science, IT or related field
Functional Skills:
Must-Have Skills:
Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including performance tuning of big data processing.
Good-to-Have Skills:
Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic
Professional Certifications: Certifications in Databricks are desired
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated awareness of presentation skills
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 4 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior AI Research Scientist
Location: Sector 63, Gurgaon – On-site
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years in applied AI/ML research, with multiple publications or patents and demonstrable product impact
Apply: careers@darwix.ai
Subject Line: Application – Senior AI Research Scientist – [Your Name]
About Darwix AI
Darwix AI is a GenAI SaaS platform that powers real-time conversation intelligence, multilingual coaching, and behavioural analytics for large revenue and service teams. Our products, Transform+, Sherpa.ai, and Store Intel, integrate speech-to-text, LLM-driven analysis, real-time nudging, and computer vision to improve performance across BFSI, real estate, retail, and healthcare enterprises such as IndiaMart, Wakefit, Bank Dofar, GIVA, and Sobha.
Role Overview
The Senior AI Research Scientist will own the end-to-end research agenda that advances Darwix AI’s core capabilities in speech, natural-language understanding, and generative AI. You will design novel algorithms, convert them into deployable prototypes, and collaborate with engineering to ship production-grade features that directly influence enterprise revenue outcomes.
Key Responsibilities
Research Leadership
Formulate and drive a 12- to 24-month research roadmap covering multilingual speech recognition, conversation summarisation, LLM prompt optimisation, retrieval-augmented generation (RAG), and behavioural scoring.
Publish internal white papers and, where strategic, peer-reviewed papers or patents to establish technological leadership.
Model Development & Prototyping
Design and train advanced models (e.g., Whisper fine-tunes, Conformer-RNN hybrids, transformer-based diarisation, LLM fine-tuning with LoRA/QLoRA).
Build rapid prototypes in PyTorch or TensorFlow; benchmark against latency, accuracy, and compute-cost targets relevant to real-time use cases.
Production Transfer
Work closely with backend and MLOps teams to convert research code into containerised, scalable inference microservices.
Define evaluation harnesses (WER, BLEU, ROUGE, accuracy, latency) and automate regression tests before every release.
Data Strategy
Lead data-curation efforts: multilingual audio corpora, domain-specific fine-tuning datasets, and synthetic data pipelines for low-resource languages.
Establish annotation guidelines, active-learning loops, and data quality metrics.
Cross-Functional Collaboration
Act as the principal technical advisor in customer POCs involving custom language models, domain-specific ontologies, or privacy-sensitive deployments.
Mentor junior researchers and collaborate with product managers on feasibility assessments and success metrics for AI-driven features.
Required Qualifications
6–10 years of hands-on research in ASR, NLP, or multimodal AI, including at least three years in a senior or lead capacity.
Strong publication record (top conferences such as ACL, INTERSPEECH, NeurIPS, ICLR, EMNLP) or patents showing applied innovation.
Expert-level Python and deep-learning fluency (PyTorch or TensorFlow); comfort with Hugging Face, OpenAI APIs, and distributed training.
Proven experience delivering research outputs into production systems with measurable business impact.
Solid grasp of advanced topics: sequence-to-sequence modelling, attention mechanisms, LLM alignment, speaker diarisation, vector search, on-device optimisation.
Preferred Qualifications
Experience with Indic or Arabic speech/NLP, code-switching, or low-resource language modelling.
Familiarity with GPU orchestration, Triton inference servers, TorchServe, or ONNX runtime optimisation.
Prior work on enterprise call-centre datasets, sales-enablement analytics, or real-time speech pipelines.
Doctorate (PhD) in Computer Science, Electrical Engineering, or a closely related field from a Tier 1 institution.
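Of the evaluation harness metrics the posting names, word error rate (WER) is the simplest to make concrete: it is the Levenshtein edit distance computed over word tokens, divided by the number of reference words. A minimal sketch follows; production harnesses typically rely on established libraries rather than hand-rolled code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion ("the") against six reference words: WER ≈ 0.167
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

An automated regression test of the kind described would assert that WER on a held-out set does not exceed the previous release's value.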
Success Metrics
Reduction of transcription error rate and/or inference latency by agreed percentage targets within 12 months.
Successful deployment of at least two novel AI modules into production with adoption across Tier-1 client accounts.
Internal citation and reuse of developed components in other product lines.
Peer-recognised technical leadership through mentoring, documentation, and knowledge sharing.
Application Process
Send your résumé (and publication list, if separate) to careers@darwix.ai with the subject line indicated above. Optionally, include a one-page summary of a research project you transitioned from lab to production, detailing the problem, approach, and measured impact.
Joining Darwix AI as a Senior AI Research Scientist means shaping the next generation of real-time, multilingual conversational intelligence for enterprise revenue teams worldwide. If you are passionate about applied research that moves the business needle, we look forward to hearing from you.
Posted 5 days ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas, Oncology, Inflammation, General Medicine, and Rare Disease, we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
What You Will Do
Let’s do this. Let’s change the world. In this vital role you will manage and oversee the development of robust data architectures, platforms, frameworks, and data product solutions, while mentoring and guiding a small team of data engineers and architects. You will be responsible for leading the development, implementation, and management of enterprise-level data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering, and build enterprise-scale data frameworks and expert data engineers. This role will collaborate closely with counterparts in the US and EU. You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient.
Roles & Responsibilities:
Architect and implement scalable, high-performance, enterprise-scale modern data platforms and applications covering data analysis, data ingestion, storage, data transformation (data pipelines), and analytics
Evaluate new trends and features in the data platforms area and build rapid prototypes
Build data solution architectures and frameworks to accelerate data engineering processes
Build frameworks to improve reusability and reduce the development time and cost of data management and governance
Integrate AI into data engineering practices to bring efficiency through automation
Build best practices in the Data Platforms capability and ensure their adoption across the product teams
Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals
Lead and motivate a high-performing data platforms team to deliver exceptional results
Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices
Collaborate with counterparts in the US and EU, and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet the requirements
Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives
Drive adoption of the data and analytics platforms and solutions by partnering with business stakeholders and functional IT teams in rolling out change management, trainings, communications, etc.
Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship.
Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities
Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives, and demonstrate leadership by contributing to strategic discussions
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.
Basic Qualifications:
12 to 17 years of experience; Computer Science and Engineering preferred, other engineering fields will be considered
10+ years of experience in building data platforms and data engineering, working in COE development or product building
5+ years of hands-on experience working with Big Data platforms and solutions using AWS and Databricks
5+ years of experience in leading enterprise-scale data engineering solution development
Experience building enterprise-scale data lake and data fabric solutions on cloud, leveraging modern approaches like Data Mesh
Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions
Strong understanding of cloud architecture principles and cost optimization strategies
Hands-on experience using Databricks, PySpark, Python, SQL
Proven ability to lead and develop high-performing data engineering teams
Strong problem-solving, analytical, and critical thinking skills to address complex data challenges
Preferred Qualifications:
Experience in integrating AI with data platforms and engineering, and building AI-ready data platforms
Prior experience in data modeling, especially star-schema modeling concepts
Familiarity with ontologies, information modeling, and graph databases
Experience working with agile development methodologies such as Scaled Agile
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Education and Professional Certifications
SAFe for Teams certification (preferred)
Databricks certifications
AWS cloud certification
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
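The star-schema modeling concept listed under the preferred qualifications above can be illustrated with a minimal sketch: one fact table surrounded by dimension tables, queried with joins. This is only a toy example; the table names, columns, and values are invented, and SQLite (via Python's standard library) stands in for a platform like Databricks.

```python
import sqlite3

# Hypothetical star schema: a sales fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id INTEGER REFERENCES dim_date(date_id),
    units INTEGER, revenue REAL
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO dim_date VALUES (10, '2024-01-01', '2024-01'), (11, '2024-01-02', '2024-01');
INSERT INTO fact_sales VALUES (1, 10, 5, 50.0), (2, 10, 3, 90.0), (1, 11, 2, 20.0);
""")
# Typical star-schema query: aggregate the facts, sliced by dimension attributes.
rows = cur.execute("""
    SELECT p.category, d.month, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d ON d.date_id = f.date_id
    GROUP BY p.category, d.month
""").fetchall()
print(rows)  # [('Hardware', '2024-01', 160.0)]
```

The design choice that makes this a "star" is that each fact row carries only foreign keys and measures, while all descriptive attributes live in the dimensions, keeping analytical joins shallow.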
Posted 5 days ago
0 years
0 Lacs
Delhi
On-site
Job requisition ID :: 84065 Date: Jul 25, 2025 Location: Delhi CEC Designation: Consultant Entity: Deloitte South Asia LLP Amidst landing rockets, self-driving cars, talking Alexas, and blazing-fast communication, the past two decades have ushered in massive changes into our lives. Enough watching from the sidelines already! If you have your sleeves rolled up and are raring to be a change-maker yourself, we’d love to hear from you. We seek exceptionally motivated, obsessively passionate and radically creative self-starter minds. Minds that’ll have rare opportunities to peek into pivotal challenges faced by the leadership of key brands across sectors and global locations. Your Role Face clients and gather high-resolution requirements with ease. Repeatedly educate internal and external stakeholders on the design process - own selling design as much as creating it. Work towards deadlines, within a dynamic environment and across time zones. Accurately scope and plan deliverables and timelines. Stay open to iterative critique, feedback and co-creative workflows with internal and external stakeholders. Offer a clear and dispassionate defense of the “why” of design decisions. Manage evolving client requirements with high levels of resilience and self-motivation. Excellent communication skills - both written and verbal - to articulate complex and technical issues. Earnest and diligent UI testing leading to rigorous filing and follow-up of bugs/changes. Understand and speak the language of VD, functional and dev teams for smooth handovers. Collaborate across teams within an agile delivery framework. Our Requirement An equal passion towards technology and aesthetics. Very strong hands-on understanding of the design process and deliverables at every stage - system maps, mind maps, personas, journey maps, wireframes, style guides, VD assets, spec docs etc. Keenly aware of the latest UX/UI design patterns, frameworks, libraries and guidelines across web, iOS, Android, Windows etc. 
Very strong information design and information architecture skills - taxonomies, ontologies, facets etc. Very strong understanding of the fundamentals of form - harmony, balance, rhythm etc. Very strong typographic sense and technicalities - know tracking from kerning. Very strong color sense and theory - know complementary from analogous colors. A gifted sense of layout and composition - positive/negative spaces plus eye movement. Hands-on knowledge and experience of publication design and printing techniques. Extremely well versed with the latest and historic visual treatments and techniques - impressionist, cubist, modernist, pop, fractal, gradients, long shadows, retro, gloss, textures, tessellations etc. An extremely keen observer and listener; rapidly absorbs the tiniest nuances of a client’s business challenge; does not shy away from complexity. Ability to profile end users with utmost clarity and detail. Strong lateral thinker; imagines novel, intuitive solutions for hard UX problems. Aware of the nuances of cross-device and cross-platform solutions and delivery - grids, pixel densities etc. Strong grasp of usability and other design gold standards. Keenly aware of the nuances of primary/secondary user research, user validation and user testing. Ability to comb through big data, analytics and behavioral insights to generate new experiences and opportunities - keen interest in big-data visualizations. Experience of designing high-complexity interfaces, e.g. retail, finance, big data, banking etc. Extremely proficient in the latest design and rapid prototyping tools such as Sketch, Axure, Adobe suite, InVision, Zeplin, Principle etc. How you’ll grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there’s always room to learn. 
We offer opportunities to help build world-class skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work - always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world
Posted 6 days ago
5.0 years
0 Lacs
India
On-site
We are seeking a Fractional AI Data Scientist with deep healthcare analytics experience to support the design of agentic AI workflows, build LLM-powered tools, and structure data pipelines from EHRs, payer systems, and clinical sources. Your work will power intelligent automations for Eligibility Verification, Pre-Authorization, Risk Stratification, and more. You’ll work closely with solution architects, automation engineers, and clinical SMEs to ensure healthcare data is structured, insightful, and responsibly applied in AI contexts. 📌 *Key Responsibilities* Build and fine-tune AI/ML/NLP models tailored to healthcare datasets (structured and unstructured). Design intelligent prompts and evaluation pipelines using LLMs (OpenAI, Azure OpenAI). Work with healthcare data from Epic, Cerner, Availity, and claims sources to build actionable insights. Partner with Azure engineers or Workato specialists to build data-driven agentic workflows. Cleanse and transform healthcare data (FHIR, HL7, CSV, SQL) for modeling and automation triggers. Ensure all solutions comply with HIPAA and ethical AI best practices. Visualize outcomes for business and clinical teams, and document models for reuse. 🧠 *Required Skills & Experience* 5+ years in data science, with at least 2 in healthcare-specific roles. Experience with clinical data (EHR, EMR, payer claims) and healthcare ontologies (ICD-10, CPT, FHIR). Hands-on with LLM tools (OpenAI, LangChain, RAG frameworks) for classification, summarization, or chatbot use cases. Strong proficiency in Python, SQL, Pandas, and ML/NLP frameworks. Familiarity with PHI/PII handling and compliance frameworks like HIPAA. 
⭐ *Preferred Qualifications* Azure AI stack (OpenAI, Data Factory, Synapse) Experience in conversational AI, intake automation, or clinical note summarization Worked in or with a digital health, healthtech, or AI startup environment Understanding of automation platforms (Workato, Power Automate) 🛠️ *Tech Stack* Languages: Python, SQL, PySpark AI/ML: Scikit-learn, OpenAI, Hugging Face, LangChain, Transformers Data: Azure Data Factory, Snowflake, BigQuery, Postgres Integration: FHIR APIs, REST APIs, Postman Visualization: Power BI, Streamlit, Tableau Compliance: HIPAA, De-ID, RBAC
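As a rough illustration of the "cleanse and transform healthcare data (FHIR, HL7, CSV, SQL)" responsibility above, the sketch below flattens a FHIR R4 Patient resource into a single row suitable for driving an automation trigger. The field names follow the public FHIR Patient specification, but the resource values, the function name, and the output columns are invented for this example.

```python
import json

# Hypothetical FHIR Patient resource; field names follow the FHIR R4 Patient
# spec, the values are made up for illustration.
raw = json.dumps({
    "resourceType": "Patient",
    "id": "pat-001",
    "birthDate": "1980-04-12",
    "gender": "female",
    "identifier": [{"system": "urn:member-id", "value": "M12345"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
})

def flatten_patient(resource_json: str) -> dict:
    """Flatten a FHIR Patient resource into one row for modeling/automation."""
    r = json.loads(resource_json)
    name = (r.get("name") or [{}])[0]        # FHIR allows multiple names
    ident = (r.get("identifier") or [{}])[0]  # ...and multiple identifiers
    return {
        "patient_id": r.get("id"),
        "member_id": ident.get("value"),
        "full_name": " ".join(name.get("given", []) + [name.get("family", "")]).strip(),
        "birth_date": r.get("birthDate"),
        "gender": r.get("gender"),
    }

row = flatten_patient(raw)
print(row["full_name"])  # Jane Doe
```

A real pipeline would of course validate against the full resource schema and handle missing or repeated elements per the spec; the point here is only the shape of the structured-to-row transformation.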
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
SAI Group is a private investment firm dedicated to incubating and scaling AI-powered enterprise software companies. Our portfolio spans rapidly growing businesses that serve over 2,000 global customers, generating nearly $600 million in annual revenue and employing over 4,000 individuals worldwide. As we continue to scale, we are focused on building world-class teams to drive our portfolio companies' success. As a Research Engineer at SAI Group, you will play a key role in exploring and establishing best practices for leveraging and extending novel Generative AI based technologies to develop groundbreaking solutions for enterprise use cases. This position offers a unique opportunity to delve into the rapidly evolving world of foundation models, AI agents, reasoning models, and more to contribute to the design and development of an innovative enterprise AI platform, delivering transformative, AI-enabled work experiences for enterprise users. Your responsibilities will include: - Experimentation and Research: Systematically exploring various technologies and strategies to enhance understanding and utilization of foundation models and related Generative AI frameworks. - Development and Analysis: Designing experiments, building tools, running experiments, and analyzing results to shape the architecture of the enterprise AGI platform. - Prototyping and Innovation: Developing and refining prototypes for the platform based on the latest advancements and insights gained from research activities. Requirements for this role include: - Proven hands-on experience in prototyping advanced deep technologies in AI/ML. - Hands-on experience with AI foundation models to craft real-world applications. - Comprehensive knowledge of the emerging AI ecosystem, including foundation model ecosystem, agent frameworks, and tools. - Experience with AI Agent frameworks such as LangChain, AutoGen, CrewAI, or similar technologies. 
- Knowledge and/or experience with technologies like reinforcement learning, ontologies, knowledge graphs, etc. - Ph.D/MS in AI/CS or related fields with a strong focus on AI/ML technologies. - At least 5 years of work experience in an applied AI science or AI engineering role within a science-driven product development context. Joining SAI Group offers you the chance to be part of a pioneering team that is pushing the envelope of AI capabilities to create an autonomous intelligence-driven future. We value bold innovation, continuous learning, and offer a competitive salary, equity options, and an attractive benefits package including health, dental, vision insurance, and flexible working arrangements. Your work will directly contribute to pioneering solutions with the potential to transform industries and redefine how we interact with technology.
Posted 6 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: · 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale. · Team management: Must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs. Candidate should have experience in hiring and nurturing talent in Palantir Foundry. · Training: Candidate should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually. · At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop. Exposure to Map and Vertex is a plus. Palantir AIP experience is a plus. · Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry. 
· Hands-on experience of managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary. · Hands-on experience in working and building on Ontology (especially demonstrable experience in building semantic relationships). · Proficiency in SQL, Python and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well. · Hands-on experience with DevOps on hyperscaler platforms and Palantir Foundry is necessary. · Experience in MLOps is a plus. · Experience in developing and managing scalable architecture, and working experience in managing large data sets. · Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus. · Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX etc.) is a plus. · A Palantir Foundry Certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview. · Experience in developing GenAI applications is a plus. Mandatory skill sets: · At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services. Preferred skill sets: Palantir Foundry. Years of experience required: 4 to 7 years (3+ years relevant). Education qualification: Bachelor's degree in computer science, data science or any other engineering discipline. Master’s degree is a plus. 
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 6 days ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About RocketFrog.ai: RocketFrog.ai is an AI Studio for Business, engineering competitive advantage through cutting-edge AI solutions in Healthcare, Pharma, BFSI, Hi-Tech, and Consumer Services. From Agentic AI and deep learning models to full-stack AI-first product development, we help enterprises translate innovation into measurable business impact. 🚀 Ready to take a Rocket Leap with Science? Role Overview: We are looking for a visionary and execution-oriented AI Product Manager with 7–12 years of experience in the software/technology industry. This role will lead the conceptualization, development, and rollout of AI-first products that transform business workflows through Agentic AI, intelligent automation, and enterprise-scale orchestration. As an AI Product Manager, you will define product strategy, shape solution blueprints, and align cross-functional teams around product execution. You will translate ambiguous problem statements into structured AI solutions, bridging business objectives with technical implementation. Key Responsibilities: Own the AI Product Lifecycle: Lead end-to-end product development from ideation to launch, incorporating AI/ML capabilities, business logic, and user feedback. Define Product Vision & Roadmap: Translate customer and market needs into clear product goals, success metrics, and agile execution plans. Design AI-First Workflows: Collaborate with AI engineers and architects to design solutions using intelligent agents, LLM orchestration, and adaptive automation. Drive Stakeholder Alignment: Work closely with CxOs, domain SMEs, and engineering teams to define product requirements and align on priorities. Deliver High-Impact Use Cases: Identify and deliver AI use cases that optimize enterprise operations in BFSI, Healthcare, Pharma, or Consumer Services. Lead Backlog & Feature Planning: Break down large AI initiatives into actionable user stories and manage sprints with engineering and design teams. 
Champion AI/ML Integration: Identify opportunities to embed LLMs, agentic workflows, and data-driven intelligence into products. Structure Unstructured Information: Bring clarity to complexity using mind maps, schema models, and ontology-driven structures. Manage UX-UXP Alignment: Oversee wireframes, user journeys, and workflow designs via tools like Figma and Miro in collaboration with the design team. Measure Outcomes: Define product KPIs and drive post-launch iteration cycles based on usage, performance, and business feedback. Required Skills & Expertise: Domain & Experience: Software Product Leadership: 7–12 years in the software/tech industry with at least 3 years in AI/ML-based product management roles. Strategic Thinking & Execution: Ability to drive both big-picture thinking and detailed execution with cross-functional teams. AI & Data Product Fluency: Agentic AI Concepts: Strong conceptual understanding of Agentic AI and how intelligent agents collaborate to automate business workflows. Agent Orchestration Awareness: Familiarity with how platforms like LangGraph or Crew.ai orchestrate roles such as worker, reviewer, or approver to enable modular and auditable AI behavior. ML Fundamentals: Working understanding of core AI/ML concepts including classification, clustering, and other supervised/unsupervised learning approaches. Cloud AI Services (Desirable): Basic understanding of cloud computing and cloud-based AI platforms such as AWS, Azure, or Google Cloud for deploying intelligent workflows. Meta-Modelling & Knowledge Structures: Proficiency in designing schemas, taxonomies, or ontologies; familiarity with RDF, OWL, SHACL, or Description Logic is a strong plus. Information Design & Storytelling: Information Structuring: Proven ability to turn raw inputs into actionable workflows and structured schemas using mind maps or process maps. 
Narrative & Visualization: Strong storytelling through PowerPoint, Notion, dashboards; ability to articulate strategy and progress to executives. Product & Process Tools Tools: Miro, Figma, JIRA, Asana, Confluence, Excel/Google Sheets, PowerPoint, Notion, and basic BI tools (Tableau, Power BI). Process Modeling: Swimlane diagrams, BPMN, ERDs, system design documentation. Key Stakeholders CxOs & Innovation Leaders – Strategic alignment and value realization Engineering & AI Teams – Technical execution and AI enablement Business & Operations Teams – Domain knowledge and feedback integration Customers – Use case validation and value co-creation Qualifications: Bachelor’s or Master’s degree from Tier-1 institutions (IIT, IIM, ISB, IIIT, BITS preferred). 7 to 12 years of professional experience, including product leadership in AI-first or data-driven product lines. Proven success in delivering AI-based transformation, automation, or enterprise software products. Why Join RocketFrog.ai? Shape the future of Agentic AI and intelligent enterprise systems. Own and scale high-impact AI product lines across industry verticals. Collaborate with world-class researchers, AI engineers, and product strategists. Thrive in a flat, fast-paced, innovation-first environment. Drive real business impact with measurable transformation at the intersection of AI and industry.
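To make the ontology and taxonomy skills mentioned above concrete, here is a toy sketch of the RDF-style triple model the posting alludes to: classes linked by subClassOf edges, with a transitive query over them. The class names are invented for illustration, and a real system would use an RDF library or triple store rather than plain Python sets.

```python
# Toy subclass taxonomy stored as (subject, predicate, object) triples,
# mimicking the RDF data model without any library dependency.
# The class names (a small BFSI-flavored hierarchy) are hypothetical.
TRIPLES = {
    ("SavingsAccount", "subClassOf", "Account"),
    ("CheckingAccount", "subClassOf", "Account"),
    ("Account", "subClassOf", "FinancialProduct"),
    ("Loan", "subClassOf", "FinancialProduct"),
}

def ancestors(cls: str) -> set:
    """All superclasses reachable via subClassOf (transitive closure)."""
    out, frontier = set(), {cls}
    while frontier:
        nxt = {o for (s, p, o) in TRIPLES if p == "subClassOf" and s in frontier}
        frontier = nxt - out  # stop once no new superclasses appear
        out |= nxt
    return out

print(ancestors("SavingsAccount"))  # {'Account', 'FinancialProduct'}
```

The transitive closure is exactly what RDFS entailment gives you for rdfs:subClassOf; modeling it explicitly like this is often how product teams prototype a taxonomy before committing to OWL tooling.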
Posted 1 week ago
5.0 years
4 - 8 Lacs
Gurgaon
On-site
Job Description At Ramboll Tech, we believe innovation thrives in diverse, supportive environments where everyone can contribute their best ideas. As Senior/Lead Machine Learning Engineer, you will step up and take responsibility to create cutting-edge AI solutions that empower our business while mentoring others and fostering a culture of collaboration and growth. As the sparring partner for your product owners and your Chapter lead, your job is to shape the technical roadmap and contribute to the implementation of best practices both in the product team (“Pod”) you work in and the global Chapter of ML Engineers. You will work with the global Chapter leads, subject matter experts, and other ML Engineers to deliver impactful AI solutions. What you will do Technological Leadership: Define architectural patterns for scalable LLM pipelines, ensuring robust versioning, monitoring, and adherence to best practices. Drive the integration of external knowledge bases and retrieval systems to augment LLM capabilities. Research and Development: Evaluate effective RAG architectures and technologies for organizing complex domain-specific data (e.g. vector databases, knowledge graphs) and for effective knowledge extraction. Explore and benchmark state-of-the-art LLMs, tuning, adaptation, and training for performance and cost efficiency. Incorporate recent trends like instruction tuning, RLHF, or LoRA fine-tuning for domain customization. Embed domain-specific ontologies, taxonomies, and style guides into NLP workflows to adapt models to unique business contexts. Evaluation and Optimization: Analyze models for quality, latency, sustainability metrics, and cost, identifying and implementing improvements for better outcomes. Define and own the MLOps for your Pod. Experimentation and Continuous Improvement: Develop experiments for model evaluation and improvement, keeping the solutions aligned with evolving industry standards. 
Best Practices: Establish scalable coding standards and best practices for maintainable and production-ready systems. Team Support: Mentor ML engineers to foster their personal growth. Qualifications How you will succeed in your role We’re looking for someone who is excited to make an impact and grow with us. While not everyone will have all the qualifications listed, you might be a great fit if you bring some of the following. We’re working with every team member individually to grow according to their needs and abilities. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Experience: Minimum 5 years of experience implementing machine learning projects. At least 2 years in a senior or lead role. Demonstrated expertise integrating modern LLMs into production systems. Leadership skills: Proven leadership in driving technical projects to successful completion in agile environments. Strong communication skills to align technical solutions with business goals. Ability to mentor and foster innovation within the team. Technology Skills: LLM and RAG Expertise: Strong expertise in building Retrieval-Augmented Generation (RAG) architectures and integrating with vector and graph databases. Transformer and LLM Architectures: In-depth experience with modern Transformer-based LLMs (e.g., GPT-4, Claude, Gemini, Llama, Falcon, Mistral). Model Performance and Optimization: Demonstrated ability to fine-tune and optimize LLMs for quality, latency, sustainability and cost-effective performance. Programming and NLP Tooling: Advanced Python proficiency and expertise with frameworks like PyTorch, TensorFlow, Hugging Face, or LangChain. MLOps and Deployment: Experience with containerization tools (e.g. Docker, Kubernetes) and workflow management tools (e.g. 
Azure ML Studio, MLFlow). Cloud and AI Infrastructure: Hands-on experience with (preferably Azure) cloud environments for scalable AI deployment, monitoring, and optimization. Document Intelligence: Document processing and knowledge extraction tools. Databases: Experience with relational (SQL) and NoSQL databases. Data Platforms: Familiarity with platforms like Snowflake or Databricks. Additional Information What defines us: Curiosity, Optimism, Ambition, Empathy. Our team at Ramboll Tech is currently on a steep growth trajectory while maintaining a strong team culture. We are curious about other people and their motivations; about new business models and technologies; about each other and the future. We are optimistic, focusing on solutions rather than problems; we plan for success and are willing to take calculated risks instead of playing it safe. We are ambitious, setting our own standards higher than others’ expectations, and we celebrate each other's successes. We are empathetic, taking ourselves, others, and each other seriously without prejudgment, and we help each other and our clients, colleagues, and the world. How we work as a team Our team culture is crucial to us; that's why we take time daily to exchange ideas and discuss our work priorities. We support each other when facing challenges and foster a strong team spirit. We aim to learn and grow continuously from one another. We value diversity, and although we're not perfect, we regularly engage in open discussions about how we can improve in this area. Our current hybrid work approach focuses on adapting to different needs, including increased flexibility that works best for teams and individuals, with as much autonomy as possible. We also meet up in person regularly, such as twice a year at our offsite or during team dinners. Who is Ramboll? Ramboll is a global architecture, engineering, and consultancy firm. 
We believe the aim of sustainable change is to create a livable world where people thrive in healthy nature. Our strength is our employees, and our history is rooted in a clear vision of how a responsible company should act. Openness and curiosity are cornerstones of our corporate culture, fostering an inclusive mindset that seeks new, diverse, and innovative perspectives. We respect and welcome all forms of diversity and focus on creating an inclusive environment where everyone can thrive and reach their full potential. And what does Ramboll Tech do? Ramboll Tech accelerates innovation and digital transformation for the entire Ramboll group and directs all AI-related initiatives within the company. This includes collaborating with markets at Ramboll on their AI projects, as well as working on larger change processes within the corporation and developing proprietary AI products for Ramboll and our clients. Our team currently has over 300 employees from Denmark, Germany, the USA, and India. We are looking to quickly expand in key areas across Europe and the globe. Equality, Diversity, and Inclusion Equality, diversity, and inclusion are at the heart of what we do. At Ramboll, we believe that diversity is a strength and that different experiences and perspectives are essential to creating truly sustainable societies. We are committed to providing an inclusive and supportive work environment where everyone is able to flourish and reach their potential. We also know how important it is to achieve the right balance of where, when, and how much you work. At Ramboll, we offer flexibility as part of our positive and inclusive approach to work. We invite applications from candidates of all backgrounds and characteristics. Please let us know if there are any changes we could make to the application process to make it more comfortable for you. You can contact us at [email protected] with such requests. 
Important information We do require a cover letter together with your current CV (preferably without a photo) through our application tool, and we're eager to get to know you better in a conversation. Do you have any questions? Feel free to contact the hiring team through our recruitment tool.
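The Retrieval-Augmented Generation (RAG) work described in the Ramboll Tech posting above centers on retrieving relevant context before calling an LLM. The sketch below shows only that retrieval step, using bag-of-words cosine similarity over invented documents; a production pipeline would use learned embeddings and a vector or graph database, as the posting describes.

```python
import math
from collections import Counter

# Minimal retrieval step of a RAG pipeline. The corpus is invented; in a real
# system these would be chunks of domain documents stored in a vector database.
DOCS = [
    "bridge load calculations for steel structures",
    "sustainability reporting guidelines for buildings",
    "machine learning pipeline monitoring and versioning",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the corpus document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(DOCS, key=lambda d: cosine(qv, Counter(d.lower().split())))

best = retrieve("monitoring a machine learning pipeline")
print(best)  # the retrieved chunk would then be passed to the LLM as context
```

Swapping the term-count vectors for embedding vectors and the `max` scan for an approximate-nearest-neighbor index is essentially what moving from this toy to a vector-database-backed RAG system amounts to.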
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About RocketFrog.ai RocketFrog.ai is an AI Studio for Business, engineering competitive advantage through cutting-edge AI solutions in Healthcare, Pharma, BFSI, Hi-Tech, and Consumer Services. From Agentic AI and deep learning models to full-stack AI-first product development, we help enterprises translate innovation into measurable business impact. 🚀 Ready to take a Rocket Leap with Science? Role Overview We are seeking a proactive and analytical AI Product Analyst with 3–5 years of experience in the software industry to support our AI-led transformation initiatives. You will work closely with business and technology stakeholders to analyze current processes, define solution requirements, and enable the implementation of AI-first products and intelligent automation. The ideal candidate will contribute meaningfully to Agentic AI-based solution design, aligning innovation with business outcomes. Key Responsibilities Study deep functional problems across client domains and conceptualize effective AI/IT solutions in close consultation with internal technical teams. Collaborate directly with client stakeholders to gather, refine, and validate functional and non-functional requirements, ensuring clarity and alignment. Translate business objectives into structured workflows, use cases, and solution blueprints for AI-led transformation. Contribute to the conceptualization and validation of Agentic AI-based automation flows, involving task decomposition, intelligent agents, and adaptive execution. Create workflow diagrams, UI mockups, and system visuals using tools like Figma and Miro to support solution design and user alignment. Liaise between business and engineering teams to ensure smooth execution across the software development lifecycle (SDLC). Support user acceptance testing (UAT), coordinate feedback loops, and enable seamless system adoption. 
• Present findings, solution proposals, and performance insights using clear and compelling narratives in PowerPoint or dashboard formats.
• Track post-deployment performance and recommend improvements based on KPIs and user feedback.

Required Skills & Expertise
• Software Industry Experience (Mandatory): Proven track record in software/tech-driven environments, with a working understanding of SDLC and Agile methodologies.
• Information Structuring (Very Important): Proven ability to organize unstructured information into structured, navigable forms like mind maps or decision trees.
• AI Fluency (Very Important): Exposure to Generative AI concepts, Agentic AI frameworks, or Prompt Engineering.
• Communication & Storytelling (Very Important): Strong verbal and written communication skills, with proficiency in PowerPoint for narrative design and stakeholder presentations.
• Meta-Modelling & Schema Design / Knowledge Representation (Desirable): Strong experience creating information schemas, ontologies, and taxonomies to support AI and system design. Familiarity with Description Logic or semantic web standards (OWL, RDF, SHACL) is highly desirable.
• Business Process Mapping: Ability to model workflows and identify automation levers in domains like BFSI, Healthcare, or Consumer Services using swimlane diagrams, BPMN standards, etc.
• Data-Driven Insighting (Important): Ability to extract insights from data and translate them into actionable business recommendations.
• Tools: Hands-on experience with Miro, Asana, JIRA, and Confluence; familiarity with Excel and basic data visualization tools is a plus.
• Wireframing & Visualization: Basic proficiency in Figma for crafting UI sketches and visualizing process flows.
Key Stakeholders
• CxOs & Business Leaders – Strategic decision-makers and data consumers
• Process Owners – Operational stakeholders and subject matter experts
• Technology Teams – Engineering, AI, and product delivery teams

Qualifications
• Bachelor's or Master's degree from IIT, IIM, or another Tier-1 institution
• 3 to 5 years of experience as a Business Analyst, Product Analyst, or Product Manager in the software industry
• Proven success in driving process optimization, automation, or AI-led transformation initiatives
• Immediate joiners preferred

Why Join RocketFrog.ai?
• Be part of a mission-driven AI company redefining enterprise transformation
• Work on real, high-impact AI projects across multiple industries
• Contribute to next-gen innovations in Agentic AI and intelligent automation
• Collaborate with world-class AI scientists, engineers, and domain experts
• Enjoy autonomy, visibility, and rapid career growth in a flat and fast-moving team
• Immerse yourself in a culture that values continuous learning and curiosity in the evolving tech space
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
The Senior Semantic Modeler will be responsible for designing, developing, and maintaining semantic models using platforms like CubeDev, HoneyDew, AtScale, and others. This role requires a deep understanding of semantic modeling principles and practices. You will work closely with data architects, data engineers, and business stakeholders to ensure the accurate and efficient representation of data for Generative AI and Business Intelligence purposes. Experience with graph-based semantic models is a plus.

As a Product Architect - Semantic Modelling, your key responsibilities will include:
- Designing and developing semantic data models using platforms such as CubeDev, HoneyDew, AtScale, etc.
- Creating and maintaining semantic layers that accurately represent business concepts and support complex querying and reporting.
- Collaborating with stakeholders to understand data requirements and translating them into semantic models.
- Integrating semantic models with existing Gen AI & BI infrastructure alongside data architects and engineers.
- Ensuring the alignment of semantic models with business needs and data governance policies.
- Defining key business metrics within the semantic models for consistent and accurate reporting.
- Identifying and documenting metric definitions in collaboration with business stakeholders.
- Implementing processes for metric validation and verification to ensure accuracy and reliability.
- Monitoring and maintaining the performance of metrics within the semantic models and addressing any issues promptly.
- Developing efficient queries and scripts for data retrieval and analysis.
- Conducting regular reviews and updates of semantic models to ensure their effectiveness.
- Providing guidance and expertise on semantic technologies and best practices to the development team.
- Performing data quality assessments and implementing improvements for data integrity and consistency.
- Staying up to date with the latest trends in semantic technologies and incorporating relevant innovations into the modeling process.
- Secondary responsibilities may include designing and developing graph-based semantic models using RDF, OWL, and other semantic web standards.
- Creating and maintaining ontologies that accurately represent domain knowledge and business concepts.

Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6+ years of experience in semantic modeling, data modeling, or related roles.
- Proficiency in semantic modeling platforms such as CubeDev, HoneyDew, AtScale, etc.
- Strong understanding of data integration and ETL processes.
- Familiarity with data governance and data quality principles.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with graph-based semantic modeling tools such as Protégé, Jena, or similar is a plus.

Functional skills:
- Experience in the life sciences commercial analytics industry is preferred, with familiarity in industry-specific data standards.
- A working overview of Gen AI and its frameworks would be a plus.
- Certification in BI semantic modeling or related technologies.

Trinity is a life science consulting firm, founded in 1996, committed to providing evidence-based solutions for life science corporations globally. With over 25 years of experience, Trinity is dedicated to solving clients' most challenging problems through exceptional service, powerful tools, and data-driven insights. Trinity has 12 offices globally, serving 270+ life sciences customers with 1200+ employees. The India office was established in 2017 and currently has 350+ employees, with plans for exponential growth.

Qualifications: B.E. graduates are preferred.
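For candidates unfamiliar with the graph-based modeling the role lists as a plus, here is a minimal sketch of the idea: representing a small class hierarchy as RDF/OWL triples serialized to Turtle. The class names and namespace are invented for illustration, and the Turtle is emitted with plain string formatting rather than an RDF toolkit such as rdflib or Jena:

```python
# Minimal illustration of a graph-based semantic model: a tiny ontology
# with one class hierarchy, serialized as Turtle. The names (ex:Therapy,
# ex:Drug) and the example.org namespace are invented for this sketch.

PREFIXES = (
    "@prefix ex: <http://example.org/onto#> .\n"
    "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n"
    "@prefix owl: <http://www.w3.org/2002/07/owl#> .\n"
)

def owl_class(name: str, parent: str = "") -> str:
    """Emit a Turtle declaration for an OWL class, optionally with a superclass."""
    lines = [f"ex:{name} a owl:Class"]
    if parent:
        lines.append(f"    rdfs:subClassOf ex:{parent}")
    return " ;\n".join(lines) + " .\n"

# Build the ontology document: a root class and one subclass.
ontology = PREFIXES + owl_class("Therapy") + owl_class("Drug", parent="Therapy")
```

In a real project a tool like Protégé would manage these declarations; the point here is only that the model is a set of typed nodes and labeled edges rather than tables and columns.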
Posted 1 week ago
40.0 years
4 - 8 Lacs
Hyderābād
On-site
India - Hyderabad
JOB ID: R-216753
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jul. 22, 2025
CATEGORY: Engineering

[Role Name: IS Architecture]
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology & Innovation
Role GCF: 06A

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of Data Modelers.

Roles & Responsibilities:
- Provide oversight to data modeling team members.
- Develop and maintain conceptual, logical, and physical data models to support business needs
- Establish and enforce data standards, governance policies, and best practices
- Design and manage metadata structures to enhance information retrieval and usability
- Maintain comprehensive documentation of the architecture, including principles, standards, and models
- Evaluate and recommend technologies and tools that best fit the solution requirements
- Evaluate emerging technologies and assess their potential impact
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience: [GCF Level 6A]
- Doctorate degree and 8 years of experience in Computer Science, IT, or a related field, OR
- Master’s degree with 12 - 15 years of experience in Computer Science, IT, or a related field, OR
- Bachelor’s degree with 14 - 17 years of experience in Computer Science, IT, or a related field

Functional Skills:
Must-Have Skills:
- Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including performance tuning of big data processing

Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications:
- Certifications in Databricks are desired

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated awareness of presentation skills

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
📍 Remote | Junior Roles | <4 years experience | Industry-Specific | Full-Time
📅 Applications close: August 20th, 2025

About the Role
We’re hiring Vertical Intelligence Specialists to drive innovation at the intersection of AI, 3D engineering, and compliance for heavy industries. You will serve as the strategic brain of your vertical (Automotive / Construction / Railway / Naval / Aviation / Aerospace), turning complex industry knowledge into actionable product features, AI data signals, and go-to-market advantages. This role blends business insight, systems thinking, and cross-functional collaboration with our AI, engineering, and sales teams. You’ll act as a command center for your vertical, identifying client value, guiding technical teams, and fueling our platform’s intelligence.

About Neovoltis
Neovoltis is building the world’s first AI-native, modular engineering ecosystem for industrial sectors, combining 3D design, simulation, ontology-based AI, and multi-layered intelligence. We operate under a neural company model: modular cells, quarterly reviews, self-evolving teams. You’ll be embedded in a Vertical Cell, collaborating with the Intelligence Cell, Platform Engineering Cell, and Sales Cell, shaping the future of industrial AI, one vertical at a time.
What You’ll Do
🔧 Be the voice of the vertical inside Neovoltis:
• Identify and prioritize client-facing needs in your sector
• Translate business realities into feature specs, simulation logic, and 3D asset needs
• Lead quarterly reviews to steer your cell’s roadmap
• Represent domain-specific norms, risks, and opportunities

🤝 Bridge strategy with execution:
• Support the AI team with labeled data, business rules, and compliance metadata
• Co-design ontologies and data flows that reflect your sector’s real-world complexity
• Collaborate with sales & marketing to ensure messaging reflects technical realities and industry value
• Validate outputs from the platform team to ensure product-market fit

🎯 Drive customer insight and ecosystem logic:
• Engage with early users and customers from your sector
• Track standards, compliance frameworks, and trends (e.g., ISO, Eurocode, UNECE, ERA, etc.)
• Maintain domain-specific knowledge libraries (docs, specs, white papers, playbooks)

Ideal Profile
✅ 1 to 4 years of experience in automotive, construction, or railway engineering, simulation, product, or compliance
✅ Strong ability to translate between business, data, and technical stakeholders
✅ Excellent understanding of your vertical’s standards, processes, and pain points
✅ Autonomous, structured thinker with curiosity for AI and digital twins
✅ Previous collaboration with product or engineering teams is a plus
✅ Experience in simulation, CAD/BIM, regulatory compliance, or ERP/MES systems is a plus

Tools You Might Work With
• Notion, Gemini, Google Chat (cell coordination)
• Docs + Sheets + Diagrams (feature specs, compliance flows)
• 3D formats (STL, STEP, IFC) and workflows
• Industry standards: ISO, UNECE, ERA, EC codes, etc.
• Ontology & knowledge modeling tools (training provided)
• Internal GPT/LLM tools and dashboards for client behavior and trend detection

What You’ll Influence
✅ Shape the roadmap of Neovoltis for your entire industry
✅ Inform how AI models understand, process, and simulate real-world behavior
✅ Define business features and compliance intelligence pipelines
✅ Contribute to the go-to-market strategy for your vertical
✅ Be the neuron of your industry in a Nobel-worthy neural company

Compensation & Benefits
💰 Experience-based compensation
🎯 Performance-based bonus
🌍 Work remotely in a fully international, AI-first company
🚀 Join a pioneering neural organization model with real strategic ownership
📚 Access to training and conferences in your industry or tech sector
🧠 Long-term career growth in a fast-evolving company at the frontier of industry and AI

How to Apply
📧 Send your CV, a short motivation note, and your industry of interest to: careers@neovoltis.com
📅 Application deadline: August 20, 2025
🌐 Please indicate whether you’re applying for: Automotive, Construction, Railway, Naval, Aviation, or Aerospace
Posted 1 week ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.
Roles & Responsibilities:
- Develop and maintain conceptual, logical, and physical data models to support business needs
- Contribute to and enforce data standards, governance policies, and best practices
- Design and manage metadata structures to enhance information retrieval and usability
- Maintain comprehensive documentation of the architecture, including principles, standards, and models
- Evaluate and recommend technologies and tools that best fit the solution requirements
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience:
- Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT, or a related field

Functional Skills:
Must-Have Skills:
- Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including performance tuning of big data processing
- Implementing data testing and data quality strategies

Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications (preferred):
- Certifications in Databricks are desired

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule.
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Python Developer – Web Scraping & Data Processing

About the Role
We are seeking a skilled and detail-oriented Python Developer with hands-on experience in web scraping, document parsing (PDF, HTML, XML), and structured data extraction. You will be part of a core team working on aggregating biomedical content from diverse sources, including grant repositories, scientific journals, conference abstracts, treatment guidelines, and clinical trial databases.

Key Responsibilities
• Develop scalable Python scripts to scrape and parse biomedical data from websites, pre-print servers, citation indexes, journals, and treatment guidelines.
• Build robust modules for splitting multi-record documents (PDFs, HTML, etc.) into individual content units.
• Implement NLP-based field extraction pipelines using libraries like spaCy, NLTK, or regex for metadata tagging.
• Design and automate workflows using schedulers like cron, Celery, or Apache Airflow for periodic scraping and updates.
• Store parsed data in relational (PostgreSQL) or NoSQL (MongoDB) databases with efficient schema design.
• Ensure robust logging, exception handling, and content quality validation across all processes.

Required Skills and Qualifications
• 3+ years of hands-on experience in Python, especially for data extraction, transformation, and loading (ETL).
  ◦ Strong command of web scraping libraries: BeautifulSoup, Scrapy, Selenium, Playwright
  ◦ Proficiency in PDF parsing libraries: PyMuPDF, pdfminer.six, pdfplumber
• Experience with HTML/XML parsers: lxml, XPath, html5lib
• Familiarity with regular expressions, NLP, and field extraction techniques.
• Working knowledge of SQL and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
• Understanding of API integration (RESTful APIs) for structured data sources.
• Experience with task schedulers and workflow orchestrators (cron, Airflow, Celery).
• Version control using Git/GitHub, and comfort working in collaborative environments.
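As a flavor of the field-extraction work described above, here is a minimal sketch using only the standard-library `re` module. The record format, field names, and sample values are invented for illustration; a production pipeline would layer spaCy or NLTK on top of documents scraped with the libraries listed:

```python
import re

# Illustrative only: extract tagged metadata fields from one raw text record.
# The "Title:/DOI:/Registry ID:" layout is a made-up stand-in for a scraped page.
SAMPLE_RECORD = """
Title: A Phase II Study of Example-Drug in Advanced Disease
DOI: 10.1000/example.2025.001
Registry ID: NCT01234567
"""

PATTERNS = {
    "title": re.compile(r"^Title:\s*(.+)$", re.MULTILINE),
    "doi": re.compile(r"\b10\.\d{4,9}/\S+\b"),   # DOIs begin with the "10." prefix
    "nct_id": re.compile(r"\bNCT\d{8}\b"),       # ClinicalTrials.gov identifiers
}

def extract_fields(text: str) -> dict:
    """Return one value per known field, or None when a field is absent."""
    out = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(text)
        if m is None:
            out[field] = None
        else:
            # Prefer the capture group when the pattern defines one.
            out[field] = m.group(1) if pattern.groups else m.group(0)
    return out

fields = extract_fields(SAMPLE_RECORD)
```

Wrapping each pattern in a missing-field check like this is what the posting's "content quality validation" point is getting at: a scraper must tolerate records where a field never appears.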
Good to Have
• Exposure to biomedical or healthcare data parsing (e.g., abstracts, clinical trials, drug labels).
• Familiarity with cloud environments like AWS (Lambda, S3).
• Experience with data validation frameworks and building QA rules.
• Understanding of ontologies and taxonomies (e.g., UMLS, MeSH) for content tagging.

Why Join Us
• Opportunity to work on cutting-edge biomedical data aggregation for large-scale AI and knowledge graph initiatives.
• Collaborative environment with a mission to improve access and insights from scientific literature.
• Flexible work arrangements and access to industry-grade tools and infrastructure.
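The ontology-driven content tagging mentioned in the "Good to Have" list can be reduced to a very small sketch: match controlled-vocabulary terms against text and emit their codes. The term-to-code mapping below is illustrative only; a real pipeline would load an actual vocabulary such as MeSH or UMLS rather than hard-code entries:

```python
# Minimal vocabulary-based tagger. VOCAB is a tiny illustrative stand-in
# for a controlled vocabulary like MeSH; the codes are used here only as
# example identifiers, not as an authoritative mapping.
VOCAB = {
    "diabetes mellitus": "D003920",
    "insulin": "D007328",
    "clinical trial": "D016430",
}

def tag_text(text: str, vocab: dict) -> set:
    """Return the vocabulary codes for every term found in the text (case-insensitive)."""
    lowered = text.lower()
    return {code for term, code in vocab.items() if term in lowered}

tags = tag_text("A clinical trial of insulin dosing in Type 2 Diabetes Mellitus.", VOCAB)
```

Real taggers also handle synonyms, abbreviations, and term boundaries, which is where the NLP libraries named earlier come in; substring matching is only the starting point.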
Posted 1 week ago