2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About The Role:
Job Title: Data Science Engineer, AS
Location: Bangalore, India
Role Description:
We are seeking a Data Science Engineer to contribute to the development of intelligent, autonomous AI systems. The ideal candidate will have a strong background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems (a short retrieval sketch follows this posting).
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.
Your skills and experience:
- 4+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
How we'll support you
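A minimal sketch of the retrieval step in a RAG pipeline as referenced above: documents are embedded, indexed in FAISS, and the nearest passages are pulled back as context for an LLM. The model name, sample documents, and package choices (faiss-cpu, sentence-transformers) are illustrative assumptions, not the employer's actual stack.

```python
# RAG retrieval sketch: embed documents, index in FAISS, fetch top-k context.
# Assumes: pip install faiss-cpu sentence-transformers
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Payment settlement runs nightly at 02:00 UTC.",
    "Corporate accounts support multi-currency balances.",
    "Trade finance documents require dual approval.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice
emb = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on normalized vectors
index.add(emb)

query = model.encode(["When are payments settled?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
context = [docs[i] for i in ids[0]]  # passages to prepend to the LLM prompt
print(context)
```

In a full pipeline these passages would be concatenated into the prompt of whichever LLM the team uses.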
Posted 4 days ago
7.0 - 12.0 years
32 - 37 Lacs
Bengaluru
Work from Office
About The Role:
Job Title: Data Science Engineer, VP
Location: Bangalore, India
Role Description:
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications (a hybrid search sketch follows this posting).
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.
Your skills and experience:
- 13+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.
How we'll support you
About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
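"Hybrid search" in postings like this one usually means blending lexical and dense retrieval. A minimal sketch under assumed packages (rank-bm25, sentence-transformers) and arbitrary blend weights; this is an illustration of the technique, not the bank's actual ranking.

```python
# Hybrid search sketch: blend lexical BM25 scores with dense cosine scores.
# Assumes: pip install rank-bm25 sentence-transformers numpy
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

docs = [
    "FX settlement cut-off is 16:00 London time.",
    "Securities lending fees accrue daily.",
    "Cash sweep moves idle balances overnight.",
]
query = "When do FX trades settle?"

bm25 = BM25Okapi([d.lower().split() for d in docs])
lex = np.array(bm25.get_scores(query.lower().split()))

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice
emb = model.encode(docs, normalize_embeddings=True)
q = model.encode([query], normalize_embeddings=True)[0]
dense = emb @ q  # cosine similarity on normalized vectors

lex = lex / (lex.max() or 1.0)   # crude normalization for the demo
score = 0.4 * lex + 0.6 * dense  # blend weights are arbitrary assumptions
print(docs[int(score.argmax())])
```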
Posted 4 days ago
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Data Modeling Techniques and Methodologies
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary:
As a Data Modeler, you will be responsible for understanding business requirements and data mappings, creating and maintaining data models through different stages using data modeling tools, and handing over the physical design/DDL scripts to the data engineers for implementation of the data models. Your role involves creating and maintaining data models, ensuring performance and quality of deliverables.
Experience:
- Overall IT experience (no. of years): 7+
- Data Modeling experience: 3+
Key Responsibilities:
- Drive discussions with client teams to understand business requirements and develop data models that fit the requirements
- Drive discovery activities and design workshops with the client and support design discussions
- Create data modeling deliverables and get sign-off
- Develop the solution blueprint and scoping, and do estimation for the delivery project
Technical Experience:
Must Have Skills:
- 7+ years overall IT experience with 3+ years in Data Modeling
- Data modeling experience in Dimensional Modeling/3-NF modeling/NoSQL DB modeling
- Should have experience on at least one cloud DB design engagement
- Conversant with modern data platforms
- Work experience on data transformation and analytics projects; understanding of DWH
- Instrumental in DB design through all stages of data modeling
- Experience in at least one leading data modeling tool, e.g., Erwin, ER/Studio, or equivalent
Good to Have Skills:
- Any of these add-on skills: Data Vault Modeling, Graph Database Modeling, RDF, Document DB Modeling, Ontology, Semantic Data Modeling
- Preferred understanding of Data Analytics on the cloud landscape and Data Lake design knowledge
- Cloud Data Engineering, Cloud Data Integration
- Must be familiar with Data Architecture principles
Professional Experience:
- Strong requirement analysis and technical solutioning skills in Data and Analytics
- Excellent writing, communication and presentation skills
- Eagerness to learn new skills and develop oneself on an ongoing basis
- Good client-facing and interpersonal skills
Educational Qualification:
- B.E. or B.Tech. is a must
Qualification: 15 years full-time education
Posted 4 days ago
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Data Modeling Techniques and Methodologies
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary:
As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.
Experience:
- Overall IT experience (no. of years): 7+
- Data Modeling experience: 3+
- Data Vault Modeling experience: 2+
Key Responsibilities:
- Drive discussions with client deal teams to understand business requirements and how the data model fits into implementation and solutioning
- Develop the solution blueprint and scoping, estimation in the delivery project, and solutioning
- Drive discovery activities and design workshops with the client and lead strategic road-mapping and operating model design discussions
- Design and develop Data Vault 2.0-compliant models, including Hubs, Links, and Satellites (a table-definition sketch follows this posting)
- Design and develop the Raw Data Vault and Business Data Vault
- Translate business requirements into conceptual, logical, and physical data models
- Work with source system analysts to understand data structures and lineage
- Ensure conformance to data modeling standards and best practices
- Collaborate with ETL/ELT developers to implement data models in a modern data warehouse environment (e.g., Snowflake, Databricks, Redshift, BigQuery)
- Document models, data definitions, and metadata
Technical Experience:
Must Have Skills:
- 7+ years overall IT experience, 3+ years in Data Modeling, and 2+ years in Data Vault Modeling
- Design and development of the Raw Data Vault and Business Data Vault
- Strong understanding of Data Vault 2.0 methodology, including business keys, record tracking, and historical tracking
- Data modeling experience in Dimensional Modeling/3-NF modeling
- Hands-on experience with any data modeling tool (e.g., ER/Studio, ERwin, or similar)
- Solid understanding of ETL/ELT processes, data integration, and warehousing concepts
- Experience with any modern cloud data platform (e.g., Snowflake, Databricks, Azure Synapse, AWS Redshift, or Google BigQuery)
- Excellent SQL skills
Good to Have Skills:
- Any one of these add-on skills: Graph Database Modeling, RDF, Document DB Modeling, Ontology, Semantic Data Modeling
- Hands-on experience in any Data Vault automation tool (e.g., VaultSpeed, WhereScape, biGENIUS-X, dbt, or similar)
- Preferred understanding of Data Analytics on the cloud landscape and Data Lake design knowledge
- Cloud Data Engineering, Cloud Data Integration
Professional Experience:
- Strong requirement analysis and technical solutioning skills in Data and Analytics
- Excellent writing, communication and presentation skills
- Eagerness to learn new skills and develop oneself on an ongoing basis
- Good client-facing and interpersonal skills
Educational Qualification:
- B.E. or B.Tech. is a must
Qualification: 15 years full-time education
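To make the Hub and Satellite pattern above concrete, here is a minimal sketch of a Data Vault-style hub and satellite declared with SQLAlchemy. Table and column names (hub_customer, customer_hk, load_dts) follow common Data Vault conventions but are illustrative assumptions, not a client schema.

```python
# Data Vault sketch: a hub holds the immutable business key; a satellite
# holds descriptive attributes versioned by load timestamp.
from sqlalchemy import (Column, DateTime, ForeignKey, MetaData, String,
                        Table, create_engine)

metadata = MetaData()

hub_customer = Table(
    "hub_customer", metadata,
    Column("customer_hk", String(64), primary_key=True),  # hash of business key
    Column("customer_bk", String(50), nullable=False),    # business key
    Column("load_dts", DateTime, nullable=False),
    Column("record_source", String(50), nullable=False),
)

sat_customer_details = Table(
    "sat_customer_details", metadata,
    Column("customer_hk", String(64),
           ForeignKey("hub_customer.customer_hk"), primary_key=True),
    Column("load_dts", DateTime, primary_key=True),  # one row per load = history
    Column("name", String(100)),
    Column("segment", String(30)),
    Column("record_source", String(50), nullable=False),
)

engine = create_engine("sqlite:///:memory:")  # stand-in for Snowflake/Databricks etc.
metadata.create_all(engine)
print(sorted(metadata.tables))
```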
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
The Senior Semantic Modeler will be responsible for designing, developing, and maintaining semantic models using platforms like Cube.dev, Honeydew, AtScale, and others. This role requires a deep understanding of semantic modeling principles and practices. You will work closely with data architects, data engineers, and business stakeholders to ensure the accurate and efficient representation of data for Generative AI and Business Intelligence purposes. Experience with graph-based semantic models is a plus.
As a Product Architect - Semantic Modelling, your key responsibilities will include:
- Designing and developing semantic data models using platforms such as Cube.dev, Honeydew, AtScale, etc.
- Creating and maintaining semantic layers that accurately represent business concepts and support complex querying and reporting.
- Collaborating with stakeholders to understand data requirements and translating them into semantic models.
- Integrating semantic models with the existing Gen AI & BI infrastructure alongside data architects and engineers.
- Ensuring the alignment of semantic models with business needs and data governance policies.
- Defining key business metrics within the semantic models for consistent and accurate reporting.
- Identifying and documenting metric definitions in collaboration with business stakeholders.
- Implementing processes for metric validation and verification to ensure accuracy and reliability.
- Monitoring and maintaining the performance of metrics within the semantic models and addressing any issues promptly.
- Developing efficient queries and scripts for data retrieval and analysis.
- Conducting regular reviews and updates of semantic models to ensure their effectiveness.
- Providing guidance and expertise on semantic technologies and best practices to the development team.
- Performing data quality assessments and implementing improvements for data integrity and consistency.
- Staying up to date with the latest trends in semantic technologies and incorporating relevant innovations into the modeling process.
- Secondary responsibilities may include designing and developing graph-based semantic models using RDF, OWL, and other semantic web standards, and creating and maintaining ontologies that accurately represent domain knowledge and business concepts.
Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- Minimum of 6+ years of experience in semantic modeling, data modeling, or related roles.
- Proficiency in semantic modeling platforms such as Cube.dev, Honeydew, AtScale, etc.
- Strong understanding of data integration and ETL processes.
- Familiarity with data governance and data quality principles.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with graph-based semantic modeling tools such as Protégé, Jena, or similar is a plus.
Functional skills:
- Experience in the life sciences commercial analytics industry is preferred, with familiarity with industry-specific data standards.
- Knowledge of a Gen AI overview and frameworks would be a plus.
- Certification in BI semantic modeling or related technologies.
Trinity is a life science consulting firm, founded in 1996, committed to providing evidence-based solutions for life science corporations globally. With over 25 years of experience, Trinity is dedicated to solving clients' most challenging problems through exceptional service, powerful tools, and data-driven insights. Trinity has 12 offices globally, serving 270+ life sciences customers with 1200+ employees. The India office was established in 2017 and currently has around 350+ employees, with plans for exponential growth.
Qualifications: B.E. graduates are preferred.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You are a highly skilled and motivated Senior Content Taxonomy Analyst who will be responsible for designing, developing, and maintaining ontologies and taxonomies that drive the content strategy and underpin the content management platform. Your role is crucial in ensuring that information is organized, accessible, and effectively utilized across the organization.
As a Senior Content Taxonomy Analyst, you will collaborate with subject matter experts to gather requirements, define scope, and ensure the accuracy and completeness of knowledge models. You will apply semantic web standards and technologies such as RDF, OWL, and SKOS to build robust and scalable ontologies (a small SKOS sketch follows this posting).
You will develop and implement a comprehensive content taxonomy strategy that aligns with business goals and user needs. This includes defining content types, metadata schemas, and controlled vocabularies to support content creation, management, and retrieval. Establishing guidelines and best practices for content tagging and classification will also be part of your responsibilities.
Working closely with the content management platform team, you will integrate ontologies and taxonomies into the platform's architecture and functionality. You will ensure that the platform effectively supports semantic search, faceted navigation, and other knowledge-driven features. Additionally, you will provide guidance and support to content authors and editors on the use of taxonomies and metadata.
Your role will also involve contributing to the development of data governance policies and standards related to ontology and taxonomy management. You will ensure compliance with industry best practices and relevant standards while promoting the use of ontologies and taxonomies across the organization.
Collaboration with cross-functional teams, including content creators, developers, data scientists, and business stakeholders, will be essential. You will need to communicate complex technical concepts clearly and effectively to both technical and non-technical audiences. Staying up to date on the latest trends and technologies in ontology and knowledge representation and participating in industry communities will also be part of your role.
In addition to a Bachelor's degree in Computer Science, Information Science, Library Science, Linguistics, or a related field, a Master's degree is preferred. You should have 3+ years of experience in ontology development, taxonomy design, or knowledge representation. A strong understanding of semantic web technologies, experience with ontology editing tools and content management systems, and excellent analytical, problem-solving, and communication skills are necessary. The ability to work independently and collaboratively in a fast-paced environment is also required.
If you have experience with machine learning, natural language processing techniques, knowledge graph technologies, data governance principles, ontology editing tools, or industry domains like Architecture, Engineering & Construction, Manufacturing, Media & Entertainment, or Product Design & Development, it would be considered a plus.
Join us at Autodesk, where amazing things are created every day with our software. Be your whole, authentic self and do meaningful work that helps build a better future for all. Shape the world and your future with us!
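As a small illustration of the SKOS standard named above, here is a sketch using Python's rdflib to declare two taxonomy concepts linked by a broader/narrower relation. The namespace and concept names are invented for the example.

```python
# SKOS taxonomy sketch: two concepts linked by skos:broader.
# Assumes: pip install rdflib
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

TAX = Namespace("http://example.org/taxonomy/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("tax", TAX)

g.add((TAX.Manufacturing, RDF.type, SKOS.Concept))
g.add((TAX.Manufacturing, SKOS.prefLabel, Literal("Manufacturing", lang="en")))

g.add((TAX.AdditiveManufacturing, RDF.type, SKOS.Concept))
g.add((TAX.AdditiveManufacturing, SKOS.prefLabel,
       Literal("Additive Manufacturing", lang="en")))
g.add((TAX.AdditiveManufacturing, SKOS.broader, TAX.Manufacturing))

print(g.serialize(format="turtle"))
```

Controlled vocabularies built this way can back faceted navigation and semantic search in a content platform.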
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
We help the world run better by enabling individuals to bring out their best at SAP. Our company culture is centered around collaboration and a shared passion for improving the world's operations. We strive each day to lay the groundwork for the future, fostering a workplace that celebrates diversity, values flexibility, and is dedicated to purpose-driven and forward-thinking work. Within our highly collaborative and supportive team environment, we prioritize learning and development, recognize individual contributions, and offer a range of benefit options for our employees.
The SAP HANA Database and Analytics Core engine team is currently seeking an intermediate or senior developer to join our Knowledge Graph Database System engine development efforts. In this role, you will be responsible for designing, developing features, and maintaining our Knowledge Graph engine, which operates within the SAP HANA in-memory database. At SAP, all members of our engineering team, including management, are hands-on and deeply involved in the coding process. If you believe you can thrive in such an environment and possess the required skills and experience, we encourage you to apply without hesitation. As a developer on our team, you will have the opportunity to contribute to the following.
The team you will be working with is responsible for the development of the HANA Knowledge Graph, a high-performance graph analytics database system that is accessible to SAP customers, partners, and internal groups as part of the HANA Multi Model Database System. This system is designed to handle large-scale graph data processing and execute complex graph queries with exceptional efficiency. By leveraging a massively parallel processing (MPP) architecture and adhering to the W3C web standards specifications for graph data and the query language, RDF and SPARQL, the HANA Knowledge Graph enables organizations to extract insights from their graph datasets, identify patterns, conduct advanced graph analytics, and unlock the value of interconnected data.
Key components of the HANA Knowledge Graph system include, but are not limited to: Storage, Data Load, Query Parsing, Query Planning and Optimization, Query Execution, Transaction Management, Memory Management, Network Communications, System Management, Data Persistence, Backup & Restore, and Performance Tuning. This system is poised to play a crucial role in the development of various AI products at SAP.
At SAP, we believe in fostering a culture of inclusion, prioritizing the health and well-being of our employees, and offering flexible working models to ensure that everyone, regardless of background, feels valued and can perform at their best. We are dedicated to leveraging the unique capabilities and qualities that each individual brings to our company, investing in our employees' growth, and creating a more equitable world. SAP is an equal opportunity workplace and an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you require accommodation or special assistance during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com.
For SAP employees, only permanent roles are eligible for the SAP Employee Referral Program, subject to the eligibility rules outlined in the SAP Referral Policy. Specific conditions may apply to roles in Vocational Training.
Successful candidates may be subject to a background verification conducted by an external vendor.
Requisition ID: 396628 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
Posted 1 week ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Job Title: Data Science Engineer, AVP
Location: Bangalore, India
Role Description:
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP); a spaCy NER sketch follows this posting.
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.
Your skills and experience:
- 8+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.
How we'll support you
About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
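Since the posting names spaCy for NER work, here is a minimal sketch of extracting named entities with spaCy. The sample sentence is invented, and the small English model must be downloaded separately.

```python
# NER sketch with spaCy: extract entities (ORG, GPE, DATE, MONEY, ...) from text.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp opened a Bangalore office on 1 March 2017 for $5 million.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
```

In document-parsing pipelines such a model is typically fine-tuned on domain-specific annotations rather than used off the shelf.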
Posted 3 weeks ago
5.0 - 9.0 years
10 - 16 Lacs
Pune, Greater Noida, Delhi / NCR
Work from Office
Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyze data from graph databases (a query sketch follows this posting).
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines using Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs to represent and store data in relational databases.
- Write clean, efficient, and well-documented code.
- Troubleshoot and fix bugs.
- Collaborate with other developers and stakeholders to deliver high-quality solutions.
- Stay up to date with the latest technologies and trends.
Qualifications:
- Strong proficiency in SPARQL and the RDF query language
- Strong proficiency in Python and REST APIs
- Experience with database technologies: SQL and SPARQL
- Strong problem-solving and debugging skills
- Ability to work independently and as part of a team
Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP
- Experience with version control systems like GitHub
- Understanding of environments, deployment processes, and cloud infrastructure
Share your resume at Aarushi.Shukla@coforge.com if you are an early or immediate joiner.
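A minimal sketch of running a SPARQL query from Python with SPARQLWrapper, here against the public Wikidata endpoint; the endpoint choice and query are illustrative, not the client's data.

```python
# SPARQL query sketch via SPARQLWrapper against a public endpoint.
# Assumes: pip install sparqlwrapper
from SPARQLWrapper import JSON, SPARQLWrapper

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="sparql-example/0.1")  # polite user agent
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["itemLabel"]["value"])
```

Optimizing such queries generally means making triple patterns selective early and bounding result size, which is what the "optimize complex queries" responsibility above points at.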
Posted 3 weeks ago
15.0 - 20.0 years
37 - 45 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India
Role Description:
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Design & Develop Agentic AI Applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.
Your skills and experience:
- 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.
How we'll support you
Posted 3 weeks ago
4.0 - 9.0 years
10 - 19 Lacs
Pune, Greater Noida, Delhi / NCR
Work from Office
Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyze data from graph databases (an in-memory RDF query sketch follows this posting).
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines using Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs to represent and store data in relational databases.
Mandatory Skills: Python, RDF, Neo4j, GraphDB, version control systems, API frameworks
Qualifications:
- Strong proficiency in SPARQL and the RDF query language
- Strong proficiency in Python and REST APIs
- Experience with database technologies: SQL and SPARQL
Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Experience with version control systems like GitHub.
- Understanding of environments, deployment processes, and cloud infrastructure.
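Complementing the remote-endpoint example shown earlier among these postings, here is a sketch that parses a few RDF triples into an in-memory rdflib graph and queries them with SPARQL. The data and prefix are invented for illustration.

```python
# In-memory RDF sketch: parse Turtle triples and run a SPARQL query locally.
# Assumes: pip install rdflib
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?a ?c WHERE {
  ?a ex:knows ?b .
  ?b ex:knows ?c .   # friend-of-a-friend pattern
}
"""
for a, c in g.query(query):
    print(a, c)
```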
Posted 3 weeks ago
12.0 - 20.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Job Title: Oracle EBS & Oracle Cloud Technical Consultant (12+ Years Experience)
Job Summary:
We are looking for a highly experienced Oracle Technical Consultant with 12+ years of experience in Oracle E-Business Suite (EBS) and Oracle Cloud Applications (Fusion). The ideal candidate will have deep hands-on experience in technical implementations, reporting tools, APIs, and end-to-end project delivery across Financial modules.
Key Responsibilities:
- Lead and execute end-to-end technical implementation projects for Oracle EBS and Oracle Cloud Applications.
- Develop and customize reports using BI Publisher (BIP), OTBI, FTBI, RTF, and RDF.
- Work across Finance modules (GL, AP, AR, PO), building robust solutions to meet business needs.
- Design and manage Oracle APIs, especially for AR and outbound data conversions.
- Collaborate with cross-functional teams and stakeholders to gather requirements and deliver quality solutions.
- Ensure timely project communication, documentation, and status updates to all stakeholders.
- Manage and mentor a diversified team across onsite-offshore locations.
- Participate in code reviews, technical design sessions, and testing cycles including UAT.
- Build reusable utilities and maintain best practices across development and deployment pipelines.
Must-Have Skills:
- Strong technical expertise in Oracle EBS (R12/11i) and Oracle Fusion Applications.
- Proficient in: BI Publisher, OTBI, FTBI, RDF/RTF; Oracle APIs and outbound conversions; SQL/PL-SQL programming; data migration via FBDI and Web ADI.
- Good understanding of EDI, data validation, and document formatting.
- Excellent communication and client-facing presentation skills.
- Strong experience with tools: MS Project, Excel, PowerPoint, Jira, ServiceNow, Zoho.
- Proven leadership in managing global technical teams.
Good to Have:
- Experience with Agile/DevOps environments
- Certifications in Oracle Cloud or EBS
- Working knowledge of cloud integration tools or middleware
Location: Hyderabad (Hybrid)
Job Type: Full-time / Permanent
Salary: Competitive, based on experience
Notice Period: Immediate to 30 days preferred
Why Join Us?
- Work on large-scale, high-impact Oracle projects
- Collaborative work culture with growth opportunities
- Exposure to modern Oracle Cloud technologies and tools
Apply Now: Share your updated resume at ganesh.bandaru@cesltd.com
Posted 1 month ago
6.0 - 11.0 years
40 - 45 Lacs
Pune
Work from Office
About the Team - Cash Management Payment Orchestration:
Cash Management Payment Orchestration has end-to-end responsibility for application development and management of the respective application portfolio. The portfolio covers the strategic payment processing build-out and core products that Corporate Bank offers to its international clients, such as DDA/Cash Accounts, Core Banking, and Payments Processing and Clearing globally. It is also the global cash settlement platform for all other business lines.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
Your key responsibilities - What You'll Do:
As part of our global team you will work on various components as a Software Engineer. The Engineer will be responsible for the DB aspects of the technical application framework that supports our HPE NonStop-based application db-Internet. We are building an excellent technical team to support our critical application, enhancing its current capabilities, and looking to create opportunities beyond this to progress into the more modern aspects of our application.
- Product update and support of automation and monitoring tools such as Reflex and Multibatch.
- Enhance monitoring capability to cover more aspects of our applications.
- Ongoing evolution of our disaster recovery strategy, planning, and supporting tools.
- Support and development of our automated test system build process.
- TACL coding and testing of routines to support our application.
- Upgrade activities such as MQ Series, operating system, and hardware upgrades.
- Performance and capacity management aspects of our application.
- Understanding of network segregation and firewalling (ACR).
- General TCP/IP configuration and encryption.
- Update and adherence to db-Internet security controls.
- Collaborate with teams and individuals across the applications to accomplish common goals.
- Work with the team on non-functional requirements, technical analysis and design.
Your skills and experience - Skills You'll Need:
- Good level of experience in the technical management of HPE NonStop and/or the application Atlas Global Banking/db-Internet.
- Good working knowledge of HPE NonStop products and utilities such as FUP, SQL, ENFORM, TACL, TMF/RDF, SCF and Safeguard.
- Good working knowledge of OSS, its utilities and directory structures, including an understanding of our internal middleware called Ibus Bridge, its configuration and setup.
- Any knowledge of Java would be advantageous for the future.
- Proven ability to effectively assess and mitigate project risks and dependencies.
- Experienced in effectively communicating with and positively influencing project stakeholders and team members.
Posted 1 month ago
7.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Oracle EBS Payroll Techno-Functional professional with 7 to 10 years of experience. The ideal candidate will have expertise in Oracle Payroll, preferably with knowledge of India legislation.
Roles and Responsibility:
- Implement and manage Oracle Payroll systems, including setup, configuration, and maintenance.
- Provide technical support and troubleshoot issues related to Oracle Payroll.
- Collaborate with cross-functional teams to ensure seamless integration between Oracle Payroll and other applications.
- Develop and maintain documentation of Oracle Payroll processes and procedures.
- Ensure compliance with industry standards and best practices for Oracle Payroll management.
- Analyze and resolve complex technical issues related to Oracle Payroll.
Job Requirements:
- Strong understanding of Oracle HRMS/Payroll, including techno-functional aspects.
- Excellent written and verbal communication skills, with the ability to work effectively with stakeholders.
- Experience with Oracle Alert, FND User, Responsibilities, Request Sets, value sets, and profile options.
- Expertise in PL/SQL, packages, DB views, triggers, and other relevant technologies.
- Ability to work on support activities, including managing Oracle Payroll patches and version upgrades.
- Experience with interfaces between applications, such as Oracle Payroll to other systems.
- B.Tech/MCA in Technology (any stream).
- Solid techno-functional background in Oracle HRMS/Payroll, with hands-on experience in Oracle Payroll.
Technical Skills: BG setup, Payroll setup, Fast Formula, formula functions (creation/debug), payroll run, retro concept, pre-payments, balances, element setup, element links, assignment sets, element sets, EIT, SIT, costing, the balancing concept between Oracle Payroll and Finance, security profiles, KFF, DFF, request sets, value sets, and profile options. Experience with concurrent programs (PL/SQL, SQL*Loader, Host, Java-based, SQL file), HRMS and Payroll back-end APIs, integration between modules (HRMS to Payroll, Payroll to Finance), WebADI, hooks, AME setup, form personalization, form customization, XML Publisher, Oracle Reports Builder (RDF), SQL*Loader, UTL_FILE, and Oracle patching.
Posted 1 month ago
6.0 - 8.0 years
11 - 15 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Employment type: Freelance, Project Based
What this is about:
At e2f, we offer an array of remote opportunities to work on compelling projects aimed at enhancing AI capabilities. As a significant team member, you will help shape the future of AI-driven solutions. We value your skills and domain expertise, offering competitive compensation and flexible working arrangements.
We are looking for an experienced Data Analyst with a strong background in SPARQL for a project-based position. The ideal candidate will be responsible for writing, reviewing, and optimizing queries to extract valuable insights from our knowledge base.
Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field
- Proven experience with SPARQL
- Familiarity with the Cypher query language
- Expertise in knowledge graphs
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to prioritize and manage workload efficiently
- Understanding of and adherence to project guidelines and policies
Responsibilities:
- Commit a minimum of 4 hours per day
- Flexible schedule (you can split your hours as you prefer)
- Participate in a training meeting
- Adhere to deadlines and guideline standards
What We Offer:
- Engage in exciting generative AI development from the convenience of your home
- Enjoy flexible work hours and availability
If you're interested: Apply to our job advertisement. We'll review your profile and, if it aligns with our search, we will contact you as soon as possible to share rates and further details.
About Us: e2f is dedicated to facilitating natural communication between people and machines across languages and cultures. With expertise in data science, we provide top-tier linguistic datasets for AI and NLP projects. Know more here: www.e2f.com
Posted 1 month ago
3.0 - 5.0 years
13 - 17 Lacs
Mumbai
Work from Office
At Siemens Energy, we can. Our technology is key, but our people make the difference. Brilliant minds innovate. They connect, create, and keep us on track towards changing the world's energy systems. Their spirit fuels our mission.
Software Developer - Data Integration Platform - Mumbai or Pune, Siemens Energy, Full Time
Looking for a challenging role? If you really want to make a difference, make it with us. We make real what matters.
About the role
Technical Skills (Mandatory):
- Python (Data Ingestion Pipelines): Proficiency in building and maintaining data ingestion pipelines using Python (a small ingestion sketch follows this posting).
- Blazegraph: Experience with Blazegraph technology.
- Neptune: Familiarity with Amazon Neptune, a fully managed graph database service.
- Knowledge Graph (RDF, Triples): Understanding of RDF (Resource Description Framework) and triple stores for knowledge graph management.
- AWS Environment (S3): Experience working with AWS services, particularly S3 for storage solutions.
- Git: Proficiency in using Git for version control.
Optional and good-to-have skills:
- Azure DevOps (optional): Experience with Azure DevOps for CI/CD pipelines and project management (optional but preferred).
- Metaphactory by Metaphacts (very optional): Familiarity with Metaphactory, a platform for knowledge graph management.
- LLM / Machine Learning Experience: Experience with Large Language Models (LLMs) and machine learning techniques.
- Big Data Solutions (optional): Experience with big data solutions is a plus.
- SnapLogic / Alteryx / ETL Know-How (optional): Familiarity with ETL tools like SnapLogic or Alteryx is optional but beneficial.
We don't need superheroes, just super minds.
- A degree in Computer Science, Engineering, or a related field is preferred.
- Professional software development: Demonstrated experience in professional software development practices.
- Years of experience: 3-5 years of relevant experience in software development and related technologies.
Soft Skills:
- Strong problem-solving skills.
- Excellent communication and teamwork abilities.
- Ability to work in a fast-paced and dynamic environment.
- Strong attention to detail and commitment to quality.
- Fluent in English (spoken and written).
We've got quite a lot to offer. How about you? This role is based in Pune or Mumbai, where you'll get the chance to work with teams impacting entire cities, countries, and the shape of things to come.
We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at
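A minimal sketch of the kind of ingestion step the mandatory skills describe: posting RDF triples to a SPARQL 1.1 update endpoint (Blazegraph-style) from Python. The endpoint URL and triple are invented placeholders; Amazon Neptune would use its own HTTPS endpoint with authentication.

```python
# Ingestion sketch: push RDF triples to a SPARQL 1.1 update endpoint.
# The URL below reflects Blazegraph's typical local default and is an assumption.
import requests

triples = """
<http://example.org/asset/42> <http://example.org/hasStatus> "active" .
"""

resp = requests.post(
    "http://localhost:9999/blazegraph/sparql",        # hypothetical endpoint
    data={"update": f"INSERT DATA {{ {triples} }}"},  # SPARQL 1.1 UPDATE
    timeout=30,
)
resp.raise_for_status()
print("ingested, HTTP", resp.status_code)
```

In a production pipeline this step would typically read staged files from S3 and batch the INSERT statements.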
Posted 1 month ago
10.0 - 15.0 years
35 - 40 Lacs
Bengaluru
Work from Office
Job Title: Data Science AI Engineer, AVP
Location: Bangalore, India
Role Description:
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Design & Develop Agentic AI Applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.
Your skills and experience:
- 10+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.
How we'll support you
Posted 1 month ago
1.0 - 5.0 years
1 - 4 Lacs
Kolkata
Work from Office
Roles and Responsibilities:
- PL/SQL query tuning: optimise SQL to execute fast and reduce cost
- Ability to write proficient queries
- Worked on RDF
- Experience with XML Publisher
Benefits: Salary + annual bonus + medical insurance (coverage between 3 to 5 Lakhs for the employee, spouse and their children). MCC provides insurance benefits like coverage of pre-existing diseases and all day-care procedures, and the facility of including family members like spouse and children in the policy.
Desired Candidate Profile - hands-on experience in the following skills:
- PL/SQL query tuning
- RDF
- XML Publisher
Posted 1 month ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We're building the technological foundation for our company's Semantic Layer: a common data language powered by Anzo / Altair Graph Studio. As a Senior Software Engineer, you'll play a critical role in setting up and managing this platform on AWS EKS, enabling scalable, secure, and governed access to knowledge graphs, parallel processing engines, and ontologies across multiple domains, including highly sensitive ones like clinical trials. You'll help design and implement a multi-tenant, cost-aware, access-controlled infrastructure that supports internal data product teams in securely building and using connected knowledge graphs.
Key Responsibilities:
- Implement a Semantic Layer on Anzo / Altair Graph Studio and Anzo Graph Lakehouse in a Kubernetes or ECS environment (EKS/ECS)
- Develop and manage Infrastructure as Code using Terraform and configuration management via Ansible
- Integrate platform authentication and authorization with Microsoft Entra ID (Azure AD)
- Design and implement multi-tenant infrastructure patterns that ensure domain-level isolation and secure data access (a namespace-per-tenant sketch follows this posting)
- Build mechanisms for cost attribution and usage visibility per domain and use-case team
- Implement fine-grained access control, data governance, and monitoring for domains with varying sensitivity (e.g., clinical trials)
- Automate deployment pipelines and environment provisioning for dev, test, and production environments
- Collaborate with platform architects, domain engineers, and data governance teams to curate and standardize ontologies
Minimum Requirements:
- 4-9 years of experience in Software/Platform Engineering, DevOps, or Cloud Infrastructure roles
- Proficiency in Python for automation, tooling, or API integration
- Hands-on experience with AWS EKS/ECS and associated services (IAM, S3, CloudWatch, etc.)
- Strong skills in Terraform, Ansible, and IaC for infrastructure provisioning and configuration
- Familiarity with RBAC, OIDC, and Microsoft Entra ID integration for enterprise IAM
- Understanding of Kubernetes multi-tenancy and security best practices
- Experience building secure and scalable platforms supporting multiple teams or domains
Preferred Qualifications:
- Experience deploying or managing Anzo, Altair Graph Studio, or other knowledge graph / semantic layer tools
- Familiarity with RDF, SPARQL, or ontologies in an enterprise context
- Knowledge of data governance, metadata management, or compliance frameworks
- Exposure to cost management tools like AWS Cost Explorer or Kubecost, or custom chargeback systems
Why Join Us?
- Be part of a cutting-edge initiative shaping enterprise-wide data access and semantics
- Work in a cross-functional, highly collaborative team focused on responsible innovation
- Influence the architecture and strategy of a foundational platform from the ground up
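One common pattern for the multi-tenant isolation described above is a Kubernetes namespace per domain, with labels that RBAC rules and cost tools can key off. A minimal sketch with the official Kubernetes Python client, assuming a kubeconfig already pointing at the EKS cluster; the namespace and label names are invented.

```python
# Multi-tenancy sketch: one labeled namespace per data domain, so RBAC rules
# and cost attribution (e.g. a chargeback tool) can aggregate on the labels.
# Assumes: pip install kubernetes, and a kubeconfig for the target cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

ns = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="domain-clinical-trials",            # hypothetical tenant
        labels={"tenant": "clinical-trials",
                "cost-center": "ct-001"},         # hypothetical label scheme
    )
)
v1.create_namespace(ns)
print("created namespace", ns.metadata.name)
```

In practice this provisioning would live in Terraform or a CI pipeline rather than an ad hoc script, with network policies and quotas added per namespace.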
Posted 1 month ago
8.0 - 13.0 years
10 - 20 Lacs
Noida, Hyderabad, Pune
Hybrid
Oracle HRMS Technical Consultant (8+ years) with experience in EBS R12, Core HR, Payroll, OTL, SSHR, OLM, Fast Formula, PL/SQL, XML/RDF, Workflow, and Absence Management. Contract-to-hire at TE Infotech (Oracle India), converted to permanent. Locations: Bengaluru/Hyderabad/Chennai/Pune/Noida. Apply: ssankala@toppersedge.com
Posted 1 month ago
4.0 - 6.0 years
7 - 10 Lacs
Hyderabad
Work from Office
What you will do In this vital role you will be part of Researchs Semantic Graph Team is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. This role primarily focuses on working with technologies such as RDF, SPARQL, and Python. In addition, the position involves semantic data integration and cloud-based data engineering. The ideal candidate should possess experience in the pharmaceutical or biotech industry, demonstrate deep technical skills, and be proficient with big data technologies and demonstrate experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential for this role. In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively. Roles & Responsibilities: Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies. Develop and maintain semantic data models for biopharma scientific data Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks. Ensure interoperability across federated data sources, linking relational and graph-based data. Implement and optimize CI/CD pipelines using GitLab and AWS. Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions. Collaborate with global multi-functional teams, including research scientists, Data Architects, Business SMEs, Software Engineers, and Data Scientists to understand data requirements, design solutions, and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Collaborate with data scientists, engineers, and domain experts to improve research data accessibility. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain comprehensive documentation of processes, systems, and solutions. Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge. Basic Qualifications and Experience: Doctorate Degree OR Masters degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelors degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications and Experience: 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms) Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, Graph Databases (e.g. Allegrograph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development and semantic modeling practices. 
Cloud and Automation Expertise: Solid experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions. Technical Problem-Solving: Excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets. Good-to-Have Skills: Experience in biotech/drug discovery data engineering; experience applying knowledge graphs, taxonomy, and ontology concepts in life sciences and chemistry domains; experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune); familiarity with Cypher, GraphQL, or other graph query languages; experience with big data tools (e.g. Databricks); experience in biomedical or life sciences research data management. Soft Skills: Excellent critical-thinking and problem-solving skills; good communication and collaboration skills; demonstrated awareness of how to function in a team setting; demonstrated presentation skills.
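For a concrete sense of the semantic-pipeline work described above, here is a minimal, hedged sketch that loads an RDF graph and runs a SPARQL query with Python's rdflib; the file name and the query are illustrative assumptions, not artifacts of this role.

```python
# Minimal sketch: querying an RDF graph with rdflib (pip install rdflib).
# The Turtle file and the query below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("compounds.ttl", format="turtle")  # assumed biopharma dataset

# List every resource that carries an rdfs:label, alphabetically.
SPARQL = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?label
WHERE { ?entity rdfs:label ?label . }
ORDER BY ?label
"""

for row in g.query(SPARQL):
    print(row.entity, row.label)
```

In a production pipeline the same pattern would typically sit behind an ETL step that first maps relational rows into triples before querying.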
Posted 1 month ago
6.0 - 8.0 years
11 - 15 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Employment type: Freelance, Project Based. What this is about: At e2f, we offer an array of remote opportunities to work on compelling projects aimed at enhancing AI capabilities. As a significant team member, you will help shape the future of AI-driven solutions. We value your skills and domain expertise, offering competitive compensation and flexible working arrangements. We are looking for an experienced Data Analyst with a strong background in SPARQL for a project-based position. The ideal candidate will be responsible for writing, reviewing, and optimizing queries to extract valuable insights from our knowledge base. Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. Proven experience with SPARQL. Familiarity with the Cypher query language. Expertise in knowledge graphs. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Ability to prioritize and manage workload efficiently. Understanding of and adherence to project guidelines and policies. Responsibilities: You can commit a minimum of 4 hours per day. Flexible schedule (you can split your hours as you prefer). Participate in a training meeting. Adhere to deadlines and guideline standards. What We Offer: Engage in exciting generative AI development from the convenience of your home. Enjoy flexible work hours and availability. If you're interested: Apply to our job advertisement. We'll review your profile and, if it aligns with our search, we will contact you as soon as possible to share rates and further details. About Us: e2f is dedicated to facilitating natural communication between people and machines across languages and cultures. With expertise in data science, we provide top-tier linguistic datasets for AI and NLP projects. Know more here: www.e2f.com
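Since this role centres on writing and optimizing SPARQL, here is a minimal, hedged sketch of running a query against a public endpoint with the SPARQLWrapper library; the Wikidata endpoint and the query itself are illustrative choices, not part of the e2f project.

```python
# Minimal sketch: querying a public SPARQL endpoint with SPARQLWrapper
# (pip install sparqlwrapper). Endpoint and query are illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .   # instances of "house cat"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["item"]["value"], binding["itemLabel"]["value"])
```

Query optimization in this setting usually means reordering triple patterns, adding LIMITs, and avoiding unbounded property paths so the endpoint can prune work early.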
Posted 1 month ago
1.0 - 4.0 years
2 - 6 Lacs
Mumbai, Pune, Chennai
Work from Office
Graph Data Engineer required for a complex Supply Chain project. Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and a graph language (Cypher), exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark, SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and PowerBI. The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, graph data design. Relational databases experience; hands-on SQL experience is required. Desirable (optional) skills: Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
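As a hedged illustration of the Cypher and Neo4j Aura work this role calls for, the sketch below runs a query through the official Neo4j Python driver; the connection URI, credentials, and the Supplier/Part schema are all assumptions made for the example.

```python
# Minimal sketch: a Cypher query via the official Neo4j Python driver
# (pip install neo4j). URI, credentials, and the Supplier/Part schema
# are hypothetical placeholders for a supply-chain graph.
from neo4j import GraphDatabase

URI = "neo4j+s://<your-aura-instance>.databases.neo4j.io"  # placeholder
AUTH = ("neo4j", "<password>")                              # placeholder

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, summary, keys = driver.execute_query(
        """
        MATCH (s:Supplier)-[:SUPPLIES]->(p:Part)
        RETURN s.name AS supplier, count(p) AS parts_supplied
        ORDER BY parts_supplied DESC
        LIMIT 10
        """
    )
    for record in records:
        print(record["supplier"], record["parts_supplied"])
```

A NeoDash or Tableau layer would typically consume a query like this, so keeping aggregations in Cypher rather than in the dashboard is the usual tuning lever.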
Posted 1 month ago
3.0 - 5.0 years
37 - 45 Lacs
Bengaluru
Work from Office
Job Title- Senior Data Science Engineer Lead Location- Bangalore, India Role Description We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector DB, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What we'll offer you As part of our flexible scheme, here are just some of the benefits that you'll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complimentary Health screening for 35 yrs. and above Your key responsibilities Design & Develop Agentic AI Applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-Tune Language Models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications. NER Models: Train OCR and NLP leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimize AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Your skills and experience 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems. Proficient in Python, Python API frameworks, SQL and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering. Understanding of semantic technologies, ontologies, and RDF/SPARQL.
Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Familiarity with RAG architectures and hybrid search methodologies. Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce. Strong problem-solving abilities and analytical thinking. Excellent communication skills for cross-functional collaboration. Ability to work independently and manage multiple projects simultaneously. How we'll support you Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs
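To ground the RAG-pipeline responsibility above, here is a deliberately minimal retrieval sketch built on FAISS; the random vectors stand in for embeddings a real model would produce, so every name and dimension here is an assumption for illustration.

```python
# Minimal sketch of the retrieval step in a RAG pipeline using FAISS
# (pip install faiss-cpu numpy). Random vectors stand in for real
# document embeddings produced by an embedding model.
import numpy as np
import faiss

dim = 384  # common sentence-embedding dimension (assumption)
docs = ["policy_doc", "trade_finance_faq", "onboarding_guide", "risk_memo"]

rng = np.random.default_rng(0)
doc_embeddings = rng.random((len(docs), dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)   # exact L2 search, fine at small scale
index.add(doc_embeddings)

query_embedding = rng.random((1, dim), dtype=np.float32)
distances, indices = index.search(query_embedding, 2)  # top-2 neighbours

# The retrieved documents would then be packed into the LLM prompt.
for rank, idx in enumerate(indices[0]):
    print(rank + 1, docs[idx], float(distances[0][rank]))
```

Hybrid search, as mentioned in the posting, typically combines this dense retrieval with a keyword (BM25) ranking and merges the two result lists.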
Posted 1 month ago
5.0 - 8.0 years
2 - 6 Lacs
Mumbai
Work from Office
Job Information: Job Opening ID: ZR_1963_JOB | Date Opened: 17/05/2023 | Industry: Technology | Work Experience: 5-8 years | Job Title: Neo4j GraphDB Developer | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400001 | Number of Positions: 5. Graph Data Engineer required for a complex Supply Chain project. Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and a graph language (Cypher), exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark, SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and PowerBI. The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, graph data design. Relational databases experience; hands-on SQL experience is required. Desirable (optional) skills: Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential. Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
Posted 1 month ago