
75 Semantic Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

9.0 - 14.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Execute end-to-end Data Science projects including data collection, preprocessing, feature engineering, modelling, evaluation, and deployment. Design and implement advanced Machine Learning algorithms (classification, regression, clustering, ensemble methods) and Natural Language Processing algorithms (Knowledge Graphs, Topic Modelling, Feature Extraction, Sentiment Analysis, BERT, etc.). Develop and deploy Generative AI and LLM-based solutions using platforms like OpenAI, Hugging Face, and Llama. Apply Agentic AI frameworks such as LangChain, LangGraph, CrewAI, or Microsoft Semantic Kernel to build intelligent applications. Design and deploy scalable machine learning solutions using cloud architectures, with hands-on experience in at least one major platform (Azure, AWS, GCP, or IBM Cloud). Leverage Databricks, ML platforms, managed databases, web hosting services, and document AI tools for end-to-end solution development. Collaborate with business stakeholders to define problem statements, deliver insights, and drive impact. Maintain reproducibility and version control using Git and GitHub. Effectively communicate technical concepts and project outcomes to both technical and non-technical stakeholders.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 7-9 years of hands-on experience in Data Science, Machine Learning, Deep Learning, and NLP in a production environment. 2+ years of applied experience in Generative AI and LLMs (e.g., OpenAI, Llama). Proficiency in agentic AI development using frameworks like LangChain, LangGraph, or similar. Strong programming skills in Python and experience with ML/DL libraries. Experience deploying models via REST APIs or web applications. Proficiency in SQL for data extraction, transformation, and analysis. Experience working with large datasets, feature engineering, and data preprocessing pipelines. Solid understanding of model evaluation, cross-validation, and performance metrics. Experience in MLOps, including model testing, deployment pipelines, governance frameworks, and continuous monitoring for reliable and compliant machine learning operations. Experience with cloud ML pipelines and services (proficiency in at least one of Azure, AWS, GCP, or IBM Cloud). Strong interpersonal and client communication skills.
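For illustration only, a minimal sketch of the model evaluation and cross-validation workflow this listing references, assuming scikit-learn; the synthetic data, estimator, and metric are arbitrary choices for the example, not part of the posting.

# Preprocessing + ensemble classifier scored with stratified k-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

pipeline = make_pipeline(
    StandardScaler(),                             # feature preprocessing step
    GradientBoostingClassifier(random_state=42),  # ensemble classifier
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="f1")
print(f"F1 per fold: {scores.round(3)}  mean={scores.mean():.3f}")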

Posted 1 day ago

Apply

6.0 - 10.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Ready to shape the future of healthcare AI? Apply now and be a part of building the clinical agentic infrastructure for tomorrow. The OHAI Agent Engineering Team is pioneering the next generation of intelligent agent frameworks, delivering transformative solutions to elevate healthcare services and drive exceptional customer experiences. Our cutting-edge platform seamlessly integrates advanced automation, clinical intelligence, and user-centric design, empowering healthcare providers to deliver proactive, personalized care and setting a new standard for operational excellence and patient delight. Join us as we shape the future of healthcare with innovation, passion, and impact.

We are seeking a Senior Software Principal Engineer with deep expertise in healthcare technologies, Agentic AI frameworks, and modern data/AI infrastructure to join our Clinical AI team. This role is pivotal in shaping our next-generation Clinical Agentic AI Platform, enabling dynamic, context-aware care pathways for healthcare providers and patients. The ideal candidate will be an accomplished hands-on architect with a strong background in Python, exposure to Java, and demonstrated experience with Agentic AI, LangGraph, vector databases, OpenSearch, and LLM-based systems. In addition to strong system design skills, you must be capable of mentoring senior engineers, driving technical strategy, and ensuring delivery of scalable, secure, and high-performing AI services.

Key Responsibilities:
- Architect and lead the development of clinical agent-based AI systems using LangGraph or similar frameworks.
- Collaborate with product and clinical informatics teams to design AI-driven care pathways and decision support systems.
- Define and evolve the technical roadmap, aligning with compliance, performance, and integration standards.
- Lead the implementation of LangGraph-based agents, integrating them with vector databases, OpenSearch, and other retrieval-augmented generation (RAG) pipelines.
- Design and guide data pipelines for LLM-based applications, ensuring real-time contextual understanding.
- Mentor and coach senior and mid-level engineers; foster a culture of technical excellence and innovation.
- Conduct design reviews, enforce code quality standards, and ensure scalable, maintainable solutions.
- Collaborate with platform teams on observability, deployment automation, and runtime performance optimization.
- Stay current with evolving AI trends, particularly in the LLMOps/Agentic AI space, and evaluate emerging tools and frameworks.

Required Qualifications:
- 10+ years of hands-on software development experience, with at least 3 years in an architect-level role.
- Proven experience building and scaling agent-based AI applications, ideally in clinical or healthcare contexts.
- Strong proficiency in Python (primary language); working knowledge of Java is a plus.
- Solid experience with LangGraph or similar agentic orchestration frameworks.
- Deep understanding of LLMs, prompt engineering, retrieval-augmented generation (RAG), and semantic search.
- Hands-on experience with vector databases (e.g., FAISS, Weaviate, Pinecone) and OpenSearch/Elasticsearch.
- Knowledge of healthcare standards (e.g., FHIR, HL7) and regulatory considerations (HIPAA, HITRUST) is highly desirable.
- Track record of designing large-scale, distributed systems with high availability and performance.
- Experience guiding cross-functional engineering teams through architecture, design, and code reviews.

Preferred Qualifications:
- Experience building clinical decision support systems, EHR integrations, or AI-based triage tools.
- Familiarity with LangChain, Haystack, FastAPI, gRPC, and OCI or other public cloud environments.
- Strong communication skills and the ability to translate complex technical topics for diverse stakeholders.
- Background in mentoring, growing engineering talent, and establishing design best practices.

Why Join Us? Join a mission-driven team transforming healthcare through agentic AI and intelligent automation. Work on cutting-edge technology that directly impacts clinical workflows and patient outcomes. Collaborate with thought leaders in healthcare, AI, and software architecture. Competitive compensation, equity, and comprehensive benefits.
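To make the vector-database and RAG requirements above concrete, here is a hedged sketch of the retrieval step only, using FAISS (one of the stores the posting names). The toy embedding function and sample documents are assumptions for illustration; a real system would use an embedding model and pass the retrieved chunks to an LLM.

import numpy as np
import faiss  # pip install faiss-cpu

DIM = 64

def embed(texts):
    # Placeholder embedder: deterministic pseudo-random vectors keyed on the text hash.
    rngs = [np.random.default_rng(abs(hash(t)) % (2**32)) for t in texts]
    return np.stack([r.standard_normal(DIM).astype("float32") for r in rngs])

documents = [
    "Discharge summary template for cardiology patients.",
    "Prior-authorization workflow for imaging orders.",
    "Care-pathway checklist for post-operative follow-up.",
]

index = faiss.IndexFlatL2(DIM)   # exact L2 search; swap for an ANN index at scale
index.add(embed(documents))

query = "What steps follow a surgical discharge?"
_, neighbors = index.search(embed([query]), 2)
context = [documents[i] for i in neighbors[0]]
print(context)  # these chunks would be concatenated into the LLM prompt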

Posted 1 day ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Pune

Work from Office

Role Title: Senior Frontend Engineer
Reports To: Engineering Manager / Frontend Architect
Location: Pune - Kharadi
Experience Required: 8+ Years

Role Summary: As a Senior Frontend Engineer, you will lead the development of high-performance, scalable, and accessible web components using modern JavaScript, TypeScript, and web standards. You'll architect and maintain design systems, implement robust testing strategies, and ensure seamless user experience through reusable UI components and frameworks. Your work will be central to building reliable, maintainable, and user-friendly interfaces.

Key Responsibilities: Design, develop, and maintain reusable web components using modern web standards. Architect and evolve scalable design systems using CSS custom properties, theming, and component APIs. Implement and maintain CI-driven testing using Jest, Storybook, and visual regression tools. Optimize frontend builds through bundling, tree shaking, semantic versioning, and monorepo strategies. Ensure accessibility compliance (WCAG, a11y) and integrate accessibility testing into the workflow. Collaborate with cross-functional teams (designers, backend engineers, QA) to deliver seamless features. Monitor application performance and implement improvements proactively.

Required Qualifications & Skills: Proficiency in TypeScript, JavaScript, and modern ES modules. Expertise in web component standards, lifecycle methods, reactive properties, and component APIs. Strong grasp of Vite, npm, monorepo architecture, and semantic versioning. Deep understanding of CSS custom properties, slots, and theming strategies. Hands-on experience in unit testing, component testing, and interaction and visual regression testing. Familiarity with Storybook, Jest, testing automation, and accessibility audits.

Soft Skills: Strong problem-solving and architectural thinking. Attention to detail and a commitment to code quality. Excellent communication and documentation skills. Collaborative mindset with experience working in agile teams. Initiative-driven and self-organized in a fast-paced environment.

Preferred Qualifications: Experience building and scaling design systems. Contributions to open-source frontend tools or libraries. Knowledge of micro-frontend or modular architecture. Familiarity with CI/CD pipelines and DevOps practices.

Key Relationships: Internal: Product Managers, UX/UI Designers, Backend Developers, QA/Test Automation Engineers. External: Design System Contributors, Accessibility Auditors, Vendors/Third-party Tool Providers.

Role Dimensions: Individual contributor with potential to lead components or junior developers. Influencer in technical direction and design system evolution. Contributor to frontend quality standards and testing best practices.

Success Measures (KPIs): Timely and high-quality delivery of reusable components. Coverage and performance of automated testing suites. Accessibility compliance across all UI features. Reduction in UI defects and production incidents. Contribution to documentation and internal knowledge sharing.

Competency Framework Alignment:
- Technical Expertise: Deep understanding of frontend and testing tools
- Code Quality & Testing: Drives high coverage and automation culture
- Communication: Clearly articulates decisions and issues
- Problem Solving: Resolves complex UI and architecture challenges
- Collaboration: Works well in cross-functional teams

Posted 4 days ago

Apply

9.0 - 14.0 years

17 - 22 Lacs

Gurugram

Work from Office

Execute end-to-end Data Science projects including data collection, preprocessing, feature engineering, modelling, evaluation, and deployment. Design and implement advanced Machine Learning algorithms (classification, regression, clustering, ensemble methods) and Natural Language Processing algorithms (Knowledge Graphs, Topic Modelling, Feature Extraction, Sentiment Analysis, BERT, etc.). Develop and deploy Generative AI and LLM-based solutions using platforms like OpenAI, Hugging Face, and Llama. Apply Agentic AI frameworks such as LangChain, LangGraph, CrewAI, or Microsoft Semantic Kernel to build intelligent applications. Design and deploy scalable machine learning solutions using cloud architectures, with hands-on experience in at least one major platform (Azure, AWS, GCP, or IBM Cloud). Leverage Databricks, ML platforms, managed databases, web hosting services, and document AI tools for end-to-end solution development. Collaborate with business stakeholders to define problem statements, deliver insights, and drive impact. Maintain reproducibility and version control using Git and GitHub. Effectively communicate technical concepts and project outcomes to both technical and non-technical stakeholders.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 7-9 years of hands-on experience in Data Science, Machine Learning, Deep Learning, and NLP in a production environment. 2+ years of applied experience in Generative AI and LLMs (e.g., OpenAI, Llama). Proficiency in agentic AI development using frameworks like LangChain, LangGraph, or similar. Strong programming skills in Python and experience with ML/DL libraries. Experience deploying models via REST APIs or web applications. Proficiency in SQL for data extraction, transformation, and analysis. Experience working with large datasets, feature engineering, and data preprocessing pipelines. Solid understanding of model evaluation, cross-validation, and performance metrics. Experience in MLOps, including model testing, deployment pipelines, governance frameworks, and continuous monitoring for reliable and compliant machine learning operations. Experience with cloud ML pipelines and services (proficiency in at least one of Azure, AWS, GCP, or IBM Cloud). Strong interpersonal and client communication skills.
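As a hedged illustration of the "deploying models via REST APIs" requirement in this listing, here is a minimal FastAPI sketch; the model artifact path, feature shape, and file name are assumptions, not the employer's actual setup.

import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-service")

with open("model.pkl", "rb") as fh:   # hypothetical artifact produced by training
    model = pickle.load(fh)

class Features(BaseModel):
    values: list[float]               # one flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn service:app --port 8000   (assuming this file is saved as service.py)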

Posted 6 days ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Your Role: As a React JS Developer, you will be responsible for designing and developing high-performance web applications from scratch. You will work closely with product managers, designers, and backend developers to create seamless and scalable user interfaces that enhance user experience and meet business objectives.

In This Role You Will Play a Key Role In: Designing and developing modern web applications using React JS, Redux, and ES6 JavaScript. Building a strong UI foundation with reusable components and scalable architecture. Evaluating and understanding business and functional requirements to deliver effective solutions. Applying best practices in web design, architecture, and performance optimization. Leading small UI teams and mentoring junior developers. Collaborating in an Agile/Scrum environment using tools like Git, Bitbucket, JIRA, and Confluence.

Top 5 Technical Skills:
- React JS: Expertise in building dynamic and responsive user interfaces.
- TypeScript & ES6: Strong understanding of modern JavaScript and type-safe development.
- Jest: Experience in writing and maintaining unit tests for frontend components.
- HTML5/CSS3: Proficiency in creating semantic, accessible, and responsive layouts.
- Redux: Skilled in managing complex application state efficiently.

What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Posted 6 days ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Role: Senior Data Scientist
Location: Hyderabad
Notice Period: Immediate to 15 days

About the Role: We are looking for a Senior Data Scientist to design and build intelligent chatbot systems that can interpret natural language queries and interact with structured and unstructured data using Text-to-SQL, RAG (Retrieval-Augmented Generation), and agentic reasoning. You'll lead the development of multi-modal, context-aware, and autonomous agents that use large language models to query databases, invoke tools, retrieve knowledge, and plan actions, all within enterprise-grade environments.

Key Responsibilities: Build Text-to-SQL agents that convert natural language into accurate, executable SQL across various schemas. Design RAG pipelines to enrich LLM outputs using internal documents, tables, and API-based retrievers. Develop agentic frameworks that enable tool use, database interaction, planning, and task decomposition using LLMs. Tune prompts and build evaluation systems for accuracy, latency, grounding, and correctness. Work with product and data teams to build domain-specific few-shot examples, tool definitions, and vectorized schema representations. Optimize inference latency using frameworks like LangChain, Semantic Kernel, or LlamaIndex.

Required Skills: 4+ years in Data Science or NLP, with 2+ years in LLM-based application development. Experience with Text-to-SQL modeling (Spider, BIRD, etc.) and fine-tuning using synthetic or annotated query datasets. Strong understanding of LLM orchestration tools (LangChain, Semantic Kernel, or custom agents). Hands-on with RAG architectures: embedding generation, retriever tuning, chunking strategies, and reranking. Proficiency in Python, SQL, and vector search tools (FAISS, Pinecone, Weaviate, OpenSearch). Familiarity with prompt tuning, structured output parsing, and function calling (OpenAI, Mistral, Anthropic, etc.).

Leadership & Collaboration: Lead experimentation and architectural design for agent-based workflows (SQL agents, document agents, hybrid search agents). Mentor junior team members in LLM prompt engineering, schema design, and text-to-structure pipelines. Collaborate with ML engineers, product managers, and data owners to implement scalable conversational AI interfaces.

Nice to Have: Experience with vLLM, LoRA fine-tuning, or quantized model deployment. Prior work in multi-agent planning, self-healing agents, or semantic query generation. Familiarity with tools like SQLGlot, DSPy, DeepEval, and OpenAI function tools. Knowledge of structured RAG (combining structured and unstructured sources) and hybrid ranking.
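For illustration, a minimal sketch of the Text-to-SQL loop this role describes: build a schema-grounded prompt, obtain SQL from an LLM (stubbed here), and validate the statement with EXPLAIN before running it. The schema, question, and stub response are assumptions for the example, not the employer's system.

import sqlite3

SCHEMA = """
CREATE TABLE orders(id INTEGER, customer TEXT, amount REAL, created_at TEXT);
"""

def build_prompt(question: str) -> str:
    return (
        "You translate questions into SQLite SQL.\n"
        f"Schema:\n{SCHEMA}\n"
        "Return only the SQL statement.\n"
        f"Question: {question}\nSQL:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM client (OpenAI, Anthropic, etc.).
    return "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer;"

def is_valid(sql: str, conn: sqlite3.Connection) -> bool:
    try:
        conn.execute("EXPLAIN " + sql)   # parses and plans without executing the query
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
sql = call_llm(build_prompt("Total spend per customer?"))
if is_valid(sql, conn):
    print(conn.execute(sql).fetchall())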

Posted 1 week ago

Apply

12.0 - 17.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Execute end-to-end Data Science projects including data collection, preprocessing, feature engineering, modelling, evaluation, and deployment. Design and implement advanced Machine Learning algorithms (classification, regression, clustering, ensemble methods) and Natural Language Processing algorithms (Knowledge Graphs, Topic Modelling, Feature Extraction, Sentiment Analysis, BERT, etc.). Develop and deploy Generative AI and LLM-based solutions using platforms like OpenAI, Hugging Face, and Llama. Apply Agentic AI frameworks such as LangChain, LangGraph, CrewAI, or Microsoft Semantic Kernel to build intelligent applications. Design and deploy scalable machine learning solutions using cloud architectures, with hands-on experience in at least one major platform (Azure, AWS, GCP, or IBM Cloud). Leverage Databricks, ML platforms, managed databases, web hosting services, and document AI tools for end-to-end solution development. Collaborate with business stakeholders to define problem statements, deliver insights, and drive impact. Maintain reproducibility and version control using Git and GitHub. Effectively communicate technical concepts and project outcomes to both technical and non-technical stakeholders.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 10-12 years of hands-on experience in Data Science, Machine Learning, Deep Learning, and NLP in a production environment. 2+ years of applied experience in Generative AI and LLMs (e.g., OpenAI, Llama). Proficiency in agentic AI development using frameworks like LangChain, LangGraph, or similar. Strong programming skills in Python and experience with ML/DL libraries. Experience deploying models via REST APIs or web applications. Proficiency in SQL for data extraction, transformation, and analysis. Experience working with large datasets, feature engineering, and data preprocessing pipelines. Solid understanding of model evaluation, cross-validation, and performance metrics. Experience in MLOps, including model testing, deployment pipelines, governance frameworks, and continuous monitoring for reliable and compliant machine learning operations. Experience with cloud ML pipelines and services (proficiency in at least one of Azure, AWS, GCP, or IBM Cloud). Strong interpersonal and client communication skills.

Posted 1 week ago

Apply

9.0 - 14.0 years

17 - 22 Lacs

Bengaluru

Work from Office

Execute end-to-end Data Science projects including data collection, preprocessing, feature engineering, modelling, evaluation, and deployment. Design and implement advanced Machine Learning algorithms (classification, regression, clustering, ensemble methods) and Natural Language Processing algorithms (Knowledge Graphs, Topic Modelling, Feature Extraction, Sentiment Analysis, BERT, etc.). Develop and deploy Generative AI and LLM-based solutions using platforms like OpenAI, Hugging Face, and Llama. Apply Agentic AI frameworks such as LangChain, LangGraph, CrewAI, or Microsoft Semantic Kernel to build intelligent applications. Design and deploy scalable machine learning solutions using cloud architectures, with hands-on experience in at least one major platform (Azure, AWS, GCP, or IBM Cloud). Leverage Databricks, ML platforms, managed databases, web hosting services, and document AI tools for end-to-end solution development. Collaborate with business stakeholders to define problem statements, deliver insights, and drive impact. Maintain reproducibility and version control using Git and GitHub. Effectively communicate technical concepts and project outcomes to both technical and non-technical stakeholders.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 7-9 years of hands-on experience in Data Science, Machine Learning, Deep Learning, and NLP in a production environment. 2+ years of applied experience in Generative AI and LLMs (e.g., OpenAI, Llama). Proficiency in agentic AI development using frameworks like LangChain, LangGraph, or similar. Strong programming skills in Python and experience with ML/DL libraries. Experience deploying models via REST APIs or web applications. Proficiency in SQL for data extraction, transformation, and analysis. Experience working with large datasets, feature engineering, and data preprocessing pipelines. Solid understanding of model evaluation, cross-validation, and performance metrics. Experience in MLOps, including model testing, deployment pipelines, governance frameworks, and continuous monitoring for reliable and compliant machine learning operations. Experience with cloud ML pipelines and services (proficiency in at least one of Azure, AWS, GCP, or IBM Cloud). Strong interpersonal and client communication skills.

Posted 1 week ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Experience: Overall IT experience - 7+ years; Data Modeling experience - 3+ years; Data Vault Modeling experience - 2+ years.

Key Responsibilities:
- Drive discussions with client deal teams to understand business requirements and how the data model fits into implementation and solutioning.
- Develop the solution blueprint, scoping, and estimation in delivery projects and solutioning.
- Drive discovery activities and design workshops with clients, and lead strategic road-mapping and operating model design discussions.
- Design and develop Data Vault 2.0-compliant models, including Hubs, Links, and Satellites.
- Design and develop the Raw Data Vault and Business Data Vault.
- Translate business requirements into conceptual, logical, and physical data models.
- Work with source system analysts to understand data structures and lineage.
- Ensure conformance to data modeling standards and best practices.
- Collaborate with ETL/ELT developers to implement data models in a modern data warehouse environment (e.g., Snowflake, Databricks, Redshift, BigQuery).
- Document models, data definitions, and metadata.

Technical Experience:
- 7+ years overall IT experience, 3+ years in Data Modeling, and 2+ years in Data Vault Modeling.
- Design and development of the Raw Data Vault and Business Data Vault.
- Strong understanding of the Data Vault 2.0 methodology, including business keys, record tracking, and historical tracking.
- Data modeling experience in Dimensional Modeling/3NF modeling.
- Hands-on experience with data modeling tools (e.g., ER/Studio, ERwin, or similar).
- Solid understanding of ETL/ELT processes, data integration, and warehousing concepts.
- Experience with modern cloud data platforms (e.g., Snowflake, Databricks, Azure Synapse, AWS Redshift, or Google BigQuery).
- Excellent SQL skills.

Good to Have Skills:
- Any one of these add-on skills: Graph Database Modeling, RDF, Document DB Modeling, Ontology, Semantic Data Modeling.
- Hands-on experience with a Data Vault automation tool (e.g., VaultSpeed, WhereScape, biGENIUS-X, dbt, or similar).
- Understanding of the Data Analytics on Cloud landscape and Data Lake design knowledge.
- Cloud Data Engineering, Cloud Data Integration.

Professional Experience:
- Strong requirement analysis and technical solutioning skills in Data and Analytics.
- Excellent writing, communication, and presentation skills.
- Eagerness to learn new skills and develop on an ongoing basis.
- Good client-facing and interpersonal skills.

Educational Qualification: B.E. or B.Tech is a must; 15 years full-time education.
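As a hedged sketch of the Data Vault 2.0 pattern this posting names (Hubs keyed by hashed business keys, Satellites holding descriptive attributes and load metadata), the example below uses SQLite for self-containment; the table and column names follow common Data Vault conventions but are assumptions, not the project's actual model.

import hashlib
import sqlite3
from datetime import datetime, timezone

def hash_key(business_key: str) -> str:
    # Standardize then hash the business key to form the hub's surrogate hash key.
    return hashlib.md5(business_key.upper().strip().encode()).hexdigest()

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (
    hub_customer_hk TEXT PRIMARY KEY,   -- hash of the business key
    customer_bk     TEXT NOT NULL,      -- business key from the source system
    load_dts        TEXT NOT NULL,
    record_source   TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    hub_customer_hk TEXT NOT NULL,
    load_dts        TEXT NOT NULL,
    name            TEXT,
    city            TEXT,
    record_source   TEXT NOT NULL,
    PRIMARY KEY (hub_customer_hk, load_dts)
);
""")

now = datetime.now(timezone.utc).isoformat()
hk = hash_key("CUST-001")
conn.execute("INSERT OR IGNORE INTO hub_customer VALUES (?,?,?,?)",
             (hk, "CUST-001", now, "crm"))
conn.execute("INSERT INTO sat_customer_details VALUES (?,?,?,?,?)",
             (hk, now, "Asha Rao", "Bengaluru", "crm"))
print(conn.execute("SELECT * FROM hub_customer").fetchall())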

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 11

The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.

The Team: The Knowledge Engineering team within data strategy and governance helps to lead fundamental organizational and operational change, driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights, and analytics products that are powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor, creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities: Develop, implement, and continue to enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery. Design and prototype data/software engineering solutions that scale the construction, maintenance, and consumption of semantic artefacts and the interconnected data layer for various application contexts. Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met. Influence the strategic semantic vision, roadmap, and next-generation architecture. Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains. Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment. Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools.

Qualifications: Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives, to effectively persuade and influence. 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL), and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred. 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environments; skilled in all phases from conceptualization to optimization. Programming skills in a mainstream programming language (Python, Java, JavaScript), with experience in utilizing cloud services (AWS, Google Cloud, Azure), is a great bonus. Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management).

The S&P Global Enterprise Data Organization is a unified, cross-divisional team focused on transforming S&P Global's data assets. We streamline processes and enhance collaboration by integrating diverse datasets with advanced technologies, ensuring efficient data governance and management.

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today.

What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to and your request will be forwarded to the appropriate person. US candidates only: the EEO is the Law poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision.
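To ground the RDF/SPARQL and RDFLib qualifications named in this listing, here is a hedged sketch of a tiny RDF graph built and queried with RDFLib; the namespace, classes, and instance are illustrative assumptions, not S&P Global's actual ontology.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/commodity#")   # hypothetical vocabulary for the example

g = Graph()
g.bind("ex", EX)

# A tiny taxonomy: Brent is a CrudeOilBenchmark, which is a kind of Commodity.
g.add((EX.Commodity, RDF.type, RDFS.Class))
g.add((EX.CrudeOilBenchmark, RDFS.subClassOf, EX.Commodity))
g.add((EX.Brent, RDF.type, EX.CrudeOilBenchmark))
g.add((EX.Brent, RDFS.label, Literal("Brent Crude")))

# SPARQL over the graph: find every labelled instance of a Commodity subclass.
results = g.query("""
    PREFIX ex: <http://example.org/commodity#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?item ?label WHERE {
        ?cls rdfs:subClassOf ex:Commodity .
        ?item a ?cls ;
              rdfs:label ?label .
    }
""")
for item, label in results:
    print(item, label)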

Posted 1 week ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

You'll create impact by: PhD or master's degree in Computer Science from a recognized institution, with at least 2 years of hands-on experience in designing and implementing algorithms and solving complex problems using Artificial Intelligence. Proficiency in one or more of the following technologies is a mandatory requirement: demonstrated expertise in training deep convolutional and recurrent neural networks with industry-standard frameworks like TensorFlow, Caffe, and PyTorch; demonstrated expertise in using OpenCV and OpenGL for developing computer vision and graphical rendering solutions. Conduct research and prototype development in the areas of object detection, visual tracking, semantic segmentation, human action recognition, 3D scene reconstruction, and Simultaneous Localization and Mapping (SLAM). Hands-on experience working with multimodal data, including audio, video, images, text, and telemetry. Experience in image and video processing, analytics, and computer vision, with a strong understanding of available algorithms and model architectures and the ability to implement them as needed, particularly deep learning algorithms. Strong command of the foundation-model landscape and a proven understanding of their advantages and limitations. Experience in adapting algorithms and architectures for compute-constrained environments and embedded systems is a plus. Familiarity with SaaS fundamentals is considered an advantage. Experience in solution design, architecture, software packaging using Docker and Kubernetes, and deployment on cloud platforms is a plus. Eager to collaborate with multi-functional teams from conceptualization and prototyping to communicating solutions and recommendations to collaborators, while actively shaping the technology roadmap and strategic direction of the portfolio. Keep abreast of the latest advancements in artificial intelligence and proactively adopt emerging, pioneering technologies. Highly upbeat and committed to going the extra mile to achieve goals, while actively promoting AI innovation both within and beyond Siemens.

Join Siemens: We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Protecting the environment, conserving our natural resources, and encouraging the health and performance of our people as well as safeguarding their working conditions are core to our social and business dedication at Siemens.
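For illustration of the deep-learning training work this posting references (semantic segmentation with an industry-standard framework), here is a minimal PyTorch sketch; the tiny architecture and the random tensors standing in for a labelled dataset are assumptions for the example only.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),      # per-pixel class logits
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 64, 64)              # fake batch of RGB images
masks = torch.randint(0, 3, (4, 64, 64))        # fake per-pixel class labels

for step in range(5):                           # a few demonstration steps
    logits = model(images)
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.3f}")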

Posted 1 week ago

Apply

4.0 - 7.0 years

9 - 13 Lacs

Pune

Work from Office

Job Purpose: We are seeking a dynamic AI Engineer to join our pioneering Agentic AI team. The ideal candidate will possess a strong foundation in model development, UI development, and development of APIs. You will be involved in every stage of the end-to-end development life cycle.

Duties and Responsibilities

Deliveries with respect to the Agentic AI Platform: Build and maintain autonomous software agents using state-of-the-art LLM frameworks. Collaborate with product owners and domain experts to build reusable components for business process automation. Develop core infrastructure and reusable components to support the deployment of agent-based AI systems. Work on agent orchestration, prompt engineering, and LLM-powered integrations. Implement scalable solutions integrated with CRM systems and enterprise data platforms. Contribute to the design of modular, extensible, enterprise-grade architectures. Fine-tune and evaluate AI agents for speed, accuracy, performance, and maintainability across business units. Contribute to CI/CD automation and maintain operational stability of agent services.

Generative AI & Model Optimization: Fine-tune LLMs/SLMs with proprietary NBFC data. Perform distillation and quantization of LLMs for edge deployment. Evaluate and run LLM/SLM models on local/edge server machines.

Self-Learning Frameworks: Build self-learning systems that adapt without full retraining (e.g., learn new rejection patterns from calls). Implement lightweight local models to enable real-time learning on the edge.

Key Decisions / Dimensions

Platform Design & Delivery, Model Selection, Customization (if any) & Testing: Choosing the right model for various agentic and autonomous actions. Selecting the appropriate model so that agents can complete autonomous tasks efficiently. Defining reusable components in the platform. Delivery using a configuration approach should be the first preference; if that does not work, go for customization. Defining configuration parameters and incorporating them into the platform design. Testing, including end-to-end testing of project deliverables. Load balancing between different models. Always have a switch-on/switch-off feature. All services must be backed up on primary/HA and DR servers.

Prompt Engineering: Prompt design and development: crafting prompts that guide AI systems to produce desired outputs for various applications, such as text generation, translation, question answering, and creative writing. Testing and evaluation: analysing the effectiveness of prompts and refining them based on results to ensure accurate and relevant responses. Bias mitigation: designing prompts that minimize bias and ensure fair and equitable outcomes from AI systems.

Major Challenges: Securing support from other platform owners. Agents must learn from failed interactions. Building agents that don't just answer but negotiate with human-like reasoning. Running large AI models in low-latency, low-bandwidth environments without cloud dependency. Acquiring end-to-end domain knowledge. Managing data and information security of the agentic application.

Required Qualifications and Experience

Educational Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience: 4-7 years of experience in AI/ML, with exposure to Python, Node.js, JavaScript, React, and Java. Good understanding of Google GCP/Microsoft Foundry plus MS Copilot Studio, Agentforce, or equivalent. Strong programming skills in languages such as Python, Node.js, JavaScript, React, and Java. Familiarity with LangChain, Semantic Kernel, CrewAI, or LangGraph. Experience building with or integrating LLMs for task automation, reasoning, or autonomous workflows. Understanding of Microsoft Copilot Studio, BigQuery, Power Apps, and Power BI. Familiarity with the Agent Development Kit (ADK) and MCP. Strong understanding of prompt engineering, tool calling, and agent orchestration.
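As a hedged, framework-free sketch of the tool-calling and agent-orchestration pattern this posting emphasizes, the example below shows a tool registry and dispatch step; in practice the dispatch input would come from an LLM's structured function-call output, which is stubbed here, and the CRM lookup is a hypothetical placeholder.

from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    # Register a callable so an agent can invoke it by name.
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("check_loan_status")
def check_loan_status(customer_id: str) -> str:
    return f"Loan {customer_id}: EMI paid, next due on the 5th."   # stubbed CRM lookup

def agent_step(llm_decision: dict) -> str:
    # Dispatch a (stubbed) LLM function-call decision to the registered tool.
    fn = TOOLS[llm_decision["tool"]]
    return fn(**llm_decision["arguments"])

print(agent_step({"tool": "check_loan_status", "arguments": {"customer_id": "C-42"}}))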

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Ahmedabad

Remote

What We’re Looking For: 3–4 years of experience in AI/ML, NLP, or Intelligent Automation Hands-on Python + Agentic AI frameworks (LangChain / Semantic Kernel / AutoGen / CrewAI etc.) Knowledge of LLM APIs, Vector Databases, RAG architectures

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Mysuru, Pune, Bengaluru

Hybrid

Required Skills & Experience: Mandatory: Hands-on experience with MarkLogic. Strong expertise in Apache NiFi for data integration and orchestration. Cloud experience, preferably Azure (Data Factory, Data Lake, Functions, etc.). Proven experience in REST API design and development. Familiarity with Git Bash, Gradle, and any modern IDE (IntelliJ, Eclipse, VS Code, etc.). Prior experience working as part of a DevOps team with CI/CD practices. Excellent problem-solving and communication skills.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Project Role: Integration Architect
Project Role Description: Architect an end-to-end integration solution. Drive client discussions to define the integration requirements and translate the business requirements to the technology solution. Activities include mapping business processes to support applications, defining the data entities, selecting integration technology components and patterns, and designing the integration architecture.
Must-have skills: AI Agents & Workflow Integration
Minimum 18 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: We are seeking a C-suite-facing Industrial AI & Agentic Systems Lead to architect, govern, and scale AI solutions, including multi-agent, LLM-driven, tool-using autonomous systems, across manufacturing, supply chain, and plant operations. You will define the strategy-to-scale journey from high-value use case selection (OEE, yield, PdM, energy, scheduling, autonomous quality) to edge-to-cloud architectures, MLOps/LLMOps, Responsible & Safe AI / Agentic AI, and IT/OT convergence, delivering hard business outcomes.

Roles & Responsibilities:

1. Strategy & C-Suite Advisory:
- Define an Industrial AI + Agentic AI strategy and roadmap tied to OEE, yield, cost, throughput, energy, and sustainability KPIs, with ROI/payback models.
- Shape operating models (central CoE vs. federated), governance, funding, and product-platform scaling approaches.
- Educate CxO stakeholders on where Agentic AI adds leverage (closed-loop optimization, autonomous workflows, human-in-the-loop decisioning).

2. Architecture & Platforms:
- Design edge-plant-cloud reference architectures for ML + Agentic AI: data ingestion (OPC UA, MQTT, Kafka), vector DB/RAG layers, model registries, policy engines, observability, and safe tool execution.
- Define LLMOps patterns for prompt/version management, agent planning/execution traces, tool catalogs, guardrails, and evaluation harnesses.

3. Agentic AI (Dedicated):
- Architect multi-agent systems (planner-solver-critic patterns) for: SOP generation and validation, root-cause analysis and corrective action recommendation, autonomous scheduling and rescheduling, MRO/work order intelligence, and control room copilots orchestrating OT/IT tools.
- Design tooling and action interfaces (function calling, tools registry) to safely let agents interact with MES/ERP/CMMS/SCADA/DCS, simulations (DES, digital twins), and optimization solvers (cuOpt, Gurobi, CP-SAT).
- Establish policy, safety, and constraints frameworks (role-based agent scopes, allow/deny tool lists, human-in-the-loop gates, audit trails).
- Implement RAG + knowledge graph + vector DB stacks for engineering/service manuals, logs, SOPs, and quality records to power grounded agent reasoning.
- Set up evaluation and red-teaming for agent behaviors: hallucination tests, unsafe action prevention, KPI-driven performance scoring.

4. Use Cases & Solutions (Manufacturing Focus):
- Computer Vision Autonomous Quality (TAO, Triton, TensorRT) with agentic triage and escalation to quality engineers.
- Predictive/Prescriptive Maintenance with agents orchestrating data retrieval, work order creation, and spare part planning.
- Process & Yield Optimization where agents run DOE, query historians, simulate scenarios (digital twins), and recommend set-point changes.
- Scheduling & Throughput Optimization with planner-optimizer agents calling OR/RL solvers.
- GenAI/LLM for Manufacturing: copilots and autonomous agents for SOPs, RCA documentation, and PLC/SCADA code refactoring (with strict guardrails).

5. MLOps, LLMOps, Edge AI & Runtime Ops:
- Stand up MLOps + LLMOps: CI/CD for models and prompts, drift detection, lineage, experiment and agent run tracking, safe rollback.
- Architect Edge AI on NVIDIA Jetson/IGX, x86 GPU, and Intel iGPU/OpenVINO, ensuring deterministic latency and TSN/real-time where needed.
- Implement observability for agents (traces, actions, rewards/scores, SLA adherence).

6. Responsible & Safe AI, Compliance & Security:
- Codify Responsible AI and Agentic Safety policies: transparency, explainability (XAI), auditability, IP protection, privacy, toxicity and jailbreak prevention.
- Align with regulations (e.g., GxP, FDA 21 CFR Part 11, ISO 27001, IEC 62443, ISO 26262, AS9100) for industrial domains.

7. Delivery, GTM & Thought Leadership:
- Serve as chief architect and design authority on large AI + Agentic programs; mentor architects, data scientists/engineers, and MLOps/LLMOps teams.
- Lead pre-sales, solution shaping, executive storytelling, and ecosystem partnership building (NVIDIA, hyperscalers, MES/SCADA, optimization, cybersecurity).

Professional & Technical Skills

Must-have skills:
- Proven AI-at-scale delivery record in manufacturing with quantified value and hands-on leadership of LLM/Agentic AI initiatives.
- Deep understanding of shop-floor tech (MES/MOM, SCADA/DCS, historians such as PI/AVEVA, PLC/RTUs) and protocols (OPC UA, MQTT, Modbus, Kafka).
- Expertise in ML and CV stacks (PyTorch/TensorFlow, Triton, TensorRT, TAO Toolkit) and LLM/Agentic stacks (function calling, RAG, vector DBs, prompt/agent orchestration).
- MLOps and LLMOps (MLflow, Kubeflow, SageMaker/Vertex, Databricks, Feast, LangSmith/evaluation frameworks, guardrails).
- Edge AI deployment on NVIDIA/Intel/x86 GPUs, with K8s/K3s, Docker, and Triton Inference Server.
- Strong security and governance for IT/OT and AI/LLM (IEC 62443, Zero Trust, data residency, key/token vaults, prompt security).
- Executive communication: convert complex AI + Agentic architectures into board-level impact narratives.

Good-to-have skills:
- Agentic frameworks: LangGraph, AutoGen, CrewAI, Semantic Kernel, Guardrails, LMQL.
- Optimization & RL: cuOpt, Gurobi, OR-Tools, RLlib, Stable Baselines.
- Digital Twins & Simulation: NVIDIA Omniverse/Isaac/Modulus, AnyLogic, AspenTech, Siemens.
- Knowledge graphs & semantics: Neo4j, RDF/OWL, SPARQL, ontologies for manufacturing.
- Standards & frameworks: ISA-95, RAMI 4.0, MIMOSA, ISO 8000, DAMA-DMBOK.
- Experience in regulated sectors (Pharma/MedTech, Aero/Defense, Automotive).
- AI/ML/LLM: PyTorch, TensorFlow, ONNX, Triton, TensorRT, TAO Toolkit, RAPIDS, LangChain/LangGraph, AutoGen, Semantic Kernel, Guardrails, OpenVINO.
- MLOps/LLMOps/DataOps: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Feast, Airflow/Prefect, Great Expectations, LangSmith, PromptLayer.
- Edge/OT: NVIDIA Jetson/IGX, K3s/K8s, Docker, OPC UA, MQTT, Ignition, PI/AVEVA, ThingWorx.
- Data/Streaming/RAG: Kafka, Flink/Spark, Delta/Iceberg/Hudi, Snowflake/BigQuery/Synapse, vector DBs (Milvus, FAISS, Qdrant, Weaviate), KG (Neo4j).
- Cloud: AWS/Azure/GCP (at least one at expert level), Kubernetes; security (CISSP/IEC 62443) a plus.
- Lean/Six Sigma/TPM is nice to have for credibility with operations.

Leadership & Behavioral Competencies:
- C-suite advisor and storyteller with an outcome-first mindset.
- Architectural authority balancing speed, safety, and scale.
- Builds people across DS/ML, DE, MLOps/LLMOps, and OT.
- Change leader who can operationalize AI and agents on real shop floors.

Additional Info:
- A minimum of 20 years of progressive information technology experience is required.
- A Bachelor's/Master's in Engineering, CS, or Data Science (PhD preferred for R&D-heavy roles) is required.
- This position is based at the Bengaluru location.

Qualification: 15 years full-time education
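To make the agent-safety controls named above concrete (role-scoped allow lists, human-in-the-loop gates for high-impact actions, audit trails), here is a hedged Python sketch; the policy values, action names, and logging setup are illustrative assumptions, not any product's real configuration.

import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

@dataclass
class AgentPolicy:
    allowed_tools: set[str]
    require_approval: set[str] = field(default_factory=set)

POLICY = AgentPolicy(
    allowed_tools={"read_historian", "create_work_order", "adjust_setpoint"},
    require_approval={"adjust_setpoint"},   # closed-loop control needs a human gate
)

def execute(tool: str, args: dict, approver=None) -> str:
    if tool not in POLICY.allowed_tools:
        audit.warning("DENIED %s %s", tool, args)
        return "denied: tool not in agent scope"
    if tool in POLICY.require_approval and not (approver and approver(tool, args)):
        audit.info("HELD %s %s awaiting approval", tool, args)
        return "held: awaiting human approval"
    audit.info("EXECUTED %s %s", tool, args)
    return f"ok: {tool} executed"            # real dispatch to MES/SCADA omitted

print(execute("adjust_setpoint", {"line": 3, "temp_c": 182}))                       # held
print(execute("adjust_setpoint", {"line": 3, "temp_c": 182}, approver=lambda t, a: True))  # executed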

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

We are seeking an enthusiastic and dedicated individual to join our successful and expanding team. The ideal candidate should be a self-starter capable of working autonomously as well as collaboratively within a team environment. As a Frontend Engineer specializing in React.js, you will be responsible for developing and maintaining new and existing applications using React.js, including both CRA- and Next.js-based applications. You will work closely with our development and management teams to comprehend project requirements fully. Your role will involve actively contributing to the design and implementation of ideas across the project's lifecycle, taking ownership of assigned tasks and ensuring timely delivery. The ideal candidate should possess a minimum of 5 years of practical experience in front-end web development using React.js, with proficiency in JavaScript and TypeScript. Experience with the Next.js framework is essential, along with a proven track record of leading and building production-ready, responsive web applications from inception to completion. Strong communication skills are paramount, as you will be required to collaborate with back-end and system architecture teams to execute complex solutions effectively. Key qualifications include a deep understanding of JavaScript, HTML, and CSS, as well as expertise in React.js fundamentals such as JSX, the Virtual DOM, component lifecycle, and event handling. Experience with responsive design techniques, front-end component libraries like MUI or Semantic UI, Formik, Redux, and familiarity with RESTful APIs/WebSockets are also crucial. Proficiency in browser-based debugging, performance testing, and version control systems like Git is expected. Desirable skills for this role include UI/UX design experience, knowledge of testing frameworks (e.g., Mocha, Jest), and familiarity with modern authentication mechanisms such as JWT/OAuth 2.0. If you are passionate about frontend development, possess excellent teamwork and organizational abilities, and have a drive for delivering high-quality solutions, we encourage you to apply for this exciting opportunity.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

7 - 12 Lacs

Bengaluru

Work from Office

As a Software Developer at Nokia, you will play a critical role in transforming our 5G network capabilities through innovative generative AI solutions. You will work closely with product managers, data scientists, and designers in a dynamic, agile environment that fosters creativity and teamwork. Your contributions will directly impact subscriber signaling and policy management, ensuring our applications are robust, secure, and responsive to customer needs.

You have: Bachelor's or Master's degree in Computer Science, Engineering, AI, or a related field. 5-9 years of work experience in Python development with a focus on natural language processing. At least 1 year of experience in developing GenAI applications using prompt engineering and in-context learning. Familiarity with orchestrating frameworks like Semantic Kernel, AutoGen, LangChain, and LlamaIndex.

It would be nice if you also had: 1-2+ years of experience in data and analytics initiatives in a corporate or data-driven startup environment. Understanding of advanced analytics, machine learning approaches, and machine learning pipelines. Experience in fine-tuning large language models. Certification as a Cloud Developer (Microsoft, IBM Cloud, AWS, Google).

You will: Design and implement innovative generative AI solutions that align with business objectives and optimize system performance. Integrate generative AI models into software applications, managing the end-to-end software life cycle. Utilize Python for backend development, focusing on server-side applications and API integrations. Stay abreast of AI advancements to inform improvements in application capabilities and performance. Lead documentation efforts and address production support issues to maintain system reliability.
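As a hedged illustration of the "prompt engineering and in-context learning" experience this role asks for, here is a minimal few-shot prompt-assembly sketch; the labels, example requests, and the stubbed completion call are invented for illustration and are not Nokia's actual workflow.

FEW_SHOT = [
    ("Subscriber exceeded the data cap, throttle to 1 Mbps", "POLICY_UPDATE"),
    ("Why did session setup fail for IMSI 404-10-12345?", "DIAGNOSTICS"),
    ("Add a weekend unlimited-streaming offer", "POLICY_CREATE"),
]

def build_prompt(request: str) -> str:
    lines = ["Classify the request into POLICY_CREATE, POLICY_UPDATE, or DIAGNOSTICS.", ""]
    for text, label in FEW_SHOT:                 # in-context examples
        lines += [f"Request: {text}", f"Label: {label}", ""]
    lines += [f"Request: {request}", "Label:"]
    return "\n".join(lines)

def complete(prompt: str) -> str:
    # Placeholder for the actual LLM call (OpenAI, Azure OpenAI, etc.).
    return "POLICY_UPDATE"

prompt = build_prompt("Raise the uplink QoS class for the enterprise APN")
print(prompt)
print("Predicted:", complete(prompt))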

Posted 3 weeks ago

Apply

10.0 - 13.0 years

10 - 20 Lacs

Patna

Work from Office

Job Summary: We are seeking a highly skilled and experienced Data Architect Implementation (Tech Lead) to lead the design and implementation of robust data architectures across enterprise-scale projects. The ideal candidate will have more than 10 years of overall experience, with at least 3 successful project deliveries as a Data Architect or Solution Architect within India. You will collaborate with cross-functional teams, guide technical delivery, and ensure scalable, high-performance data solutions that align with business goals.

Key Responsibilities: Lead the end-to-end design and implementation of enterprise data architecture. Translate business requirements into logical and physical data models. Define data strategy, architecture standards, and governance frameworks. Architect data integration solutions using ETL/ELT pipelines, APIs, and real-time data streaming. Implement data security, privacy, lineage, and compliance policies. Evaluate and recommend technologies related to data storage, processing, and analytics. Collaborate with stakeholders including business users, project managers, and development teams to deliver scalable data solutions. Guide and mentor technical teams through best practices and solution reviews. Participate in project planning, effort estimation, and risk management.

Required Qualifications: 10+ years of experience in data engineering, data architecture, or solution architecture roles. Proven experience as a Data Architect / Solution Architect in at least 3 projects in India. Strong expertise in: data modeling (conceptual, logical, physical); data warehousing and data lakes; SQL and NoSQL databases (e.g., SQL Server, PostgreSQL, MongoDB, Cassandra); ETL/ELT tools (e.g., Informatica, Talend, Apache NiFi, dbt); big data technologies (e.g., Hadoop, Spark, Hive); cloud platforms (e.g., AWS, Azure, GCP) and their native data services; Data Governance and MDM frameworks. Strong understanding of performance tuning, data security, and data lifecycle management. Experience with tools like Apache Kafka, Airflow, Power BI, Tableau, etc. is a plus. Excellent communication and leadership skills.

Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Relevant certifications (e.g., AWS/Azure Certified Data Architect, TOGAF, CDMP). Experience with Agile and DevOps methodologies. Exposure to Indian government or public sector data projects (optional but preferred).

Posted 3 weeks ago

Apply

1.0 - 6.0 years

0 - 0 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & Responsibilities

Job Summary: We are looking for a skilled Data Engineer / BI Developer with experience in designing and managing the semantic layer for business intelligence platforms and performing historical data loads for analytical systems. This role will support enterprise data models, enhance reporting efficiency, and ensure consistent and reliable access to historical data for decision-making.

Key Responsibilities

Semantic Layer Development: Design and implement a semantic layer using tools like Tableau, Power BI, Looker, SAP BO, or custom views. Translate business requirements into logical models that abstract complex data into user-friendly, business-oriented terms. Define business metrics, dimensions, hierarchies, and calculated fields to support consistent analytics. Optimize semantic layer performance for large-scale reporting and self-service BI.

Historical Data Loading: Design and implement processes for initial and incremental historical data loads into data warehouses or data lakes. Use ETL/ELT tools (e.g., Informatica, Talend, dbt, Azure Data Factory, AWS Glue) to load and transform large historical datasets. Ensure data accuracy and versioning, and maintain proper data lineage for historical records. Implement and maintain SCD (Slowly Changing Dimensions) and time-based data tracking logic.

General Responsibilities: Collaborate with data modelers, analysts, and business stakeholders to define semantic and data retention requirements. Document semantic models and historical data load processes. Troubleshoot data issues and support UAT and production deployments. Ensure compliance with data governance and data quality standards.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field. 4+ years of experience in data engineering or BI development. Hands-on experience with semantic modeling tools (Tableau semantic layer, Power BI datasets, LookML, etc.). Proficient in SQL, data modeling, and ETL/ELT pipelines. Experience with historical data handling techniques, including SCD Type 1/2, snapshots, and time-series data. Familiarity with data warehouse technologies (Snowflake, Redshift, BigQuery, etc.). Strong communication and documentation skills.

Preferred Qualifications: Experience with data virtualization or BI metadata management tools. Cloud platform experience (AWS, Azure, or GCP). Understanding of data governance, security, and compliance frameworks. Knowledge of Agile/Scrum project methodologies.
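For illustration of the SCD Type 2 handling this posting mentions, here is a minimal sketch: when a tracked attribute changes, the current row is closed out and a new current version is inserted. The table layout and sample change are assumptions, implemented on SQLite only so the example is self-contained.

import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_id TEXT, city TEXT,
    valid_from  TEXT, valid_to TEXT, is_current INTEGER
);
INSERT INTO dim_customer VALUES ('C1', 'Pune', '2023-01-01', '9999-12-31', 1);
""")

def apply_scd2(customer_id: str, new_city: str, load_date: str):
    current = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id=? AND is_current=1",
        (customer_id,)).fetchone()
    if current and current[0] != new_city:          # tracked attribute changed -> version it
        conn.execute(
            "UPDATE dim_customer SET valid_to=?, is_current=0 "
            "WHERE customer_id=? AND is_current=1", (load_date, customer_id))
        conn.execute("INSERT INTO dim_customer VALUES (?,?,?, '9999-12-31', 1)",
                     (customer_id, new_city, load_date))

apply_scd2("C1", "Hyderabad", date(2024, 6, 1).isoformat())
for row in conn.execute("SELECT * FROM dim_customer ORDER BY valid_from"):
    print(row)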

Posted 3 weeks ago

Apply

2.0 - 4.0 years

7 - 12 Lacs

bengaluru

Work from Office

As a Software Engineer, you will excel at finding innovative solutions to complex problems and bring experience with Kubernetes for deploying containerized ML applications, demonstrating expertise in cloud-native architectures. You will drive code refactoring and improvements across our products, ensuring their reliability and performance. Your commitment to code quality is essential for delivering high-performing software.
You have:
Bachelor's or Master's degree in Computer Science, Engineering, AI, or a related field.
2-4 years of experience in Python development with a focus on natural language processing.
1 year of experience in developing GenAI applications using prompt engineering and in-context learning (a minimal few-shot prompt sketch follows below).
Knowledge of orchestration frameworks like Semantic Kernel, AutoGen, LangChain, and LlamaIndex.
It would be nice if you also had:
1-2+ years of experience in data and analytics initiatives in a corporate or data-driven startup environment.
Understanding of advanced analytics, machine learning approaches, and machine learning pipelines.
Familiarity with fine-tuning large language models.
You will:
Develop high-complexity features to enhance our cloud-native ML applications.
Deploy and manage Kubernetes environments for containerized ML application execution.
Implement best practices in code quality and performance testing within the development lifecycle.
Stay updated on advancements in Generative AI and telecommunications trends to drive innovation.
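As a rough illustration of the in-context learning requirement above, the sketch below builds a few-shot classification prompt in plain Python. The example tickets, labels, and the `call_llm` wrapper are hypothetical placeholders for whatever chat-completion client a team actually uses.

```python
# Minimal in-context learning (few-shot prompting) sketch in plain Python.
# Examples, labels, and `call_llm` are invented placeholders, not a specific product's API.
FEW_SHOT_EXAMPLES = [
    ("The handover latency exceeded the threshold again.", "network_performance"),
    ("Customer cannot update their billing address.", "account_management"),
]

def build_prompt(ticket_text: str) -> str:
    """Compose a classification prompt from labelled examples plus the new input."""
    lines = ["Classify each support ticket into a category.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}")
        lines.append(f"Category: {label}")
        lines.append("")
    lines.append(f"Ticket: {ticket_text}")
    lines.append("Category:")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:  # hypothetical client wrapper
    raise NotImplementedError("Plug in the actual model client here.")

if __name__ == "__main__":
    print(build_prompt("5G data session drops after roughly ten minutes."))
```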

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

ahmedabad

Remote

What We’re Looking For:
3–4 years of experience in AI/ML, NLP, or Intelligent Automation
Hands-on Python + Agentic AI frameworks (LangChain / Semantic Kernel / AutoGen / CrewAI, etc.)
Knowledge of LLM APIs, Vector Databases, RAG architectures (a minimal retrieval sketch follows below)
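As a loose illustration of the RAG pattern this role asks for, the sketch below retrieves the documents most similar to a query by cosine similarity and grounds a prompt in them. The `embed` function is a fake stand-in for a real embedding model or vector-database lookup, and the documents are invented.

```python
# Minimal retrieval-augmented generation (RAG) retrieval sketch.
# `embed` is a hypothetical placeholder for a real embedding model; documents are invented.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Fake deterministic embedding, for illustration only."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Passwords can be reset from the account settings page.",
]
DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed(query)
    sims = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [DOCUMENTS[i] for i in top]

def build_rag_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_rag_prompt("How long do refunds take?"))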

Posted 3 weeks ago

Apply

5.0 - 9.0 years

7 - 12 Lacs

bengaluru

Work from Office

As a Software Developer at Nokia, you will play a critical role in transforming our 5G network capabilities through innovative generative AI solutions. You will work closely with product managers, data scientists, and designers in a dynamic, agile environment that fosters creativity and teamwork. Your contributions will directly impact subscriber signaling and policy management, ensuring our applications are robust, secure, and responsive to customer needs.
You have:
Bachelor's or Master's degree in Computer Science, Engineering, AI, or a related field.
5-9 years of work experience in Python development with a focus on natural language processing.
At least 1 year of experience in developing GenAI applications using prompt engineering and in-context learning.
Familiarity with orchestration frameworks like Semantic Kernel, AutoGen, LangChain, and LlamaIndex.
It would be nice if you also had:
1-2+ years of experience in data and analytics initiatives in a corporate or data-driven startup environment.
Understanding of advanced analytics, machine learning approaches, and machine learning pipelines.
Experience in fine-tuning large language models.
Certification as a Cloud Developer (Microsoft, IBM Cloud, AWS, Google).
You will:
Design and implement innovative generative AI solutions that align with business objectives and optimize system performance.
Integrate generative AI models into software applications, managing the end-to-end software life cycle.
Utilize Python for backend development, focusing on server-side applications and API integrations (a minimal endpoint sketch follows below).
Stay abreast of AI advancements to inform improvements in application capabilities and performance.
Lead documentation efforts and address production support issues to maintain system reliability.
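To illustrate the kind of server-side API integration mentioned above, here is a minimal FastAPI sketch that puts a generative model call behind an HTTP endpoint. The route, request fields, and `generate_reply` stub are assumptions for illustration, not Nokia's actual interface.

```python
# Minimal sketch of a Python backend endpoint wrapping a generative model call.
# FastAPI is used for illustration; `generate_reply` is a hypothetical stub for the real client.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PolicyQuery(BaseModel):
    subscriber_id: str
    question: str

def generate_reply(prompt: str) -> str:
    """Hypothetical model call; swap in the actual GenAI client here."""
    return f"(model output for: {prompt[:60]}...)"

@app.post("/v1/policy-assistant")
def policy_assistant(query: PolicyQuery) -> dict:
    """Compose a prompt from the request and return the model's answer."""
    prompt = f"Subscriber {query.subscriber_id} asks about policy management: {query.question}"
    return {"answer": generate_reply(prompt)}

# Run locally with, e.g.:  uvicorn app:app --reload
```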

Posted 3 weeks ago

Apply

3.0 - 8.0 years

2 - 7 Lacs

ahmedabad

Remote

What You’ll Do:
Hands-on with LangChain / Semantic Kernel / CrewAI / AutoGen (a minimal agent-loop sketch follows below)
Experience in LLMs, APIs, Cloud, Vector DBs, RAG
Strong Python + AI/ML/NLP background
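As a framework-neutral illustration of what LangChain, Semantic Kernel, CrewAI, and AutoGen orchestrate, the sketch below hand-rolls a tiny tool-calling agent loop in plain Python. `fake_model`, the single tool, and the message format are invented placeholders rather than any framework's real API.

```python
# Minimal framework-free sketch of the agent pattern: a loop where a model (stubbed here)
# either requests a tool call or returns a final answer, and tool results feed back in.
import json

def get_weather(city: str) -> str:
    """Toy tool; a real agent would call an actual API."""
    return f"Weather in {city}: 31°C, clear."

TOOLS = {"get_weather": get_weather}

def fake_model(messages: list[dict]) -> dict:
    """Hypothetical stand-in for an LLM: first asks for a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "arguments": {"city": "Ahmedabad"}}
    return {"final": "It is 31°C and clear in Ahmedabad."}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(5):                         # cap the loop to avoid runaway tool calls
        decision = fake_model(messages)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["arguments"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "Gave up after too many tool calls."

if __name__ == "__main__":
    print(run_agent("What's the weather in Ahmedabad right now?"))
```

The named frameworks wrap exactly this loop with tool schemas, memory, and model integrations.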

Posted 3 weeks ago

Apply

15.0 - 20.0 years

2 - 5 Lacs

hyderabad

Work from Office

About The Role
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and regression suites. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors and reports defects to support continuous improvement activities for the end-to-end testing process.
Must have skills: Microsoft Fabric
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Quality Engineer, you will enable full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Your typical day will involve performing continuous testing for security, API, and regression suites, creating automation strategies, and supporting data and environment configurations. You will also participate in code reviews, monitor and report defects, and engage in continuous improvement activities for the end-to-end testing process, ensuring that the highest quality standards are met throughout the project lifecycle.
Roles & Responsibilities:
Expected to perform independently and become an SME.
Required active participation/contribution in team discussions.
Contribute to providing solutions to work-related problems.
Develop and implement comprehensive testing strategies to ensure product quality.
Collaborate with cross-functional teams to identify and resolve defects efficiently.
Professional & Technical Skills:
Power BI: strong understanding of testing Power BI reports and dashboards, including data models, visualizations, and DAX measures.
SQL and Data Modelling: familiarity with SQL for data extraction and manipulation, as well as data modelling concepts.
MS Fabric: some exposure to or understanding of Fabric (semantic model, ETL pipelines, monitoring tools).
Data Analysis & Validation: ability to analyse data, identify data quality issues, and validate accuracy (a minimal validation sketch follows below).
Test Planning and Execution: designing and executing test strategies and plans.
Testing Methodologies: experience with various testing methodologies, including functional, performance, integration, and user acceptance testing.
Documentation: creating and maintaining detailed test documentation, including test plans, test cases, and defect reports.
Communication and Collaboration: excellent communication and collaboration skills to work effectively with stakeholders and developers.
Qualification: 15 years full time education
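For the data analysis and validation skill above, a pytest-style sketch of dataset checks (duplicate keys, missing values, value ranges, and reconciliation against a reported total) might look like the following; the table, columns, and totals are hypothetical.

```python
# Minimal data-validation sketch of the kind used to check a report's source dataset.
# Table, column names, and the reported total are invented for illustration.
import pandas as pd
import pytest

@pytest.fixture
def sales_extract() -> pd.DataFrame:
    # In a real suite this would be pulled via SQL from the warehouse or semantic model.
    return pd.DataFrame(
        {
            "order_id": [1001, 1002, 1003],
            "region": ["North", "South", "North"],
            "revenue": [250.0, 480.5, 99.9],
        }
    )

def test_no_duplicate_keys(sales_extract):
    assert not sales_extract["order_id"].duplicated().any()

def test_no_missing_values(sales_extract):
    assert not sales_extract.isna().any().any()

def test_revenue_is_non_negative(sales_extract):
    assert (sales_extract["revenue"] >= 0).all()

def test_total_matches_report(sales_extract):
    # Reconciles the dataset total against the figure shown on the dashboard (hypothetical value).
    reported_total = 830.4
    assert sales_extract["revenue"].sum() == pytest.approx(reported_total)
```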

Posted 3 weeks ago

Apply