
3959 Retrieval Jobs - Page 9

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Location: Indore (Work from Office; only Indore-based candidates should apply)
Experience: 5+ years in AI/ML, with deep expertise in RAG and real-world deployments
Type: Full-time
Reports to: Founder / CTO

About Us
We are a tech company specializing in fan engagement solutions for sports leagues, teams, and iGaming operators. With a strong suite of fantasy games, prediction markets, and interactive platforms, we've built engaging digital experiences for millions of sports fans. We are now building the AI layer into our offerings, and we're looking for the right person to lead this frontier.

Role Overview
We are hiring a senior AI leader who will own the strategy, research, and implementation of Artificial Intelligence within our products. The initial focus will be on building robust RAG-based systems, integrating internal and external data sources to provide intelligent insights and automation across our gaming and fan engagement platforms. This is a high-impact, hands-on leadership role with the mandate to build and scale our AI capabilities from the ground up.

Key Responsibilities
Define and lead the AI roadmap, starting with Retrieval-Augmented Generation (RAG) systems for internal tools and user-facing features.
Build proof-of-concepts and MVPs that demonstrate the value of AI in fan engagement, fantasy gaming, and prediction systems.
Work with structured and unstructured data (game stats, user behavior, content) to train and deploy intelligent agents and recommendation systems.
Collaborate closely with product managers, data engineers, and front-end/backend developers to integrate AI solutions into live products.
Evaluate and implement open-source and commercial LLMs, vector databases, and toolchains for optimal performance and cost.
Hire and mentor a small team of AI/ML engineers as the department scales.

What We're Looking For
Proven track record of delivering AI/ML features in production environments (not just research).
Deep hands-on experience with RAG pipelines, including vector databases (Pinecone, Weaviate, FAISS), chunking strategies, embedding models, and prompt engineering.
Strong understanding of transformer-based LLMs and fine-tuning approaches.
Comfortable working with APIs, backend systems, and integrating AI into real-world software products.
Bonus: experience in personalization, recommendation systems, or content generation in the sports, media, or gaming domain.

Why Join Us
Opportunity to build the AI foundation of a sports-focused product development company.
Freedom to experiment and deploy cutting-edge AI in consumer-facing applications with high engagement.
Work on unique sports-related problems that combine data, content, and user behavior in creative ways.
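The RAG stack this listing centres on (embedding model, vector index, retrieval, prompt assembly) can be illustrated with a minimal sketch. It assumes the sentence-transformers and faiss-cpu packages; the embedding model name is an arbitrary choice and call_llm is a hypothetical placeholder for whichever LLM client the team would actually use.

```python
# Minimal RAG sketch: embed document chunks, index them with FAISS, retrieve the
# best matches for a query, and assemble a grounded prompt.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Fantasy contest scoring: batters earn 1 point per run and 25 per century.",
    "Prediction markets on abandoned matches are settled as void and refunded.",
    "Push notifications sent within 5 minutes of a wicket see higher click-through.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # arbitrary embedding model
vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])          # inner product = cosine on unit vectors
index.add(np.asarray(vectors, dtype="float32"))

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the chosen LLM client here")  # placeholder

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(retrieve("How are abandoned matches settled?"))
```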

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a skilled Lead Software Engineer to join our team and lead a project focused on developing GenAI applications using Large Language Models (LLMs) and Python programming. In this role, you will be responsible for designing and optimizing AI-generated text prompts to maximize effectiveness for various applications. You will also collaborate with cross-functional teams to ensure seamless integration of optimized prompts into the overall product or system. Your expertise in prompt engineering principles and techniques will allow you to guide models to desired outcomes and evaluate prompt performance to identify areas for optimization and iteration.

Responsibilities
Design, develop, test and refine AI-generated text prompts to maximize effectiveness for various applications
Ensure seamless integration of optimized prompts into the overall product or system
Rigorously evaluate prompt performance using metrics and user feedback
Collaborate with cross-functional teams to understand requirements and ensure prompts align with business goals and user needs
Document prompt engineering processes and outcomes, educate teams on prompt best practices, and keep updated on the latest AI advancements to bring innovative solutions to the project

Requirements
7 to 12 years of relevant professional experience
Expertise in Python programming, including experience with AI/machine learning frameworks like TensorFlow, PyTorch, Keras, LangChain, MLflow, Promptflow
2-5 years of working knowledge of NLP and LLMs like BERT, GPT-3/4, T5, etc., including how these models work and how to fine-tune them
Expertise in prompt engineering principles and techniques like chain of thought, in-context learning, tree of thought, etc.
Knowledge of retrieval augmented generation (RAG)
Strong analytical and problem-solving skills with the ability to think critically and troubleshoot issues
Excellent communication skills, both verbal and written, in English at a B2+ level, for collaborating across teams, explaining technical concepts, and documenting work outcomes
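The prompt-engineering loop this role describes (design a template, run it against the model, score the outputs, iterate) can be sketched in plain Python. The chain-of-thought template and the scoring rule are illustrative only, and call_llm is a hypothetical placeholder for whichever LLM API is used.

```python
# Sketch of a chain-of-thought prompt template plus a crude evaluation pass
# over labelled examples; real evaluation would use richer metrics and feedback.
from dataclasses import dataclass

COT_TEMPLATE = (
    "You are a support assistant.\n"
    "Question: {question}\n"
    "Think through the problem step by step, then give a one-line answer "
    "prefixed with 'ANSWER:'."
)

@dataclass
class Example:
    question: str
    expected: str

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in the real model client here")  # placeholder

def extract_answer(completion: str) -> str:
    # keep only the text after the final 'ANSWER:' marker
    return completion.rsplit("ANSWER:", 1)[-1].strip()

def evaluate(examples: list[Example]) -> float:
    hits = 0
    for ex in examples:
        completion = call_llm(COT_TEMPLATE.format(question=ex.question))
        hits += extract_answer(completion).lower() == ex.expected.lower()
    return hits / len(examples)
```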

Posted 3 days ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site


Key Responsibility Lead the fine-tuning and domain adaptation of open-source LLMs (e.g., LLaMA 3) using frameworks like vLLM, HuggingFace, DeepSpeed, and PEFT techniques. Develop data pipelines to ingest, clean, and structure cybersecurity data, including threat intelligence reports, CVEs, exploits, malware analysis, and configuration files. Collaborate with cybersecurity analysts to build taxonomy and structured knowledge representations to embed into LLMs. Drive the design and execution of evaluation frameworks specific to cybersecurity tasks (e.g., classification, summarization, anomaly detection). Own the lifecycle of model development including training, inference optimization, testing, and deployment. Provide technical leadership and mentorship to a team of ML engineers and researchers. Stay current with advances in LLM architectures, cybersecurity datasets, and AI-based threat detection. Advocate for ethical AI use and model robustness, especially given the sensitive nature of cybersecurity data Requirements Required Skills: 5+ years of experience in machine learning, with at least 2 years focused on LLM training or fine-tuning. Strong experience with vLLM, HuggingFace Transformers, LoRA/QLoRA, and distributed training techniques. Proven experience working with cybersecurity data—ideally including MITRE ATT&CK, CVE/NVD databases, YARA rules, Snort/Suricata rules, STIX/TAXII, or malware datasets. Proficiency in Python, ML libraries (PyTorch, Transformers), and MLOps practices. Familiarity with prompt engineering, RAG (Retrieval-Augmented Generation), and vector stores like FAISS or Weaviate. Demonstrated ability to lead projects and collaborate across interdisciplinary teams. Excellent problem-solving skills and strong written & verbal communication. Nice to Have Experience deploying models via vLLM in production environments with FastAPI or similar APIs. Knowledge of cloud-based ML training (AWS/GCP/Azure) and GPU infrastructure. Background in reverse engineering, malware analysis, red teaming, or threat hunting. Publications, open-source contributions, or technical blogs in the intersection of AI and cybersecurity.
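A minimal sketch of the LoRA-style fine-tuning workflow mentioned above, using Hugging Face Transformers and PEFT. The model id is a placeholder (Llama checkpoints are gated), the dtype and device settings assume a recent GPU, and the tokenized cybersecurity dataset plus the Trainer loop are omitted for brevity.

```python
# LoRA adaptation sketch: wrap a causal LM with low-rank adapters so that only
# a small fraction of parameters is trained during domain adaptation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3-8B"   # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # assumes a GPU with bf16 support
    device_map="auto",            # requires the accelerate package
)

lora_config = LoraConfig(
    r=8,                                    # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of weights
# training would proceed with a transformers Trainer over the tokenized corpus
```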

Posted 3 days ago

Apply

8.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site


About The Role Grade Level (for internal use): 11 The Team Our team is on an exciting journey to build Kensho Spark Assist, S&P Global’s internal conversational AI platform, designed to support colleagues across all departments. We work collaboratively with internal and external partners, using data-driven decisions and continuous improvement to create value. Forward-thinking in nature, we leverage modern generative AI models and cloud services. Our focus is on the creation of scalable systems over customized solutions, all while prioritizing the needs of our stakeholders. What You Stand To Gain Build a rewarding career with a leading global company in an international team. Develop relevant solutions that enhance efficiency and drive innovation across S&P Global's diverse departments. Enhance your skills by engaging with enterprise-level products and cutting-edge genAI technologies. Work alongside experts in AI and technology, gaining insights and experience that will propel your career forward. Responsibilities Develop clean, high-quality Python code that is easy to read and maintain. Solve complex problems by analyzing and isolating issues efficiently. Champion best practices in coding and serve as a subject matter expert. Design and implement solutions to support key business needs. Engineer components and API functions using Python. Produce system design documents and lead technical walkthroughs. Collaborate effectively with both technical and non-technical partners to achieve project goals. Continuously improve the architecture to enhance system performance and scalability. Provide technical guidance and mentorship to team members, fostering a culture of continuous improvement. Basic Qualifications 8+ years of experience in designing and building solutions using distributed computing. Proven experience in implementing and maintaining web applications in large-scale environments. Experience working with business stakeholders and users, providing research direction and solution design. Experience with CI/CD pipelines to automate the deployment and testing of software. Proficient programming skills in high-level languages, particularly Python. Solid knowledge of cloud platforms such as Azure and AWS. Experience with SQL and NoSQL such as Azure Cosmos DB and PostgreSQL Ability to quickly define and prototype solutions with continual iteration within challenging timelines. Strong communication and documentation skills for both technical and non-technical audiences. Preferred Qualifications Generative AI Expertise: Deep understanding of generative AI models, including experience with large language models (LLMs) such as GPT, BERT, and Transformer architectures. Embedding Techniques: Proficiency in creating and utilizing embeddings for various applications, including semantic search and recommendation systems. Machine Learning and NLP: Experience with machine learning models and natural language processing techniques to enhance AI-driven solutions. Vector Search and Retrieval: Familiarity with vector search techniques and embedding models for efficient data retrieval and analysis. Cloud Platforms: Knowledge of cloud services such as AWS, Azure, or Google Cloud for deploying and managing AI solutions. Collaboration and Leadership: Ability to lead, train, and mentor team members effectively. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. 
Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317163 Posted On: 2025-06-26 Location: Hyderabad, Telangana, India

Posted 3 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Greetings from Synergy Resource Solutions, a leading recruitment consultancy firm.

Our client is an ISO 27001:2013 and ISO 9001 certified company and a pioneer in web design and development from India. The company has also been voted among the top 10 mobile app development companies in India. It is a leading IT consulting and web solution provider for custom software, websites, games, custom web applications, enterprise mobility, mobile apps and cloud-based application design and development. Ranked as one of the fastest-growing web design and development companies in India, it has successfully delivered 3900+ projects across the United States, UK, UAE, Canada and other countries. A client retention rate of over 95% demonstrates its level of service and client satisfaction.

Position: Senior Database Administrator (WFO)
Experience: 5-8 years relevant experience
Education Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Job Location: Ahmedabad
Shift: 11 AM – 8.30 PM
CTC: 18 to 25 Lacs

Key Responsibilities:
Our client is seeking an experienced and motivated Senior Data Engineer to join their AI & Automation team. The ideal candidate will have 5-8 years of experience in data engineering, with a proven track record of designing and implementing scalable data solutions. A strong background in database technologies, data modeling, and data pipeline orchestration is essential. Additionally, hands-on experience with generative AI technologies and their applications in data workflows will set you apart. In this role, you will lead data engineering efforts to enhance automation, drive efficiency, and deliver data-driven insights across the organization.

Job Description:
• Design, build, and maintain scalable, high-performance data pipelines and ETL/ELT processes across diverse database platforms.
• Architect and optimize data storage solutions to ensure reliability, security, and scalability.
• Leverage generative AI tools and models to enhance data engineering workflows, drive automation, and improve insight generation.
• Collaborate with cross-functional teams (Data Scientists, Analysts, and Engineers) to understand and deliver on data requirements.
• Develop and enforce data quality standards, governance policies, and monitoring systems to ensure data integrity.
• Create and maintain comprehensive documentation for data systems, workflows, and models.
• Implement data modeling best practices and optimize data retrieval processes for better performance.
• Stay up-to-date with emerging technologies and bring innovative solutions to the team.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 5-8 years of experience in data engineering, designing and managing large-scale data systems.

The mandatory skills are as follows:
• SQL
• NoSQL (MongoDB, Cassandra, or CosmosDB)
• One of the following: Snowflake, Redshift, BigQuery, or Microsoft Fabric
• Azure

Strong expertise in database technologies, including:
• SQL Databases: PostgreSQL, MySQL, SQL Server
• NoSQL Databases: MongoDB, Cassandra
• Data Warehouse/Unified Platforms: Snowflake, Redshift, BigQuery, Microsoft Fabric

• Hands-on experience implementing and working with generative AI tools and models in production workflows.
• Proficiency in Python and SQL, with experience in data processing frameworks (e.g., Pandas, PySpark).
• Experience with ETL tools (e.g., Apache Airflow, MS Fabric, Informatica, Talend) and data pipeline orchestration platforms.
• Strong understanding of data architecture, data modeling, and data governance principles.
• Experience with cloud platforms (preferably Azure) and associated data services.

Skills:
• Advanced knowledge of Database Management Systems and ETL/ELT processes.
• Expertise in data modeling, data quality, and data governance.
• Proficiency in Python programming, version control systems (Git), and data pipeline orchestration tools.
• Familiarity with AI/ML technologies and their application in data engineering.
• Strong problem-solving and analytical skills, with the ability to troubleshoot complex data issues.
• Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders.
• Ability to work independently, lead projects, and mentor junior team members.
• Commitment to staying current with emerging technologies, trends, and best practices in the data engineering domain.

If your profile matches the requirements and you are interested in this job, please share your updated resume with details of your present salary, expected salary and notice period.
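A sketch of the kind of pipeline orchestration this role calls for, written as an Apache Airflow DAG (one of the ETL tools the posting names). It assumes Airflow 2.4+ and pandas with pyarrow; the paths and transforms are illustrative placeholders, not the client's actual pipeline.

```python
# Extract -> transform -> load as three Airflow tasks with explicit ordering.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/orders_raw.csv"          # placeholder landing location
CLEAN_PATH = "/tmp/orders_clean.parquet"

def extract():
    # in practice: pull from an API or SQL source instead of writing a stub file
    pd.DataFrame({"order_id": [1, 2], "amount": ["10.5", None]}).to_csv(RAW_PATH, index=False)

def transform():
    df = pd.read_csv(RAW_PATH)
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce").fillna(0.0)
    df.to_parquet(CLEAN_PATH, index=False)

def load():
    # in practice: write to Snowflake/Redshift/BigQuery or another warehouse
    print(pd.read_parquet(CLEAN_PATH).head())

with DAG(
    dag_id="orders_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```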

Posted 3 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


About the Role We are seeking a highly skilled and innovative AI Agent Developer with expertise in building intelligent agents using Large Language Models (LLMs) and integrating them with automation systems. The ideal candidate will have a deep understanding of prompt engineering, agent orchestration, tool integration, and automation workflows across APIs, web apps, and enterprise tools. Responsibilities Design, develop, and deploy AI agents powered by LLMs (e.g., GPT-4, Claude, Gemini). Integrate AI agents with tools, APIs, databases, and automation frameworks. Develop reusable prompt chains and workflows for common tasks and decision-making processes. Utilize frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel to manage multi-agent architectures. Fine-tune or instruct LLMs for specific use-cases or industry applications. Optimize performance, reliability, and cost-efficiency of AI workflows. Collaborate with data scientists, product managers, and engineers to design end-to-end AI solutions. Implement automation in internal tools, customer interactions, or operational pipelines using AI agents. Requirements Must-Have: Strong experience with LLMs such as OpenAI GPT, Anthropic Claude, or Meta Llama. Hands-on experience with agentic frameworks (LangChain, AutoGen, CrewAI, etc.). Proficient in Python and relevant AI libraries (e.g., HuggingFace, Transformers, LangChain). Solid understanding of prompt engineering and retrieval-augmented generation (RAG). Knowledge of automation tools like Zapier, Make, Airflow, or custom Python automation. Experience working with APIs, webhooks, and data integrations. Nice-to-Have: Experience with vector databases (e.g., Pinecone, Weaviate, FAISS). Knowledge of fine-tuning or customizing open-source LLMs. Familiarity with cloud platforms (AWS, GCP, Azure) and deployment of AI solutions. Experience with UI/UX for chatbot or agent interfaces.
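The agent pattern this role revolves around (an LLM choosing tools, the runtime executing them and feeding results back until a final answer) can be sketched without any framework; LangChain, AutoGen and CrewAI wrap the same loop with far more structure. call_llm and the two tools below are hypothetical placeholders.

```python
# Framework-free agent loop: the model returns JSON naming a tool (or a final
# answer); the runtime dispatches the tool and appends the result to the history.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"                 # stub; a real tool would call an API

def search_orders(order_id: str) -> str:
    return f"Order {order_id}: shipped"       # stub

TOOLS = {"get_weather": get_weather, "search_orders": search_orders}

def call_llm(messages: list[dict]) -> str:
    # placeholder: should return '{"tool": ..., "args": {...}}' or '{"final": ...}'
    raise NotImplementedError

def run_agent(user_query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Agent stopped without a final answer."
```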

Posted 3 days ago

Apply

1.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Introduction IBM Cognos Analytics is a comprehensive business intelligence platform that transforms raw data into actionable insights through advanced reporting, AI-powered analytics, and interactive visualizations. Designed to cater to organizations of all sizes, it offers high-quality, scalable reporting capabilities, enabling users to create and share customized reports efficiently. The platform's intuitive interface allows for seamless exploration of data, uncovering hidden trends and facilitating informed decision-making without the need for advanced technical skills. With robust governance and security features, IBM Cognos Analytics ensures data integrity and confidentiality, making it a trusted solution for businesses aiming to harness the full potential of their data. Your Role And Responsibilities Develop new features, enhancements, and bug fixes for the Cognos Analytics platform, following best coding practices and design principles. Create and integrate User Interfaces, APIs, services, and data connectors that allow the system to interact with various data sources and third-party applications. Develop and manage database interactions, ensuring optimal performance and data retrieval processes. Implement and follow coding standards, code reviews, and quality control processes to ensure high-quality code. Participate in daily stand-ups, sprint planning, and retrospectives. Follow and contribute to Agile practices. Actively investigate, troubleshoot, and resolve issues or bugs within the application, including those reported by end-users or QA. Identify performance bottlenecks and optimize the performance of Cognos Analytics features. Develop and execute unit/integrations tests to ensure individual components of the system function correctly. Write integration tests for system-wide functionality. Work closely with other teams, such as product management, business analysts, UX/UI designers, DevOps, SRE, and data engineers, to ensure requirements are understood and solutions meet business needs. Preferred Education Master's Degree Required Technical And Professional Expertise Bachelor’s degree in computer science or related field Excellent interpersonal and communication skills with ability to effectively articulate technical challenges and devise solutions 1-5 years of experience in automated quality engineering / software development / test automation. Proficiency in JavaScript, Java, Spring, SQL, RDBMS Proficiency in UI automation with Selenium Proficiency with shell/bash scripting and Linux/Unix command-line interface Proficiency in scripting on Microsoft Visual Studio Code Editor, etc. Experience or willingness to learn testing of cloud-native applications Understanding of agile development, test management, continuous integration, continuous development environment (CICD) with tools such as: GitHub, Jira, Jenkins etc. Other Tools: SSH clients, container technologies (ie: Docker, Podman). Ability to work independently in a large matrix organization. Troubleshoot and solve customer issues on production deployments Preferred Technical And Professional Experience Knowledge of Mobile App Testing and Automation frameworks and tools Knowledge of Performance Testing and Load Testing tools Proficiency in using automated API testing tools. Knowledge of programming using Go, C++, C# etc. Knowledge of software design patterns, microservices. Agile software development methodologies Knowledge of CI/CD, Openshift, Kubernetes etc. Ability to adapt to and learn new technologies
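A small sketch of the Selenium-based UI automation this listing requires, in pytest style. The URL, CSS selector and expected title are placeholders, not Cognos Analytics specifics.

```python
# Headless Chrome check that a (hypothetical) report page renders its title.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://analytics.example.com"   # placeholder environment

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_report_title_renders(driver):
    driver.get(f"{BASE_URL}/reports/quarterly-sales")
    title = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.report-title"))
    )
    assert "Quarterly Sales" in title.text
```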

Posted 3 days ago

Apply


3.0 years

0 Lacs

Greater Hyderabad Area

On-site


Key Responsibilities:
Design, build, and deploy scalable NLP/ML models for real-world applications.
Fine-tune and optimize Large Language Models (LLMs) using techniques like LoRA, PEFT, or QLoRA.
Work with transformer-based architectures (e.g., BERT, GPT, LLaMA, T5, etc.).
Develop GenAI applications using frameworks such as LangChain, Hugging Face, OpenAI API, or RAG (Retrieval-Augmented Generation).
Write clean, efficient, and testable Python code.
Collaborate with data scientists, software engineers, and stakeholders to define AI-driven solutions.
Evaluate model performance and iterate rapidly based on user feedback and metrics.

Required Skills & Qualifications:
3+ years of experience in Python programming with a strong understanding of ML pipelines.
Solid understanding and experience in NLP, including text preprocessing, embeddings, NER, sentiment analysis, etc.
Proficiency in ML libraries: scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, spaCy.
Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases (e.g., FAISS, ChromaDB).
Strong problem-solving and communication skills.
Ability to learn new tools and to work independently and collaboratively in a fast-paced environment.
Attention to detail and accuracy.

Preferred Skills:
Theoretical knowledge of or experience in Data Engineering, Data Science, AI, ML, RPA or other related domains.
Certification in Business Analysis or Project Management from a recognized institution.
Experience in working with agile methodologies such as Scrum or Kanban.

Company Profile:
Space Inventive is an innovative and dynamic company that specializes in leading businesses through transformative journeys in the digital era. They are pioneers in driving innovation and helping organizations transition into digitally mature entities. With a wide range of cutting-edge services, including enterprise web application development, AI & ML development, cloud engineering, data engineering, and business intelligence, Space crafts tailor-made solutions to meet each client's unique challenges. Their integrated approach combines strategic vision with digital expertise, empowering businesses to create new models, modernize legacy systems, and launch market-ready digital products and platforms. Having served esteemed clients like Novartis, BMS, and StarRez, among others, Space has a proven track record of success.
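The NLP preprocessing steps listed above (tokenization, lemmatization, stop-word removal, NER) look roughly like the spaCy sketch below. It assumes the en_core_web_sm model has been downloaded (python -m spacy download en_core_web_sm), and the sample sentence is invented.

```python
# Basic spaCy preprocessing and named-entity extraction on a single sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

text = "Space Inventive signed a three-year analytics deal with Novartis in Hyderabad."
doc = nlp(text)

# cleaned, lemmatized tokens suitable as features or embedding input
tokens = [t.lemma_.lower() for t in doc if not (t.is_stop or t.is_punct)]
print(tokens)

# named entities for downstream tagging or knowledge extraction
for ent in doc.ents:
    print(ent.text, ent.label_)
```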

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site


About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. 
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS or equivalent required, MSCS or equivalent strongly preferred Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran statues, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us, please refer to Netskope's Privacy Policy for more details.
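A compact sketch of the anomaly-detection responsibility described above, using scikit-learn's IsolationForest over synthetic log-derived features. The feature names, the injected outliers and the contamination rate are illustrative assumptions; production systems would use far richer features and streaming pipelines.

```python
# Unsupervised anomaly detection over per-host log features.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
logs = pd.DataFrame({
    "bytes_out": rng.lognormal(8, 1, 1000),
    "failed_logins": rng.poisson(0.2, 1000),
    "distinct_ports": rng.integers(1, 5, 1000),
})
# inject a few exfiltration-like outliers for illustration
logs.loc[:4, ["bytes_out", "distinct_ports"]] = [[5e7, 40]] * 5

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(logs)

logs["anomaly"] = model.predict(logs) == -1   # -1 means flagged as anomalous
print(logs[logs["anomaly"]].head())
```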

Posted 3 days ago

Apply

6.0 - 9.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


We are seeking a talented individual to join our Data Engineering team at Marsh McLennan. This role will be based in Mumbai/Pune/Gurgaon. This is a hybrid role that requires working at least three days a week in the office.

We will count on you to:
Solution Architecture: Lead the design and architecture of data engineering solutions that meet complex business requirements, ensuring scalability, reliability, and performance.
Data Pipeline Development: Oversee the development and maintenance of robust data pipelines and architectures to facilitate data ingestion, transformation, and storage from various sources.
Cloud Technologies Expertise: Utilize cloud data engineering tools such as Azure Data Factory, Databricks, or Amazon data engineering tools to implement and optimize data solutions.
Data Integration and Management: Integrate and manage data from diverse sources, ensuring seamless data flow and accessibility for analytics and reporting purposes.
Data Quality Assurance: Establish and enforce data quality standards and validation processes to ensure the accuracy, consistency, and reliability of data across the organization.
Performance Optimization: Monitor, troubleshoot, and optimize data pipelines for performance, scalability, and cost-effectiveness, making adjustments as necessary to improve efficiency.
Collaboration and Leadership: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver effective solutions. Provide mentorship and guidance to junior team members.
Documentation and Best Practices: Create and maintain comprehensive documentation for data architectures, pipelines, and processes, promoting best practices and knowledge sharing within the team.
Technical Skills: Utilize SQL for data manipulation and retrieval, and apply programming skills in languages such as Python or Scala for data processing tasks.
Continuous Improvement: Stay abreast of industry trends and emerging technologies in data engineering, proactively seeking opportunities to enhance existing processes, tools, and methodologies.

What you need to have:
Bachelor's or master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
6-9 years of experience in data engineering, database, ETL or data management related roles, with a focus on solutioning and architecture.
Proven expertise in cloud data engineering tools such as Azure Data Factory, Databricks, or AWS data engineering tools.
Strong proficiency in SQL and Python, and experience with ETL processes and tools.
Familiarity with data warehousing concepts and technologies, as well as big data frameworks.
Proficiency in extracting data from multiple data sources (web, PDF, Excel or any database), with broad working knowledge of methodologies used for analytics.

What makes you stand out?
Degree or certification in Data Engineering (AWS, Databricks) would be preferred.
Experience in the Healthcare/Insurance sector, working with multinational clients.
Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman . With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com , or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.

Posted 3 days ago

Apply

200.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Description
You are a strategic thinker passionate about driving solutions. You have found the right team. As an Associate within the VCG team, your primary responsibility will be to work on automation and redesign of existing implementations using Python. Alteryx skills are considered a plus.

Job Responsibilities
Automate Excel tasks by developing Python scripts with openpyxl, pandas, and xlrd, focusing on data extraction, transformation, and generating reports with charts and pivot tables.
Design and deploy interactive web applications using Streamlit, enabling real-time data interaction and integrating advanced analytics.
Use Matplotlib and Seaborn to create charts and graphs, adding interactive features for dynamic data exploration tailored to specific business needs.
Design intuitive user interfaces with PyQt or Flask, integrating data visualizations and ensuring secure access through authentication mechanisms.
Perform data manipulation and exploratory analysis using Pandas and NumPy, and develop data pipelines to maintain data quality and support analytics.
Write scripts to connect to external APIs, process data in JSON and XML formats, and ensure reliable data retrieval with robust error handling.
Collaborate with cross-functional teams to gather requirements, provide technical guidance, and ensure alignment on project goals, fostering open communication.
Demonstrate excellent problem-solving skills and the ability to troubleshoot and resolve technical issues.
Adhere to the control, governance, and development standards for intelligent solutions.
Demonstrate strong communication skills and the ability to work collaboratively with different teams.

Required Qualifications, Capabilities, And Skills
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in Python programming and automation.
Experience with Python libraries such as Pandas, NumPy, PyQt, Streamlit, Matplotlib, Seaborn, openpyxl, xlrd, Flask, PyPDF2, pdfplumber and SQLite.
Analytical and quantitative aptitude, and attention to detail.
Strong verbal and written communication skills.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing.
Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success. Global Finance & Business Management works to strategically manage capital, drive growth and efficiencies, maintain financial reporting and proactively manage risk. By providing information, analysis and recommendations to improve results and drive decisions, teams ensure the company can navigate all types of market conditions while protecting our fortress balance sheet.
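The Excel-report automation this role describes (pandas for the aggregation, openpyxl for the workbook and chart) might look like the sketch below; the sample data and file name are placeholders.

```python
# Aggregate with pandas, write a summary sheet with openpyxl, attach a bar chart.
import pandas as pd
from openpyxl import Workbook
from openpyxl.chart import BarChart, Reference

trades = pd.DataFrame({
    "desk": ["Rates", "Rates", "Credit", "Credit", "FX"],
    "pnl": [1.2, -0.4, 2.1, 0.7, 0.3],
})
summary = trades.groupby("desk", as_index=False)["pnl"].sum()

wb = Workbook()
ws = wb.active
ws.title = "PnL Summary"
ws.append(["Desk", "PnL (USD mm)"])
for row in summary.itertuples(index=False):
    ws.append(list(row))

chart = BarChart()
chart.title = "PnL by desk"
data = Reference(ws, min_col=2, min_row=1, max_row=ws.max_row)
cats = Reference(ws, min_col=1, min_row=2, max_row=ws.max_row)
chart.add_data(data, titles_from_data=True)
chart.set_categories(cats)
ws.add_chart(chart, "D2")

wb.save("pnl_summary.xlsx")
```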

Posted 3 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Minimum qualifications: Bachelor’s degree or equivalent practical experience. 2 years of experience working with operating systems, computer architecture, embedded systems and Linux/Unix kernel, etc. 2 years of experience with software development in C or C++ programming languages. 2 years of experience with data structures or algorithms. Experience with the Android platform. Preferred qualifications: 4 years of experience working in embedded systems and Linux/Unix kernel. Experience with System Software in any of the following areas: ARM/ARM64 architecture, compilers, firmware, Operating systems, Linux kernel, filesystems/storage, device drivers, performance tuning, networking, tools, tests, virtualization, platform libraries, etc. Experience developing and designing large software systems. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. As a member of the Android Systems team, you will work on the foundations of Android operating system and collaborate with Android teams in Google. You will contribute to the core of Android and work on variety of open source projects including the Linux kernel, Android OS, and build the future of Android together with our large partner ecosystem. You will work on areas such as storage, filesystems, low-level performance, and systems software. You will be contributing to Android's updatability, security and quality while working alongside leading domain experts from various areas. Android is Google’s open-source mobile operating system powering more than 3 billion devices worldwide. Android is about bringing computing to everyone in the world. We believe computing is a super power for good, enabling access to information, economic opportunity, productivity, connectivity between friends and family and more. We think everyone in the world should have access to the best computing has to offer. We provide the platform for original equipment manufacturers (OEMs) and developers to build compelling computing devices (smartphones, tablets, TVs, wearables, etc) that run the best apps/services for everyone in the world. Responsibilities Design, develop and deploy features for billions of users. Work on core system components including storage, filesystems, updatability, and virtualization. Create and ship Generic Kernel Image (GKI) for next generation devices. Scale development across a growing number of verticals (Wear, Auto, TV, large screen, etc.). Create and maintain a reliable and secure foundation for the Android software ecosystem. Google is proud to be an equal opportunity workplace and is an affirmative action employer. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 3 days ago

Apply

0 years

0 Lacs

Delhi, India

On-site


Job Title : Content Manager Office Location : Chhatarpur, New Delhi – 110074 Nature of work : Full-Time Permanent Role Experience : Basic camera handling and video editing skills. Salary / Compensation: Up to ₹20,000 per month (in-hand) Send your resume via WhatsApp @ 8076961363 or email it to Hr@ilahitravels.com . About the Role: We are seeking a detail-oriented and organized Content Manager to handle, sort, and manage our growing library of company shoot data. The ideal candidate will be responsible for ensuring that all visual and written content is properly categorized and easily accessible. You will also work closely with the sales and marketing teams to ensure they have timely access to relevant content for campaigns and client interactions. Key Responsibilities: Manage and organize all company shoot data (photos, videos, etc.) for easy retrieval and use. Filter and sort raw footage and content assets from various departments. Coordinate with the sales and marketing teams to share relevant content as per requirements. Maintain an updated database of visual assets for ongoing and future projects. Support the content team in repurposing and structuring old content for new formats. Ensure all content aligns with brand guidelines and is accessible on shared drives or content tools. Assist in basic editing or tagging of content, if required. Regularly audit and clean up content libraries for duplication and outdated material. Ideal Candidate: Strong organizational and coordination skills. Basic knowledge of camera handling and video editing is a must. Familiarity with content formats (images, videos, etc.). Comfortable using Google Drive or other file management systems. Excellent communication and collaboration abilities. Ability to work independently and manage multiple content requests at once.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Company Description Elhaa Technologies Pvt Ltd is a US and Asia-Pacific based technology company dedicated to transforming dreams into reality. We collaborate with the world's leading companies to build stronger businesses, guiding them from merely doing digital to truly being digital. 🚀 We're Hiring | Generative AI Specialist – RAG Systems 📍 Location : Hybrid | 🕒 Experience : 4+ Years | 💼 Full-time Are you passionate about Generative AI and how it’s transforming industries? We’re looking for someone with end-to-end project experience in building AI solutions — particularly using RAG (Retrieval-Augmented Generation) systems. What You’ll Work On: 🔹 Architecting and implementing RAG pipelines 🔹 Deep understanding of how RAG works behind the scenes 🔹 Working with Vector Databases (Pinecone, FAISS, Weaviate, etc.) 🔹 Prompt engineering and optimization 🔹 Fine-tuning LLMs for domain-specific tasks 🔹 Leveraging semantic search to improve response relevance 🔹 End-to-end deployment and performance tuning of GenAI apps Must-Have Skills: ✔ Strong grasp of Generative AI models & architecture ✔ Hands-on experience with Vector DBs and embedding techniques ✔ Knowledge of RAG components and how they integrate ✔ Experience in building and deploying full-stack AI solutions 🎯 If you’re excited to push the boundaries of AI and love working in fast-paced, innovation-driven environments, let’s connect! 📩 Apply now or DM me directly for more info. #Hiring #GenerativeAI #RAG #PromptEngineering #LLM #VectorDatabase #AIJobs #MachineLearning #SemanticSearch #AIEngineer #TechHiring #OpenToWork #AIdeveloper #NowHiring #Careers #JobAlert #JobOpening
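One of the RAG ingestion components this post alludes to, a fixed-size overlapping chunker, sketched in plain Python. The chunk size and overlap are illustrative; production pipelines often chunk by tokens or by document structure instead of raw characters.

```python
# Split a long document into overlapping character windows for embedding.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "Retrieval-Augmented Generation grounds model answers in retrieved context. " * 40
pieces = chunk_text(doc)
print(len(pieces), len(pieces[0]))
```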

Posted 3 days ago

Apply

2.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Job Title: Computer Vision Engineer
Location: Coimbatore, work-from-office role
Experience Required: 2+ years
Employment Type: Full-Time, Permanent
Company: Katomaran Technologies

About Us
Katomaran Technologies is a cutting-edge technology company building real-world AI applications that span computer vision, large language models (LLMs), and AI agentic systems. We are looking for a highly motivated Senior AI Developer who thrives at the intersection of technology, leadership, and innovation. You will play a core role in architecting AI products, leading engineering teams, and collaborating directly with the founding team and customers to turn vision into scalable, production-ready solutions.

Key Responsibilities
Architect and develop scalable AI solutions using computer vision, LLMs, and agent-based AI architectures.
Collaborate with the founding team to define product roadmaps and AI strategy.
Lead and mentor a team of AI and software engineers, ensuring high code quality and project delivery timelines.
Develop robust, efficient pipelines for model training, validation, deployment, and real-time inference.
Work closely with customers and internal stakeholders to translate requirements into AI-powered applications.
Stay up to date with state-of-the-art research in vision models (YOLO, SAM, CLIP, etc.), transformers, and agentic systems (AutoGPT-style orchestration).
Optimize AI models for deployment on cloud and edge environments.

Required Skills and Qualifications
Bachelor's or Master's in Computer Science, AI, Machine Learning, or related fields.
2+ years of hands-on experience building AI applications in computer vision and/or NLP.
Strong knowledge of deep learning frameworks (PyTorch, TensorFlow, OpenCV, HuggingFace, etc.).
Proven experience with LLM fine-tuning, prompt engineering, and embedding-based retrieval (RAG).
Solid understanding of agentic systems such as LangGraph, CrewAI, AutoGen, or custom orchestrators.
Ability to design and manage production-grade AI systems (Docker, REST APIs, GPU optimization, etc.).
Strong communication and leadership skills, with experience managing small to mid-size teams.
Startup mindset: self-driven, ownership-oriented, and comfortable in ambiguity.

Nice to Have
Experience with video analytics platforms or edge deployment (Jetson, Coral, etc.).
Experience with C++ programming will be an added advantage.
Knowledge of MLOps practices and tools (MLflow, Weights & Biases, ClearML, etc.).
Exposure to Reinforcement Learning or multi-agent collaboration models.

What We Offer
Medical insurance
Paid sick time
Paid time off
PF

To Apply: Send your resume, GitHub/portfolio, and a brief note about your most exciting AI project to hr@katomaran.com.
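A hedged sketch of a single detection pass of the kind this role's video pipelines would build on, using a pretrained torchvision Faster R-CNN rather than the YOLO or SAM models named in the posting. The random tensor stands in for a decoded video frame and the 0.8 threshold is arbitrary.

```python
# One object-detection pass with a pretrained torchvision model.
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

# stand-in for a decoded video frame (uint8, CHW); real code would read frames
# with OpenCV or torchvision.io and loop over the stream
frame = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)

with torch.no_grad():
    prediction = model([preprocess(frame)])[0]

categories = weights.meta["categories"]
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score >= 0.8:   # arbitrary confidence threshold
        print(categories[int(label)], [round(float(v)) for v in box], float(score))
```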

Posted 3 days ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

AI/ML ENGINEER Who We Are? Cleantech Industry Resources accelerates United States solar, battery storage and EV projects by providing turnkey development as a service including 100% internal systems engineering. The company deploys a leading team that spun out of the largest solar power producer in the world. This team operates within a sophisticated suite of software to support projects from land origination, through to commercial operation. Location Chennai What We Offer Opportunity to join a top-notch, collaborative team of professionals Fantastic team environment and collaborative culture Professional development opportunities to grow into an industry leader Medical Insurance for the employee and family Spot Recognition bonus for exceptional performance Long Term Incentive policy Regular team outings, events, and activities to foster a positive work environment Our Commitment to Diversity At CIR, we are dedicated to nurturing a diverse and equitable workforce that truly reflects our community. We deeply value each person’s unique perspective, skills, and experiences. CIR embraces all individuals, regardless of race, religion, sexual orientation, gender identity, age, or nationality. We are steadfast in our commitment to fostering a just and inclusive world through intentional policies and actions. Your individuality enriches our collective strength, and we strive to ensure everyone feels respected, valued, and empowered. Position Summary We are looking for an AI/ML Engineer to build and optimize machine learning models for GIS-based spatial analysis and data-driven decision-making. This role involves working on geospatial AI models, data pipelines, and Retrieval-Augmented Generation (RAG)-based applications for zoning, county sentiment analysis, and regulatory insights. The engineer will also work closely with the data team, leading efforts in data curation and building robust data pipelines to collect, preprocess, and analyse extensive datasets from various geospatial and regulatory sources to generate automated reports and insights. Core Responsibilities Machine Learning for GIS & Spatial Analysis: Develop and deploy ML models for geospatial data processing, forecasting, and automated GIS insights. Work with large-scale geospatial datasets (e.g., satellite imagery, shapefiles, raster/vector data). Create AI models for land classification, feature detection, and geospatial pattern analysis. Optimize spatial data pipelines and build predictive models for environmental and energy sector applications. Retrieval-Augmented Generation (RAG) & NLP Development: Develop RAG-based AI applications to extract insights from zoning, permitting, and regulatory documents. Build LLM-based applications for zoning law interpretation, county sentiment analysis, and compliance predictions. Implement document retrieval and summarization techniques for legal, policy, and energy development reports. Data Engineering & Pipeline Development: Lead the creation of ETL pipelines to collect and preprocess geospatial data for ML model training. Work with PostGIS, PostgreSQL, and cloud storage to manage structured and unstructured data. Collaborate with the data team to design and implement efficient data processing and storage solutions. AI Model Optimization & Deployment: Fine-tune LLMs for domain-specific applications in renewable energy and urban planning. Deploy AI models using cloud-based MLOps frameworks (AWS, GCP, Azure). 
Optimize ML model inference for real-time GIS applications and geospatial data analysis. Collaboration & Continuous Improvement: Work with cross-functional teams to ensure seamless AI integration with existing business processes. Engage in knowledge sharing and mentoring within the company. Stay updated with the latest advancements in AI, GIS, and NLP to improve existing models and solutions. Education Requirements Master’s in Computer Science, Data Science, Machine Learning, Geostatistics, or related fields. Technical Skills and Experience Software Proficiency: Programming: Python (TensorFlow, PyTorch, scikit-learn, pandas, NumPy), SQL. Machine Learning & AI: Deep learning, NLP, retrieval-based AI, geospatial AI, predictive modeling. GIS & Spatial Data Processing: Experience with PostGIS, GDAL, GeoPandas, QGIS, Google Earth Engine. LLM & RAG Development: Experience in fine-tuning LLMs, retrieval models, vector databases (FAISS, Weaviate). Cloud & MLOps: AWS/GCP/Azure, Docker, Kubernetes, MLflow, FastAPI. Big Data Processing: Experience with large-scale data mining, data annotation, and knowledge graph techniques. Database & Storage: PostgreSQL, NoSQL, vector databases, cloud storage solutions. Communication: Strong ability to explain complex AI/ML concepts to non-technical stakeholders. Project Management: Design experience in projects from conception to implementation. Ability to coordinate with other engineers and stakeholders. Renewable Energy Systems: Understanding of solar energy systems and their integration into existing infrastructure. Experience 2-4 years of experience. Experience in developing AI for the energy sector, urban planning, or environmental analysis. Strong understanding of potential prediction, zoning laws, and regulatory compliance AI applications. Familiarity with spatiotemporal ML models and satellite-based geospatial analytics. Psychosocial Skills / Human Skills / Behavioural Skills Strong analytical, organizational, and problem-solving skills. Management experience is a plus. Must be a go-getter with an enterprising attitude. A self-starter, able to demonstrate high levels of initiative and motivation. Entrepreneurial mindset with the ability to take ideas and run with them from concept to conclusion. Technical understanding of clean energy business processes. Exceptional verbal and written communication skills with superiors, peers, partners, and other stakeholders. Excellent interpersonal skills while managing multiple priorities in a fast-paced and ever-changing environment. Physical Demands The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. The physical demands of this job require an individual to be able to work at a computer for most of the day, be able to participate in conference calls, and travel to team retreats from time to time. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Work Conditions The work environment is usually quiet (normal city traffic noises are common), a blend of artificial and natural light, temperate, and generally supports a collaborative work environment. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Equal Opportunity Employer At Cleantech Industry Resources, we embrace diversity and uphold a strong dedication to establishing an all-encompassing atmosphere for both our staff and associates.
Our choices in employment are free from any bias related to race, creed, nationality, ethnicity, gender, sexual orientation, gender identity, gender expression, age, physical limitations, veteran status, or any other legally safeguarded attributes. Being an integral part of Cleantech Industry Resources means you can expect to be immersed in a realm of professional possibilities within a culture that nurtures teamwork, adaptability, and the embracing of all.
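As a rough illustration of the geospatial preprocessing this role describes, the sketch below filters parcels near existing substations with GeoPandas; the file names, the CRS, and the 500 m buffer distance are assumptions made up for the example.

```python
# Illustrative geospatial preprocessing step: find candidate parcels within a
# setback distance of existing substations. Inputs and CRS are assumptions.
import geopandas as gpd

parcels = gpd.read_file("parcels.shp").to_crs(epsg=32644)        # projected CRS in metres
substations = gpd.read_file("substations.shp").to_crs(epsg=32644)

# Buffer substations by 500 m and keep parcels that intersect the buffer.
buffers = substations.copy()
buffers["geometry"] = buffers.geometry.buffer(500)
candidates = gpd.sjoin(parcels, buffers, predicate="intersects", how="inner")

candidates["area_ha"] = candidates.geometry.area / 10_000        # m^2 -> hectares
print(candidates[["area_ha"]].describe())
```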

Posted 3 days ago

Apply

0 years

0 Lacs

Nashik, Maharashtra, India

On-site

Linkedin logo

Job Summary The Admin Executive will be responsible for providing comprehensive administrative support to the entire office, managing day-to-day operations, coordinating office resources, and ensuring a productive work environment. This role requires excellent organizational skills, meticulous attention to detail, and the ability to handle multiple tasks efficiently while maintaining a professional demeanor. Responsibilities Office Management: Oversee daily office operations, ensuring a clean, organized, and functional work environment. Manage office supplies inventory , place orders, and ensure timely replenishment. Coordinate with vendors for office maintenance, repairs, and other services (e.g., cleaning, utilities, internet). Handle incoming and outgoing mail, couriers, and deliveries. Administrative Support: Provide administrative support to various departments and staff members as needed. Assist in preparing and formatting documents, presentations, reports, and correspondence. Maintain and update physical and electronic filing systems, ensuring confidentiality and easy retrieval of documents. Manage and organize appointments and calendars for senior staff or meeting rooms as required. Front Desk Communication: Act as the primary point of contact for visitors, clients, and vendors, providing a warm and professional welcome. Answer, screen, and direct incoming phone calls with a polite and efficient manner. Handle general email inquiries and forward them to the appropriate person. Record Keeping Data Entry: Accurately input and update data into various systems or databases. Maintain employee records, attendance, and leave management (basic support). Event Meeting Coordination: Assist in organizing and coordinating internal meetings, workshops, and company events, including venue booking, setup, and catering arrangements. Prepare basic meeting agendas and take minutes if required. Travel Coordination (if applicable): Assist with basic travel arrangements, such as booking local transportation or making initial inquiries for flights and accommodations. Adherence to Policies: Ensure compliance with company administrative policies and procedures. This job is provided by Shine.com

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

Remote

Linkedin logo

Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education. .NET Full Stack with AI Experience – 5+ years Job Locations – Kochi, Coimbatore, Chennai, Hyderabad, Mumbai, Pune Job Overview We are looking for a versatile Full Stack Developer with solid experience in .NET and Angular, who is also familiar with modern AI technologies, especially Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs). You will be part of a cross-functional team building intelligent, user-centric applications that integrate traditional development frameworks with cutting-edge AI/ML capabilities. Key Responsibilities Design, develop, and maintain scalable web applications using .NET (C#) and Angular. Collaborate with AI/ML teams to integrate RAG pipelines and LLM APIs (e.g., OpenAI, Azure OpenAI, Hugging Face) into applications. Build APIs and backend services that interface with vector databases, document stores, and language models. Develop secure, efficient, and modular code adhering to best practices and coding standards. Translate business requirements into functional components with intelligent AI-driven enhancements. Monitor application performance and troubleshoot production issues. Work with DevOps to automate and deploy AI-infused applications using CI/CD pipelines. Required Qualifications 5–10 years of full stack development experience with strong skills in: o .NET Core / ASP.NET o Angular Solid understanding of RESTful APIs, microservices, and modern web architecture. Experience with relational databases (SQL Server, PostgreSQL) and ORMs like Entity Framework. Experience integrating LLMs (OpenAI, Azure OpenAI, or other APIs) into applications. Familiarity with RAG architecture, including: o Document embedding o Vector search (e.g., using FAISS, Pinecone, or Azure Cognitive Search) o Prompt engineering for LLMs Good To Have Knowledge of LangChain, Semantic Kernel, or similar frameworks. Experience working with Azure, AWS, or GCP AI services. Familiarity with containerization (Docker/Kubernetes). Exposure to CI/CD tools and automated deployment pipelines. Microsoft Certified: Azure AI Engineer Associate (AI-102) Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Passion for staying current with emerging AI and software development trends. What We Offer Competitive salary and benefits. Flexible remote work options. Opportunity to work on cutting-edge AI-powered applications. Inclusive and collaborative team culture. Continuous learning and growth opportunities. Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
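For candidates new to the RAG architecture items above, this is a minimal sketch of the vector-search piece using FAISS; the random vectors stand in for real document embeddings, which in practice would come from an embedding model or API.

```python
# Sketch of the vector-search piece of a RAG backend (FAISS, inner-product index).
# Embeddings here are random placeholders; real ones come from an embedder.
import numpy as np
import faiss

dim = 384
doc_vectors = np.random.rand(1000, dim).astype("float32")   # stand-in for real embeddings
faiss.normalize_L2(doc_vectors)                              # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)   # top-5 nearest chunks
print(ids[0], scores[0])
```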
Candidate Privacy Policy Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.

Posted 3 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Gen AI Engineer – GenAI / ML (Python, Langchain) Full Time Location:Chennai / Pune / Bangalore / Noida / Gurgaon Overall Experience: 5–8 Years Focus : Hands-on engineering role focused on designing, building, and deploying Generative AI and LLM-based solutions. The role requires deep technical proficiency in Python and modern LLM frameworks with the ability to contribute to roadmap development and cross-functional collaboration. Key Responsibilities: Design and develop GenAI/LLM-based systems using tools such as Langchain and Retrieval-Augmented Generation (RAG) pipelines. Implement prompt engineering techniques and agent-based frameworks to deliver intelligent, context-aware solutions. Collaborate with the engineering team to shape and drive the technical roadmap for LLM initiatives. Translate business needs into scalable, production-ready AI solutions. Work closely with business SMEs and data teams to ensure alignment of AI models with real-world use cases. Contribute to architecture discussions, code reviews, and performance optimization. Skills Required: Proficient in Python, Langchain, and SQL. Understanding of LLM internals, including prompt tuning, embeddings, vector databases, and agent workflows. Background in machine learning or software engineering with a focus on system-level thinking. Experience working with cloud platforms like AWS, Azure, or GCP. Ability to work independently while collaborating effectively across teams. Excellent communication and stakeholder management skills. Preferred Qualifications: 1+ years of hands-on experience in LLMs and Generative AI techniques. Experience contributing to ML/AI product pipelines or end-to-end deployments. Familiarity with MLOps and scalable deployment patterns for AI models. Prior exposure to client-facing projects or cross-functional AI teams.
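One of the prompt engineering techniques this role lists, few-shot prompting, can be as simple as the helper below; the examples and task wording are invented for illustration only.

```python
# Few-shot prompt assembly: show the model labeled examples before the new input.
# The classification task and examples are illustrative assumptions.
FEW_SHOT_EXAMPLES = [
    ("The invoice is overdue by 45 days.", "billing"),
    ("I cannot log in after the password reset.", "account_access"),
]

def build_few_shot_prompt(text: str) -> str:
    lines = ["Classify each support message into a category.", ""]
    for message, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {message}\nCategory: {label}\n")
    lines.append(f"Message: {text}\nCategory:")
    return "\n".join(lines)

print(build_few_shot_prompt("My card was charged twice for the same order."))
```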

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Hello Connections, Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, operates in over 60 countries, and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida. Job Title: Mainframe COBOL Developer · Location: Bangalore, Chennai, Hyderabad, Noida, Coimbatore · Experience: 5 to 9 years (minimum 5 years relevant as a Mainframe COBOL Developer) · Job Type: Contract to hire. Work Mode: Hybrid (3 days in office) Office Timing: 1.00 PM to 10.30 PM · Notice Period: Immediate joiners (able to join by the 3rd week of July) Mandatory Skills: Mainframe COBOL Developer, JCL, DB2, VSAM, CICS (6 years relevant as a Mainframe COBOL Developer) Need to work from 1.00 PM to 10.30 PM IST Agile Software Development (typically Scrum, Kanban, SAFe) Banking domain experience of at least 4 years L1 Virtual interview - 28th June (Saturday) Roles and Responsibilities: COBOL Programming: Develop, maintain, and enhance COBOL applications that meet business requirements. Write, test, and debug COBOL code for high-performance batch and online processing. Modify and update existing COBOL applications to improve efficiency or add new features. JCL (Job Control Language): Create and maintain JCL scripts to manage batch jobs for data processing. Ensure that JCL is optimized for job scheduling, monitoring, and error handling. Troubleshoot and resolve JCL-related issues that impact batch processing. DB2 (Database): Design and develop DB2 queries to interact with databases, ensuring optimal performance. Integrate COBOL programs with DB2 for data retrieval, insertion, and updating. Ensure database integrity and handle SQL optimization for large-scale banking transactions. VSAM (Virtual Storage Access Method): Work with VSAM files to store and retrieve data efficiently. Ensure that COBOL programs interact seamlessly with VSAM files. Perform file management tasks such as creating, deleting, and maintaining VSAM datasets. CICS (Customer Information Control System): Develop and maintain CICS-based applications, ensuring seamless communication between online programs and data resources. Optimize transaction processing in a CICS environment, focusing on real-time banking applications. Debug and resolve any issues related to CICS transactions, ensuring minimal downtime. Agile Methodology: Participate in Agile ceremonies, including daily standups, sprint planning, and retrospectives. Collaborate with cross-functional teams to deliver features incrementally and meet sprint goals. Ensure timely delivery of COBOL-based solutions within Agile sprints. Banking Domain Knowledge: Develop software that aligns with banking regulations, business processes, and security standards. Ensure data accuracy and consistency in financial transactions, account management, and payment systems. Stay informed about changes in the banking domain and ensure the software complies with industry standards and regulations.
Testing and Documentation: Write unit tests and perform thorough testing of COBOL programs, ensuring high-quality output. Document code, workflows, and processes for future reference and regulatory purposes. Provide clear documentation for troubleshooting, maintenance, and knowledge sharing. Performance Optimization: Analyze the performance of COBOL applications and optimize them for speed and efficiency, particularly for high-volume banking transactions. Identify and resolve bottlenecks in the system. Collaboration and Communication: Work closely with business analysts, project managers, and other developers to understand business needs and translate them into technical solutions. Communicate effectively with stakeholders to manage expectations and provide updates on project progress.

Posted 3 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

🚀 We're Hiring: AI Engineers – Agentic AI + RAG Expertise 📍 Location: Chennai 📅 Experience: 8+ Years Are you passionate about building the future of AI? We’re looking for seasoned AI Engineers with hands-on experience in Agentic AI architectures and Retrieval-Augmented Generation (RAG) to join our growing team in Chennai . 🔍 What We’re Looking For: 8+ years of experience in AI/ML development Deep expertise in Agentic AI systems and frameworks Proven experience with RAG pipelines and LLM integration Strong programming skills in Python and familiarity with modern AI toolkits A collaborative mindset and a drive to innovate 🌟 Why Join Us? Work on cutting-edge AI solutions with real-world impact Collaborate with a passionate and forward-thinking team Competitive compensation and growth opportunities If you're ready to shape the next generation of intelligent systems, we want to hear from you! 📩 Apply now or refer someone amazing! - Jeevanrajn@ami.com #Hiring #AIJobs #AgenticAI #RAG #MachineLearning #ChennaiJobs #AIEngineering #TechCareers
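For readers unfamiliar with the term, an "agentic" system is essentially a loop in which a model chooses a tool, the system runs it, and the observation is fed back for the next step. The toy sketch below illustrates that loop; call_llm and the tool set are placeholders, not any particular framework.

```python
# Toy agentic loop: the model picks a tool, we run it, and feed the result back.
# `call_llm` is a stand-in for any chat-completion API; the tools are illustrative.
import json

def call_llm(messages):
    """Placeholder for a chat-completion call that returns a JSON action."""
    return '{"tool": "search", "input": "solar capacity India 2024"}'

TOOLS = {
    "search": lambda q: f"(search results for: {q})",
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def run_agent(task: str, max_steps: int = 3) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))
        if action["tool"] == "final":
            return action["input"]
        observation = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "assistant", "content": json.dumps(action)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "stopped after max_steps"
```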

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Exp – 15 to 23 yrs Location: Chennai / Bangalore Primary skill: GEN AI Architect, Building GEN AI solutions, Coding, AI/ML background, Data engineering, Azure or AWS cloud Job Description: The Generative Solutions Architect will be responsible for designing and implementing cutting-edge generative AI models and systems. He/She will collaborate with data scientists, engineers, product managers, and other stakeholders to develop innovative AI solutions for various applications including natural language processing (NLP), computer vision, and multimodal learning. This role requires a deep understanding of AI/ML theory, architecture design, and hands-on expertise with the latest generative models. Key Responsibilities: GenAI application conceptualization and design: Understand the use cases under consideration, conceptualize the application flow, understand the constraints, and design accordingly to get the most optimized results. Develop and implement applications using Retrieval-Augmented Generation (RAG)-based models, which combine the power of large language models (LLMs) with information retrieval techniques. Prompt Engineering: Be adept at prompt engineering and its various nuances such as one-shot, few-shot, and chain-of-thought prompting; have hands-on knowledge of implementing agentic workflows and be aware of agentic AI concepts. NLP and Language Model Integration - Apply advanced NLP techniques to preprocess, analyze, and extract meaningful information from large textual datasets. Integrate and leverage large language models such as LLaMA 2/3, Mistral, or similar offline LLM models to address project-specific goals. Small LLMs / Tiny LLMs: Familiarity with SLMs / Tiny LLMs such as Phi-3 and OpenELM, their performance characteristics and usage requirements, and how they can be consumed by use-case applications. Collaboration with Interdisciplinary Teams - Collaborate with cross-functional teams, including linguists, developers, and subject matter experts, to ensure seamless integration of language models into the project workflow. Text / Code Generation and Creative Applications - Explore creative applications of large language models, including text / code generation, summarization, and context-aware responses. Skills & Tools Programming Languages - Proficiency in Python for data analysis, statistical modeling, and machine learning. Machine Learning Libraries - Hands-on experience with machine learning libraries such as scikit-learn, Hugging Face, TensorFlow, and PyTorch. Statistical Analysis - Strong understanding of statistical techniques and their application in data analysis. Data Manipulation and Analysis - Expertise in data manipulation and analysis using Pandas and NumPy. Database Technologies - Familiarity with vector databases such as ChromaDB and Pinecone, SQL and NoSQL databases, and experience working with relational and non-relational databases. Data Visualization Tools - Proficient in data visualization tools such as Tableau, Matplotlib, or Seaborn. Familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling. Communication Skills - Excellent communication skills with the ability to convey technical concepts to non-technical audiences.
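As a small illustration of the vector database familiarity asked for above, here is a round trip with ChromaDB; the collection name and documents are made up for the example, and the default embedding function is used.

```python
# Minimal vector-store round trip with ChromaDB (in-memory client).
# Collection name and documents are illustrative assumptions.
import chromadb

client = chromadb.Client()
docs = client.create_collection("zoning_docs")

docs.add(
    ids=["d1", "d2"],
    documents=[
        "Setback requirements for ground-mounted solar are 30 feet from property lines.",
        "Battery storage systems require a conditional use permit in agricultural zones.",
    ],
)

hits = docs.query(query_texts=["how far must panels be from the boundary?"], n_results=1)
print(hits["documents"][0][0])
```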

Posted 3 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Gen AI Engineer – GenAI / ML (Python, Langchain) Location: Gurgaon (Onsite) Experience: 7–12 Years Role Overview: We are hiring a skilled Gen AI Engineer to join our team in developing and deploying cutting-edge LLM-based solutions . The ideal candidate will have a strong foundation in Python , hands-on experience with Langchain , and a deep understanding of Retrieval-Augmented Generation (RAG) pipelines. This role involves contributing to both technical implementation and strategic roadmap development, working closely with engineering, data, and business teams to deliver real-world GenAI applications. Key Responsibilities: Design and develop LLM/GenAI systems using tools like Langchain and RAG pipelines . Implement advanced prompt engineering and agent-based workflows for intelligent, adaptive solutions. Contribute to the design and development of the technical roadmap for LLM initiatives. Translate business problems into production-ready AI/ML solutions . Collaborate with business SMEs and data teams to align AI systems with practical use cases. Participate in architecture discussions , code reviews , and performance tuning . Core Skill Set: Strong proficiency in Python , Langchain , and SQL Solid understanding of LLM internals , including prompt tuning , embeddings , vector databases , and agent-based design Background in machine learning or software engineering with system-level thinking Experience with cloud platforms such as AWS , Azure , or GCP Ability to work independently and collaborate across teams effectively Excellent communication and stakeholder management skills Preferred Qualifications: Minimum 1 year of hands-on experience with LLMs and Generative AI techniques Proven experience in end-to-end ML/AI deployments or contributing to AI product pipelines Familiarity with MLOps practices and scalable AI system deployments Prior involvement in client-facing roles or cross-functional AI teams
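On the deployment side mentioned in this role, serving a GenAI pipeline often amounts to a thin API layer like the sketch below; answer_question is a hypothetical stand-in for the retrieval-plus-LLM chain built elsewhere.

```python
# Sketch of a thin serving layer for a GenAI pipeline (FastAPI).
# `answer_question` is a placeholder for the actual RAG chain.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str
    top_k: int = 3

def answer_question(question: str, top_k: int) -> str:
    """Placeholder for retrieval + LLM call."""
    return f"(answer for: {question})"

@app.post("/ask")
def ask(q: Query):
    return {"answer": answer_question(q.question, q.top_k)}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```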

Posted 3 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Primary skill :- NVIDIA Solution Architect, GEN / AI Architect, Azure or AWS cloud. Relevant Exp :- NVIDIA ( 2 to 3 yrs ) Location :- Chennai / Noida. As an NVIDIA Generative AI Solution Architect at , you will lead the design, development, and deployment of AI solutions leveraging NVIDIA’s Edge AI, Computer Vision, Generative AI, and Metropolis technologies . You will collaborate with cross-functional teams and customers to architect scalable, high-performance AI systems integrating real-time computer vision, generative AI workflows, and industrial digital twins on edge, cloud, and metaverse platforms. Key Responsibilities Architect and deliver end-to-end AI solutions using NVIDIA’s AI Enterprise software, NeMo framework, Triton Inference Server, and GPU-accelerated platforms. Design and implement AI pipelines optimized for edge devices (NVIDIA Jetson, Clara), cloud infrastructure (AWS, Azure, GCP), and data centers (NVIDIA DGX). Develop and showcase proof-of-concept solutions using large language models (LLMs), retrieval-augmented generation (RAG), and advanced computer vision models for object detection, segmentation, and video analytics. Utilize NVIDIA Metropolis platform capabilities to architect AI-powered video analytics and smart city solutions, leveraging edge-to-cloud pipelines for real-time insights and automation. Optimize AI inference workloads using CUDA, TensorRT, mixed precision, and model quantization to meet stringent latency and throughput SLAs. Collaborate with company engineering, product, and client teams to embed NVIDIA AI technologies into enterprise workflows and industrial applications. Provide technical leadership, training, and mentorship on NVIDIA SDKs, AI best practices, and solution deployment strategies. Stay abreast of NVIDIA’s product roadmap, AI research trends, and industrial AI innovations to drive continuous solution improvement. Support customer engagements including technical workshops, solution demonstrations, and architectural reviews. Ensure adherence to data privacy, security, and ethical AI standards throughout the solution lifecycle. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related technical field. 5+ years of experience architecting and deploying AI/ML solutions with strong expertise in NVIDIA AI platforms (NeMo, Triton, CUDA, TensorRT). Proven experience with generative AI technologies including large language models, prompt engineering, and RAG workflows. Strong background in computer vision applications, including object detection, segmentation, and video analytics frameworks. Hands-on experience deploying AI solutions on edge devices (NVIDIA Jetson, Clara), cloud platforms (Azure, AWS, GCP), and data center GPU infrastructure. Familiarity with NVIDIA Metropolis platform for AI-powered video analytics and smart infrastructure solutions. Proficiency in Python, C++, and deep learning frameworks such as PyTorch or TensorFlow. Experience with container orchestration (Kubernetes, Docker) and MLOps practices including CI/CD pipelines for AI workloads. Excellent communication skills for engaging technical teams and business stakeholders. Willingness to travel up to 15% for client and NVIDIA events. Preferred Skills Experience optimizing AI inference with TensorRT, mixed precision, and model quantization. Knowledge of AI ethics, bias mitigation, and responsible AI principles. Prior experience in industrial, manufacturing, smart cities, or healthcare domains. 
Certifications related to NVIDIA AI technologies or cloud platforms (AWS, Azure, GCP). Experience working in global, cross-cultural teams.
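One of the inference optimizations this posting lists, mixed precision, looks roughly like the sketch below in plain PyTorch; TensorRT and Triton specifics are omitted, the model choice is illustrative, and a CUDA GPU is assumed.

```python
# Illustrative mixed-precision inference step in PyTorch (requires a CUDA GPU).
# The model is a stand-in; TensorRT/Triton deployment details are out of scope here.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval().cuda()   # any vision model works
batch = torch.randn(8, 3, 224, 224, device="cuda")

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(batch)   # convolutions/matmuls run in FP16, sensitive ops stay in FP32

print(logits.shape, logits.dtype)
```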

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies