4.0 - 9.0 years
8 - 13 Lacs
Mumbai
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role
Understand client business requirements and propose suitable approaches to solve business problems across different domains.
Develop strategies to solve problems in logical yet creative ways.
Help manage teams of data scientists and produce project deliverables.
Process, cleanse and verify the integrity of data used for analysis.
Perform data mining and EDA using state-of-the-art methods.
Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
Help enhance data collection procedures to include information relevant for building analytical systems.
Carry out ad-hoc analysis and present results clearly.
Follow and help improve analytics delivery procedures using detailed documentation.
Create custom reports and presentations backed by strong data visualization and storytelling.
Present analytical conclusions to senior officials in a company and other stakeholders.

Your Profile
Education: Engineering graduates (B.E./B.Tech) or other graduates (Mathematics, Statistics, Operational Research, Computer Science & Applications).
Minimum 4 years of relevant experience.
Experience in data management and visualization tasks.
Experience with common data science toolkits such as NumPy, Pandas, statsmodels, scikit-learn, SciPy, NLTK, spaCy, Gensim, OpenCV, MLflow, TensorFlow, PyTorch, etc.
Excellent understanding of and hands-on experience in NLP and computer vision.
Good understanding of and hands-on experience building deep learning models for text and image analytics (ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
Knowledge of or experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, H2O, etc.
Knowledge of or hands-on exposure to large-scale datasets and related tools and frameworks such as Hadoop, PySpark, etc.
Knowledge of or hands-on experience with deploying models to production.
Hands-on experience in GenAI.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
The Group reported 2023 global revenues of €22.5 billion.
Posted 2 weeks ago
3.0 - 7.0 years
3 - 8 Lacs
Chennai
Work from Office
Job Title: Senior Programmer - AI & Data Engineering
Location: Work from Office
Experience Required: 3+ Years
Job Type: Full-Time
Department: Technology / Engineering

Job Summary:
We are seeking a highly skilled and motivated Senior Programmer with a strong background in AI development, Python programming, and data engineering. The ideal candidate will have hands-on experience with OpenAI models, machine learning, prompt engineering, and frameworks such as NLTK, Pandas, and NumPy. You will work on developing intelligent systems, integrating APIs, and deploying scalable solutions using modern data and cloud technologies.

Key Responsibilities:
Design, develop, and optimize intelligent applications using OpenAI APIs and machine learning models.
Create and refine prompts for prompt engineering to extract desired outputs from LLMs (Large Language Models).
Build and maintain scalable, reusable, and secure REST APIs for AI and data applications.
Work with large datasets using Pandas, NumPy, and SQL, and integrate text analytics using NLTK (a brief sketch of this kind of workflow appears below).
Collaborate with cross-functional teams to understand requirements and translate them into technical solutions.
Use the Function Framework to encapsulate business logic and automate workflows.
Apply basic knowledge of cloud platforms (AWS, Azure, or GCP) for deployment and scaling.
Assist in data integration, processing, and transformation for Big Data systems.
Write clean, maintainable, and efficient Python code.
Conduct code reviews, mentor junior developers, and lead small projects as needed.

Required Skills & Qualifications:
Minimum 3 years of experience in Python development with a strong focus on AI and ML.
Proven expertise in OpenAI tools and APIs.
Hands-on experience with machine learning models and prompt engineering techniques.
Solid programming skills in Python, along with libraries like Pandas, NumPy, and NLTK.
Experience developing and integrating REST APIs.
Working knowledge of SQL and relational database systems.
Familiarity with function frameworks and modular design patterns.
Basic understanding of cloud platforms (AWS/GCP/Azure) and Big Data concepts.
Strong problem-solving skills and ability to work in a fast-paced environment.
Excellent communication and collaboration skills.

Preferred Qualifications:
Exposure to Docker, Kubernetes, or similar container orchestration tools.
Understanding of MLOps, data pipelines, or cloud-based AI deployments.
Experience with version control systems like Git and CI/CD pipelines.
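For illustration only, a minimal sketch of the Pandas-plus-NLTK style of text analytics mentioned in the responsibilities above; the DataFrame, column names, and sample rows are hypothetical stand-ins for data that would normally be loaded via SQL, and the only NLTK resource assumed is the stopwords corpus:

```python
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize

# One-time download of the stopword list (no-op if already present)
nltk.download("stopwords", quiet=True)

# Hypothetical dataset standing in for a table queried via SQL
df = pd.DataFrame({
    "ticket_id": [1, 2, 3],
    "text": [
        "The billing API returned an error twice today.",
        "Great support, the issue was resolved quickly!",
        "Deployment failed after the latest update.",
    ],
})

stop_words = set(stopwords.words("english"))

def clean_tokens(text: str) -> list:
    # Lowercase, tokenize, and keep alphabetic non-stopword tokens
    return [w for w in wordpunct_tokenize(text.lower())
            if w.isalpha() and w not in stop_words]

df["tokens"] = df["text"].apply(clean_tokens)

# Corpus-level term frequencies as a simple first pass at text analytics
freq = nltk.FreqDist(w for toks in df["tokens"] for w in toks)
print(freq.most_common(5))
```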
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
The Machine Learning Engineer position at our fast-growing AI/ML startup is ideal for a highly skilled and self-motivated individual with at least 4 years of experience. As a Machine Learning Engineer, you will be responsible for designing and deploying intelligent systems and advanced algorithms tailored to real-world business problems across diverse industries. The role requires a creative thinker with a strong mathematical foundation, hands-on experience in machine learning and deep learning, and the ability to work independently in a dynamic, agile environment.

Your key responsibilities will include collaborating with cross-functional teams to design and develop machine learning and deep learning algorithms, translating complex client problems into mathematical models, building data pipelines and automated classification systems, conducting data mining, and applying supervised/unsupervised learning to extract meaningful insights. You will also be responsible for performing Exploratory Data Analysis (EDA), hypothesis generation, and pattern recognition, developing and implementing Natural Language Processing (NLP) techniques, extending and customizing ML libraries/frameworks, visualizing analytical findings, developing and integrating APIs, providing technical documentation and support, and staying updated with the latest trends in AI/ML.

To qualify for this role, you should have a B.Tech/BE or M.Tech/MS in Computer Science, Computer Engineering, or a related field; a solid understanding of data structures, algorithms, probability, and statistical methods; proficiency in Python, R, or Java; hands-on experience with ML/DL frameworks and libraries; experience with cloud services; a strong grasp of NLP, predictive analytics, and deep learning algorithms; familiarity with big data technologies; expertise in building and deploying scalable AI/ML models; exceptional analytical, problem-solving, and communication skills; and a strong portfolio of applied ML use cases.

Joining our team will provide you with the opportunity to work at the forefront of AI innovation, be part of a high-impact team driving AI solutions across industries, enjoy a flexible remote working culture with autonomy and ownership, receive competitive compensation, growth opportunities, and access to cutting-edge technology, and embrace our culture of Learning, Engaging, Achieving, and Pioneering (LEAP) in every project you touch. Apply now to be a part of our innovative and collaborative team!
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Bosch Software Engineering, Data Science, Pune, Maharashtra, India (posted on Jul 14, 2025)

Company Description
Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With over 28,200 associates, it is the largest software development center of Bosch outside Germany, making it the technology powerhouse of Bosch in India, with a global footprint and presence in the US, Europe and the Asia Pacific region.

Job Description
Education and Work Experience Requirements:
5 to 8 years of experience as a Data Scientist.
2 to 3 years of experience in Generative AI solution development.
Strong understanding of AI agent collaboration, negotiation, and autonomous decision-making.
Experience in developing and deploying AI agents that operate independently or collaboratively in complex environments.
Deep knowledge of agentic AI principles, including self-improving, self-organizing, and goal-driven agents.
Proficiency in multi-agent frameworks such as AutoGen, LangGraph, LangChain, and CrewAI for orchestrating AI workflows.
Hands-on experience integrating LLMs (GPT, LLaMA, Mistral, etc.) with agentic frameworks to enhance automation and reasoning.
Expertise in hierarchical agent frameworks, distributed agent coordination, and decentralized AI governance.
Strong grasp of memory architectures, tool use, and action planning within AI agents.
Familiarity with evaluation metrics for agentic AI systems, such as:
- Autonomy Score: measures the degree of independence in decision-making.
- Collaboration Efficiency: evaluates the ability of agents to work together and share information.
- Task Completion Rate: tracks the percentage of tasks successfully executed by agents.
- Response Time: measures the latency in agent decision-making and execution.
- Adaptability Index: assesses how well agents adjust to dynamic changes in the environment.
- Resource Utilization Efficiency: evaluates computational and memory usage for optimization.
- Explainability & Interpretability Score: ensures transparency in agent reasoning and outputs.
- Error Rate & Recovery Time: tracks failures and the system's ability to self-correct.
- Knowledge Retention & Utilization: measures how effectively agents recall and apply information.
Hands-on experience with LLMs such as GPT, BERT, LLaMA, Mistral, Claude, Gemini, etc.
Proven expertise in both open-source (LLaMA, Gemma, Mixtral) and closed-source (OpenAI GPT, Azure OpenAI, Claude, Gemini) LLMs.
Advanced skills in prompt engineering, tuning, retrieval-augmented generation (RAG), reinforcement learning (RAFT), and LLM fine-tuning (PEFT, LoRA, QLoRA).
Strong understanding of small language models (SLMs) like Phi-3 and BERT, along with Transformer architectures.
Experience working with text-to-image models such as Stable Diffusion, DALL·E, and Midjourney.
Proficiency in vector databases such as Pinecone and Qdrant for knowledge retrieval in agentic AI systems.
Deep understanding of Human-Machine Interaction (HMI) frameworks within cloud and on-prem environments.
Strong grasp of deep learning architectures, including CNNs, RNNs, Transformers, GANs, and VAEs.
Expertise in Python, R, TensorFlow, Keras, and PyTorch.
Hands-on experience with NLP tools and libraries: OpenNLP, CoreNLP, WordNet, NLTK, spaCy, Gensim, knowledge graphs, and LLM-based applications.
Proficiency in advanced statistical methods and transformer-based text processing.
Experience in reinforcement learning and planning techniques for autonomous agent behavior.

Mandatory Skills:
Design, develop, test, and deploy machine learning models using state-of-the-art algorithms with a strong focus on language models.
Strong understanding of LLMs and associated technologies like RAG, agents, vector databases and guardrails.
Hands-on experience in GenAI frameworks like LlamaIndex, LangChain, AutoGen, etc.
Experience in cloud services like Azure, GCP and AWS.
Multi-agent frameworks: AutoGen, LangGraph, LangChain, CrewAI.
Large Language Models (LLMs): GPT.

Qualifications
BE, BTech, MTech

Additional Information
7-11 years
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Bosch Software Engineering, Data Science, Pune, Maharashtra, India (posted on Jul 14, 2025)

Company Description
Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With over 28,200 associates, it is the largest software development center of Bosch outside Germany, making it the technology powerhouse of Bosch in India, with a global footprint and presence in the US, Europe and the Asia Pacific region.

Job Description
Education and Work Experience Requirements:
5 to 8 years of experience as a Data Scientist or GenAI specialist.
2 to 3 years of experience in Generative AI solution development.
Proven track record and experience with GenAI technologies:
- Open-source LLMs like Llama, Gemma, Mixtral, etc.
- Closed-source LLMs such as OpenAI GPT, Azure OpenAI, Claude, Gemini, etc.
- Prompt engineering/tuning, RAG, RAFT, and LLM fine-tuning such as PEFT (LoRA, QLoRA, etc.).
- Understanding of SLMs such as Phi-3 and BERT, and of the Transformer architecture.
- Vector databases like Pinecone, Qdrant, etc.
Good knowledge of advanced statistical methods.
Experience working with text data using transformer-based models.
Expertise with the following languages and libraries:
- Python, R, TensorFlow, Keras, PyTorch
- OpenNLP, CoreNLP, WordNet, NLTK, spaCy, Gensim, large language models, knowledge graphs
Good knowledge and experience of machine learning algorithms and the ability to apply them in supervised and unsupervised NLP tasks.
Knowledge of NLP algorithms that can handle various NLP tasks such as intent recognition, entity extraction, language modeling, text classification, question answering, text summarization, topic modeling and so on.
Experience building and fine-tuning language models (LMs) such as BERT, ELMo, XLNet, etc. to solve bespoke NLP tasks.
Tech-savvy and willing to work with open-source tools.
Should have independently handled a project technically and provided direction to other team members; able to lead a project independently.
Experience in turning ideas into actionable designs.
Able to persuade stakeholders and champion effective techniques through development.
Strong interpersonal and communication skills: ability to tell a clear, concise, actionable story with data to people across various levels of the company.
Good to have: foundational knowledge of cloud and API frameworks such as Flask, FastAPI, and Swagger/Postman tools.
Prior experience working in the Mobility or Healthcare domain will be a plus.

Mandatory Skills:
Design, develop, test, and deploy machine learning models using state-of-the-art algorithms with a strong focus on language models.
Strong understanding of LLMs and associated technologies like RAG, agents, vector databases and guardrails.
Hands-on experience in GenAI frameworks like LlamaIndex, LangChain, AutoGen, etc.
Experience in cloud services like Azure, GCP and AWS.
Interact with our research team and with key partners in the market to build end-to-end AI/ML/NLP solutions: conversational AI, document understanding and Q&A.
Mine and analyze data, applying statistical methods as necessary, pertaining to customers' discovery and viewing experiences to identify critical product insights.
Proactively develop new metrics and studies to quantify the value of different aspects of the product.
Drive efforts to enable product and engineering leaders to share your knowledge and insights through clear and concise communication, education, and data visualization.
Translate analytic insights into concrete, actionable recommendations for business or product improvement.
Build and improve reusable tools and modelling pipelines and support knowledge sharing across several teams.
Define and deploy best practices in machine learning and MLOps/LLMOps; mentor and teach colleagues.
Partner closely with product and engineering leaders throughout the lifecycle of a project.

Additional Information: A Master's or PhD (Computer Science, Machine Learning, Mechanical, etc.) from a Tier 1 institution is preferred. If a Bachelor's degree, only a Machine Learning or Computer Science specialization.

Qualifications
BE, BTech, MCA, MTech

Additional Information
5-8 years
Posted 2 weeks ago
4.0 - 9.0 years
13 - 17 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid Overview
As a Senior AI Engineer in Data Sciences, you will play a crucial role in designing, implementing, and optimizing AI solutions in production. Additionally, you'll apply best practices in software design, participate in code reviews, and create a maintainable, well-tested codebase with relevant documentation. We will look to you to understand and actively follow foundational programming principles (best practices, unit tests, code organization, the basics of CI/CD, etc.) and create a well-maintainable and tested codebase with relevant documentation. You will get the opportunity to develop in one or more approved programming languages (Java, Scala, Python, R), and learn and adhere to best practices in data analysis and data understanding.

Team Overview
The Content Creation team within Data Sciences will help build AI solutions leveraging GenAI, computer vision and traditional ML models for various marketing or site content use cases. The team will play a crucial role in helping Target drive relevancy with guests.

Position Overview
Support Development of Generative AI Applications: Architect and develop advanced generative AI solutions that support business objectives, ensuring high performance and scalability.
Performance Tuning & Optimization: Identify bottlenecks in applications and implement strategies to improve performance. Optimize machine learning models for efficiency in production environments.
Collaborate Cross-Functionally: Work closely with data scientists, product managers, and other stakeholders to gather requirements and transform them into robust technical solutions.
Research & Innovation: Stay abreast of the latest advancements in the field of Artificial Intelligence. Propose new ideas that could lead to innovations within the organization.
Deployment & Scaling Strategies: Support the deployment process of applications on cloud platforms while ensuring they are scalable to handle increasing loads without compromising performance.
Documentation & Quality Assurance: Develop comprehensive documentation for projects undertaken. Implement rigorous testing methodologies to ensure high-quality deliverables.
About You
4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) or equivalent experience.
Master's in Computer Science, or over 3 years of experience in end-to-end application development, data exploration, data pipelining, API design, and optimization of model latency.
Experience working with image and text data, embeddings, building and deploying vision models, and integrating with Gen AI services.
Experience working with machine learning frameworks such as TensorFlow, PyTorch or similar libraries.
Hands-on experience in libraries like NumPy, SciPy, Pandas, OpenCV, spaCy, and NLTK is expected.
Good understanding of Big Data and distributed architecture, specifically Hadoop, Hive, Spark, Docker, Kubernetes and Kafka.
Experience working on GPUs is preferred.
Experience optimizing machine learning models for performance improvements across various platforms, including cloud services (AWS, Google Cloud Platform).
Expertise in MLOps frameworks and hands-on experience with MLOps tools like Kubeflow, MLflow or SageMaker.
Excellent problem-solving skills combined with strong analytical abilities.

Know more here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging
Posted 2 weeks ago
4.0 - 9.0 years
14 - 19 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid Overview
As a Lead AI Engineer in Data Sciences, you will play a crucial role in designing, implementing, and optimizing AI solutions in production. Additionally, you'll apply best practices in software design, participate in code reviews, and create a maintainable, well-tested codebase with relevant documentation. We will look to you to understand and actively follow foundational programming principles (best practices, unit tests, code organization, the basics of CI/CD, etc.) and create a well-maintainable and tested codebase with relevant documentation. You will get the opportunity to develop in one or more approved programming languages (Java, Scala, Python, R) and learn and adhere to best practices in data analysis and data understanding.

Team Overview
The Content Creation team within Data Sciences will help build AI solutions leveraging GenAI, computer vision and traditional ML models for various marketing or site content use cases. The team will play a crucial role in helping Target drive relevancy with guests.

Position Overview
Lead Development of Generative AI Applications: Architect and develop advanced generative AI solutions that support business objectives, ensuring high performance and scalability.
Performance Tuning & Optimization: Identify bottlenecks in applications and implement strategies to improve performance. Optimize machine learning models for efficiency in production environments.
Collaborate Cross-Functionally: Work closely with data scientists, product managers, and other stakeholders to gather requirements and transform them into robust technical solutions.
Mentor Junior Engineers: Provide guidance and mentorship to team members on best practices in coding standards, architectural design, and machine learning techniques.
Research & Innovation: Stay abreast of the latest advancements in the field of Artificial Intelligence. Propose new ideas that could lead to innovations within the organization.
Deployment & Scaling Strategies: Lead the deployment process of applications on cloud platforms while ensuring they are scalable to handle increasing loads without compromising performance.
Documentation & Quality Assurance: Develop comprehensive documentation for projects undertaken. Implement rigorous testing methodologies to ensure high-quality deliverables.
About You
4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) or equivalent experience.
Master's in Computer Science or equivalent industry experience.
Over 6 years of experience in end-to-end application development, data exploration, data pipelining, API design, and optimization of model latency in production environments at scale.
Experience working with image and text data, embeddings, building and deploying vision models, and integrating with Gen AI services.
Strong expertise in machine learning frameworks such as TensorFlow, PyTorch or similar libraries; experience with Generative Adversarial Networks or diffusion models is desirable.
Hands-on experience in libraries like NumPy, SciPy, Pandas, OpenCV, spaCy, and NLTK is expected.
Experience in optimizing ML model performance using techniques such as hyperparameter tuning, feature selection, and distributed training.
Good understanding of Big Data and distributed architecture, specifically Hadoop, Hive, Spark, Docker, Kubernetes and Kafka.
Experience working on GPUs is preferred.
Proven track record of optimizing machine learning models for performance improvements across various platforms, including cloud services (AWS, Google Cloud Platform).
Expertise in MLOps frameworks and hands-on experience with MLOps tools like Kubeflow, MLflow or SageMaker.
Deep understanding of system architecture principles related to scalability and robust application design.
Excellent communication and problem-solving skills combined with strong analytical abilities.
Proven leadership skills with experience mentoring junior engineers effectively.

Know more here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging
Posted 2 weeks ago
1.0 years
0 Lacs
Vadodara
On-site
Responsibilities
Assist in designing and refining prompts for large language models (LLMs) to achieve accurate outputs.
Support data labeling, text preprocessing, and data preparation for NLP workflows.
Help evaluate prompt performance and suggest improvements based on testing.
Contribute to small-scale NLP experiments and proof-of-concept studies.
Document prompt variations and results clearly for future reference.
Collaborate with senior engineers and data scientists to integrate prompts into applications.
Assist in maintaining prompt libraries and managing version control.
Participate in team brainstorming sessions for new NLP features or improvements.

Skills
Must-have
1+ year of experience in natural language processing and prompt engineering focused on LLMs.
Hands-on experience in Python, leveraging NLTK or spaCy for text processing (a brief sketch of this kind of preprocessing appears below).
Good-to-have
Exposure to OpenAI API or similar LLM services for prompt design and testing.
Familiarity with Hugging Face Transformers for basic fine-tuning and experimentation.
Will be a plus
Understanding of LangChain (basic) and Pinecone for vector-based retrieval tasks.
Experience with tools like Streamlit or Gradio for simple NLP demos.
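For illustration only, a minimal sketch of the kind of spaCy-based text preprocessing the must-have skills describe; the sample sentences are hypothetical, and it assumes the small English pipeline has been installed with `python -m spacy download en_core_web_sm`:

```python
import spacy

# Load the small English pipeline (assumed to be installed already)
nlp = spacy.load("en_core_web_sm")

raw_texts = [
    "The new prompt produced far better summaries than last week's version.",
    "Users in Berlin reported latency spikes after the March release.",
]

for doc in nlp.pipe(raw_texts):
    # Keep lowercased lemmas of alphabetic, non-stopword tokens
    tokens = [t.lemma_.lower() for t in doc if t.is_alpha and not t.is_stop]
    # Named entities often become structured fields in labeling workflows
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    print(tokens, entities)
```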
Posted 2 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The candidate should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Responsibilities
Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making by the FCSD business team.
Design and implement data analysis and ML models, hypotheses, algorithms and experiments to support data-driven decision-making.
Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends.
Design efficient data loading, data augmentation and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation.
Research, study and stay updated in the domain of data science, machine learning, analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy and robustness.

Qualifications
Master's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
Proficient in querying and analyzing large datasets using BigQuery on GCP.
Strong Python skills for data wrangling and automation.
2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
2+ years of experience with both supervised and unsupervised machine learning techniques.
2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or QlikSense.
1+ years of experience with the SQL programming language and relational databases.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Role: AI Engineer Intern (Full-time Internship, Remote)
This is a remote, full-time, paid internship for an AI Engineer. You will help us push the boundaries of what LLMs can do by designing, testing, and optimizing prompts, building multi-step prompt pipelines, writing scaffolding code around LLM calls, benchmarking outputs, and integrating AI features into real-world products using Python and NLP techniques.

Responsibilities
Prompt Crafting: Design, edit, and refine prompts for different use cases and models.
Prompt Chaining: Break down complex tasks into smaller subtasks and build effective multi-step prompt workflows (e.g., summarization → critique → rewrite); a minimal sketch of such a chain appears below.
Benchmarking & Evaluation: Use Python (scripts and notebooks) to run automated performance benchmarks based on KPIs like accuracy, cost, and latency.
Feature Building: Write Python scaffolding to integrate LLM calls into usable product features and pipelines.
Hybrid NLP: Use traditional NLP techniques (e.g., regex, spaCy, NLTK) alongside LLMs to improve output quality, preprocessing, or efficiency.
Iterative Improvement: Run A/B tests, gather output samples, and tweak prompts or logic based on failure cases and edge conditions.
KPI Optimization: Ensure prompt chains and model outputs meet goals like quality, relevance, length, and compute cost.
Model Awareness: Stay updated with the latest in GPT, Claude, Gemini, and open-source LLMs.
Tooling & Automation: Build or use lightweight tooling for prompt testing, logging, and result comparison.
Documentation: Maintain a structured prompt and workflow logbook with evaluations, learnings, and architecture.

Salary: Rs. 15,000/month
Duration: 3–6 months

Must-Have Qualifications
Strong Python scripting and automation experience.
Experience using OpenAI or other LLM APIs.
Ability to build small-scale tools or workflows that integrate and manage prompt-based logic.
Understanding of prompt chaining or multi-step reasoning with LLMs.
Familiarity with Jupyter/Colab for fast prototyping.
Awareness of basic NLP techniques and when to combine them with LLM outputs.
Comfort with debugging and improving outputs using test inputs and edge cases.
Strong communication and analytical skills.
Familiarity with basic evaluation techniques (BLEU, ROUGE, token count, etc.).

Nice-to-Have
LangChain or similar framework experience.
Experience working with vector DBs, retrieval-augmented generation (RAG), or memory components.
Knowledge of managing context windows, formatting outputs, or chaining across models.
Open-source contributions, AI blog posts, or published prompt collections.

Ideal For
Final-year students who have a semester break for an internship, or recent graduates ambitious about AI/LLMs and looking to develop real-world AI engineering, prompt design, and hybrid NLP-LLM skills. You'll work closely with founders to build robust, high-performing AI workflows and features for production-grade products. PPO offer after a successful internship.
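For illustration only, a minimal sketch of the summarize-then-critique-then-rewrite style of prompt chain described in the responsibilities above; it assumes the OpenAI Python SDK with an API key in the environment, and the model name, prompts, and placeholder text are hypothetical choices rather than fixed requirements:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # hypothetical choice; any chat-completion model works

def ask(prompt: str) -> str:
    # A single LLM call; each chain step is just another call with a new prompt
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

article = "..."  # placeholder for the source text to be processed

# Step 1: summarize
summary = ask(f"Summarize the following text in 3 sentences:\n\n{article}")

# Step 2: critique the summary
critique = ask(f"List factual gaps or unclear points in this summary:\n\n{summary}")

# Step 3: rewrite the summary using the critique
final = ask(
    "Rewrite the summary below so it addresses the listed critique points.\n\n"
    f"Summary:\n{summary}\n\nCritique:\n{critique}"
)
print(final)
```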
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.

What We Offer
Interact with C-level at our clients on a regular basis to drive their business towards impactful change.
Lead your team in creating new business solutions.
Seize opportunities at the client and at Metyis in our entrepreneurial environment.
Become part of a fast-growing, international and diverse team.

What You Will Do
Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
Design solution architectures and guide the technical approach across projects.
Align technical deliverables with business goals, ensuring data products create measurable business value.
Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle.
Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.

What You'll Bring
6+ years of experience in delivering full-lifecycle data science projects.
Proven ability to lead cross-functional teams and manage client interactions independently.
Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
Experience with stakeholder management, translating business questions into data science solutions.
Track record of mentoring junior team members and creating a collaborative learning environment.
Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
Experience managing project roadmaps, resourcing, and client communication.

Tools & Technologies:
Strong hands-on experience in Python/R and SQL.
Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
Experience with data visualization tools in Python such as Seaborn and Plotly.
Good understanding of Git concepts.
Good experience with data manipulation tools in Python such as Pandas and NumPy.
Must have worked with scikit-learn, NLTK, spaCy, and transformers.
Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.

Core Competencies:
Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
Good understanding of deep learning concepts, especially in NLP and computer vision applications.
Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
Exposure to tools like MLflow, model deployment, API integration, and CI/CD pipelines.
Hands-on experience with MLOps and model governance best practices in production environments.
Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.

Good to have:
Generative AI experience with text and image data.
Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).

In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Vadodara, Gujarat
On-site
Navaera Worldwide is a global, full-service firm specializing in advanced knowledge management products and services tailored to assist financial organizations in enhancing operational efficiency, managing risk, detecting fraud, and gaining a competitive advantage. The company, a privately held entity, boasts a diverse client base across the globe, including major corporations within the financial services industry. Navaera Worldwide, under the brand Navaera Global Services, offers sophisticated business products and solutions to organizations of varying sizes. With offices situated across three continents, Navaera Worldwide's global headquarters is based in New York, with additional locations in Toronto, Canada, and Baroda, India.

As a Specialist in Emerging Systems Engineering, you are expected to have a solid 3 to 5 years of experience and be adept at working in areas such as data analytics, computer vision, NLP, and OCR-related tasks. Your responsibilities will include data collection, dataset preparation, architecture modeling, model training/testing, and deployment. Additionally, you should have experience in managing digital data extraction and processing, creating training and testing pipelines with diverse data aspects, and developing model architectures from scratch or modifying pre-trained models.

The ideal candidate must stay updated with the latest AI trends to ensure operational processes remain current. You will be required to identify opportunities for process enhancements, propose system modifications, and work with minimal technical supervision while efficiently managing multiple program priorities. Collaboration within a team environment is crucial, as you will be tasked with driving cross-team solutions that may have complex dependencies and requirements.

Your qualifications should include a B.E./B.Tech in Computer Science/Statistics/IT/ECE, with an M.S. or Ph.D. in Applied Mathematics, Statistics, Computer Science, Operations Research, Economics, or equivalent. You should have 3-5 years of hands-on experience in data science, machine learning, and deep learning, along with proficiency in C/C++/Java/Python for developing machine learning and deep learning solutions. Experience in building and deploying machine learning solutions using various supervised and unsupervised ML algorithms is essential.

Moreover, you should have at least 3-5 years of practical experience with Python and frameworks such as TensorFlow, PyTorch, or MXNet. A good command of ML libraries like scikit-learn, NumPy, pandas, and Keras, as well as natural language processing using NLTK, spaCy, and Gensim, is required. Strong communication and interpersonal skills, familiarity with decision science tools and techniques, excellent problem-solving abilities, and the capacity to work efficiently in a fast-paced, deadline-driven environment are all highly valued in this role. Demonstrating a strong work ethic, a sense of collaboration, and ownership is imperative for success within the organization.
Posted 2 weeks ago
3.0 years
7 - 9 Lacs
Chandigarh
On-site
Job Title: Data Scientist
Location: Mohali
Employment Type: Full-Time

About the Role
We're looking for a skilled and driven Data Scientist to join our team in Mohali. This role focuses on building and optimizing machine learning models, from classical approaches to advanced neural networks, for real-world applications. You'll work with large datasets, uncover insights, and help build data products that scale.

Key Responsibilities
Design, develop, and evaluate supervised and unsupervised machine learning models.
Collaborate with data and product teams to deliver end-to-end ML solutions.
Analyze large datasets using Python and SQL to generate actionable insights.
Continuously monitor and improve model performance using real-world data.
Maintain thorough documentation of models, experiments, and processes.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
Strong Python programming skills (scikit-learn, TensorFlow/PyTorch, pandas, NumPy, etc.).
Proficiency in SQL for data exploration and manipulation.
Solid understanding of core ML techniques: classification, regression, clustering, and dimensionality reduction.
Experience with NLP/LLM libraries such as Hugging Face, spaCy, NLTK, or OpenAI APIs.
Strong foundation in statistics, probability, and hypothesis testing.

Preferred
Experience with vector databases, embeddings, and fine-tuning or prompt engineering for LLMs.
Familiarity with cloud services (AWS, GCP, or Azure) is a plus.

Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹900,000.00 per year
Benefits: Paid sick time, Provident Fund
Schedule: Day shift, Morning shift
Ability to commute/relocate: Chandigarh, Chandigarh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Scientist: 3 years (Required); Data science: 3 years (Required)
Language: English (Required)
Location: Chandigarh, Chandigarh (Required)
Work Location: In person
Posted 2 weeks ago
5.0 - 10.0 years
12 Lacs
Hyderabad
Work from Office
Dear Candidate,
We are seeking a highly skilled and motivated Software Engineer with expertise in Azure AI, Cognitive Services, Machine Learning, and IoT. The ideal candidate will design, develop, and deploy intelligent applications leveraging Azure cloud technologies, AI-driven solutions, and IoT infrastructure to drive business innovation and efficiency.

Responsibilities:
Develop and implement AI-driven applications using Azure AI and Cognitive Services.
Design and deploy machine learning models to enhance automation and decision-making processes.
Integrate IoT solutions with cloud platforms to enable real-time data processing and analytics.
Collaborate with cross-functional teams to architect scalable, secure, and high-performance solutions.
Optimize and fine-tune AI models for accuracy, performance, and cost-effectiveness.
Ensure best practices in cloud security, data governance, and compliance.
Monitor, maintain, and troubleshoot AI and IoT solutions in production environments.
Stay updated with the latest advancements in AI, ML, and IoT technologies to drive innovation.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Strong experience with Azure AI, Cognitive Services, and Machine Learning.
Proficiency in IoT architecture, data ingestion, and processing using Azure IoT Hub, Edge, or related services.
Expertise in deploying and managing machine learning models in cloud environments.
Strong understanding of RESTful APIs, microservices, and cloud-native application development.
Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
Knowledge of cloud security principles and best practices.
Excellent problem-solving skills and the ability to work in an agile development environment.

Preferred Qualifications:
Certifications in Microsoft Azure AI, IoT, or related cloud technologies.
Experience with Natural Language Processing (NLP) and Computer Vision.
Familiarity with big data processing and analytics tools such as Azure Data.
Prior experience in deploying edge computing solutions.

Soft Skills:
Problem-Solving: Ability to analyze complex problems and develop effective solutions.
Communication Skills: Strong verbal and written communication skills to effectively collaborate with cross-functional teams.
Analytical Thinking: Ability to think critically and analytically to solve technical challenges.
Time Management: Capable of managing multiple tasks and deadlines in a fast-paced environment.
Adaptability: Ability to quickly learn and adapt to new technologies and methodologies.
Posted 2 weeks ago
0.0 - 3.0 years
7 - 9 Lacs
Chandigarh, Chandigarh
On-site
Job Title: Data Scientist
Location: Mohali
Employment Type: Full-Time

About the Role
We're looking for a skilled and driven Data Scientist to join our team in Mohali. This role focuses on building and optimizing machine learning models, from classical approaches to advanced neural networks, for real-world applications. You'll work with large datasets, uncover insights, and help build data products that scale.

Key Responsibilities
Design, develop, and evaluate supervised and unsupervised machine learning models.
Collaborate with data and product teams to deliver end-to-end ML solutions.
Analyze large datasets using Python and SQL to generate actionable insights.
Continuously monitor and improve model performance using real-world data.
Maintain thorough documentation of models, experiments, and processes.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
Strong Python programming skills (scikit-learn, TensorFlow/PyTorch, pandas, NumPy, etc.).
Proficiency in SQL for data exploration and manipulation.
Solid understanding of core ML techniques: classification, regression, clustering, and dimensionality reduction.
Experience with NLP/LLM libraries such as Hugging Face, spaCy, NLTK, or OpenAI APIs.
Strong foundation in statistics, probability, and hypothesis testing.

Preferred
Experience with vector databases, embeddings, and fine-tuning or prompt engineering for LLMs.
Familiarity with cloud services (AWS, GCP, or Azure) is a plus.

Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹900,000.00 per year
Benefits: Paid sick time, Provident Fund
Schedule: Day shift, Morning shift
Ability to commute/relocate: Chandigarh, Chandigarh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Scientist: 3 years (Required); Data science: 3 years (Required)
Language: English (Required)
Location: Chandigarh, Chandigarh (Required)
Work Location: In person
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Marvell
Marvell's semiconductor solutions are the essential building blocks of the data infrastructure that connects our world. Across enterprise, cloud and AI, automotive, and carrier architectures, our innovative technology is enabling new possibilities. At Marvell, you can affect the arc of individual lives, lift the trajectory of entire industries, and fuel the transformative potential of tomorrow. For those looking to make their mark on purposeful and enduring innovation, above and beyond fleeting trends, Marvell is a place to thrive, learn, and lead.

Your Team, Your Impact
Marvell Technology is a global leader in the semiconductor industry, specializing in the design and development of high-performance semiconductor solutions that enable the seamless movement of data across various platforms. Marvell's innovative technology powers the world's leading products in storage, networking, and connectivity. We are seeking a motivated People Analytics Intern to join our team. In this role, you'll work with people data to solve organizational challenges by building solutions using AI/ML models, including large language models (LLMs). You will primarily focus on automating text analysis, building predictive models and identifying patterns to help improve business outcomes. Internship duration: 3 months.

What You Can Expect
Collaborate with the HR Leaders and People Analytics team to translate business challenges into technical solutions.
Develop and deploy AI/ML solutions to analyze and interpret workforce data, providing predictive and prescriptive insights.
Fine-tune pre-trained LLMs to align with organizational context and specific business objectives.
Build interactive dashboards for non-technical stakeholders to explore text-based insights.
Stay up to date on the latest developments in LLM architectures and NLP; document findings and present results to the team.

What We're Looking For
Currently pursuing a degree in Data Science, Artificial Intelligence, Computer Science or a related field.
Proficiency in Python, R, or similar languages for data analysis and model development.
Familiarity with NLP tools and libraries (e.g., NLTK, spaCy).
Experience with large language models (LLMs) such as GPT, LLaMA, or similar transformer-based architectures.
Understanding of natural language processing tasks, such as text classification, sentiment analysis, and named entity recognition.
Strong analytical, problem-solving and communication skills, with the ability to present complex technical information clearly.
Ability to work independently and collaboratively as part of a team.

Preferred Qualifications:
Experience working with HR data and an understanding of people metrics.
Knowledge of cloud platforms (e.g., AWS, Azure) for model deployment.
Familiarity with visualization tools (e.g., Power BI, Tableau).
Prior research experience or internships in AI/ML.

Additional Compensation and Benefit Elements
With competitive compensation and great benefits, you will enjoy our workstyle within an environment of shared collaboration, transparency, and inclusivity. We're dedicated to giving our people the tools and resources they need to succeed in doing work that matters, and to grow and develop with us. For additional information on what it's like to work at Marvell, visit our Careers page.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us:
OpZen is an innovative early-stage startup founded by a team of visionary entrepreneurs with a stellar track record of building successful ventures such as Mitchell Madison, Opera Solutions, and Zenon. Our mission is to revolutionize the finance industry through the creation of groundbreaking AI-driven products and the provision of elite consulting services. We are committed to harnessing the power of advanced technology to deliver transformative solutions that drive unparalleled efficiency, foster innovation, and spur growth for our clients. Join us on our exciting journey to redefine the future of finance and leave an indelible mark on the industry.

Role: Lead/Manager

Overview:
We are seeking a visionary and dynamic individual to lead our AI initiatives and data-driven strategies. This role is crucial in shaping the future of our company by leveraging advanced technologies to drive innovation and growth. The ideal candidate will possess a deep understanding of AI, machine learning, and data analytics, along with a proven track record in leadership and strategic execution.

Key Responsibilities:
Self-Driven Initiative: Take ownership of projects and drive them to successful completion with minimal supervision, demonstrating a proactive and entrepreneurial mindset.
Stakeholder Communication: Present insights, findings, and strategic recommendations to senior management and key stakeholders, fostering a data-driven decision-making culture.
Executive Collaboration: Report directly to the founders and collaborate with other senior leaders to shape the company's direction and achieve our ambitious goals.
Innovation & Problem-Solving: Foster a culture of innovative thinking and creative problem-solving to tackle complex challenges and drive continuous improvement.
AI Research & Development: Oversee AI research and development initiatives, ensuring the integration of cutting-edge technologies and methodologies.
Data Management: Ensure effective data collection, management, and analysis to support AI-driven decision-making and product development.

Required Skills and Qualifications:
Bachelor's degree from a Tier 1 institution or an MBA from a recognized institution.
Proven experience in a managerial role, preferably in a startup environment.
Strong leadership and team management skills.
Excellent strategic thinking and problem-solving abilities.
Exceptional communication and interpersonal skills.
Ability to thrive in a fast-paced, dynamic environment.
Entrepreneurial mindset with a passion for innovation and growth.
Extensive experience with AI technologies, machine learning, and data analytics.
Proficiency in programming languages such as Python, R, or similar.
Familiarity with data visualization tools like Tableau, Power BI, or similar.
Strong understanding of data governance, privacy, and security best practices.

Technical Skills:
Machine Learning Frameworks: Expertise in frameworks such as TensorFlow, PyTorch, or Scikit-learn.
Data Processing: Proficiency in using tools like Apache Kafka, Apache Flink, or Apache Beam for real-time data processing.
Database Management: Experience with SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, or Cassandra.
Big Data Technologies: Hands-on experience with Hadoop, Spark, Hive, or similar big data technologies.
Cloud Computing: Strong knowledge of cloud services and infrastructure, including AWS (S3, EC2, SageMaker), Google Cloud (BigQuery, Dataflow), or Azure (Data Lake, Machine Learning).
DevOps and MLOps: Familiarity with CI/CD pipelines, containerization (Docker, Kubernetes), and orchestration tools for deploying and managing machine learning models.
Data Visualization: Advanced skills in data visualization tools such as Tableau, Power BI, or D3.js to create insightful and interactive dashboards.
Natural Language Processing (NLP): Experience with NLP techniques and tools like NLTK, spaCy, or BERT for text analysis and processing.
Large Language Models (LLMs): Proficiency in working with LLMs such as GPT-3, GPT-4, or similar for natural language understanding and generation tasks.
Computer Vision: Knowledge of computer vision technologies and libraries such as OpenCV, YOLO, or TensorFlow Object Detection API.

Preferred Experience:
Proven Track Record: Demonstrated success in scaling businesses or leading teams through significant growth phases, showcasing your ability to drive impactful results.
AI Expertise: Deep familiarity with the latest AI tools and technologies, including Generative AI applications, with a passion for staying at the forefront of technological advancements.
Startup Savvy: Hands-on experience in early-stage startups, with a proven ability to navigate the unique challenges and seize the opportunities that come with building a company from the ground up.
Finance Industry Insight: Extensive experience in the finance industry, with a comprehensive understanding of its dynamics, challenges, and opportunities, enabling you to drive innovation and deliver exceptional value to clients.

Why Join Us:
Opportunity to work closely with experienced founders and learn from their entrepreneurial journey.
Make a significant impact on a growing company and shape its future.
Collaborative and innovative work environment.
Lucrative compensation package including competitive salary and equity options.
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Please mention subject line “Machine Learning Engineer - ESLNK59” while applying to hr@evoortsolutions.com Job Title: Machine Learning Engineer Location: Remote | Full-Time Experience: 4+ years Job Summary We are seeking a highly skilled and self-motivated Machine Learning Engineer / Senior Machine Learning Engineer to join our fast-growing AI/ML startup. You will be responsible for designing and deploying intelligent systems and advanced algorithms tailored to real-world business problems across diverse industries. This role demands a creative thinker with a strong mathematical foundation, hands-on experience in machine learning and deep learning, and the ability to work independently in a dynamic, agile environment. Key Responsibilities Design and develop machine learning and deep learning algorithms in collaboration with cross-functional teams, including data scientists and business stakeholders. Translate complex client problems into mathematical models and identify the most suitable AI/ML approach. Build data pipelines and automated classification systems using advanced ML/AI models. Conduct data mining and apply supervised/unsupervised learning to extract meaningful insights. Perform Exploratory Data Analysis (EDA), hypothesis generation, and pattern recognition from structured and unstructured datasets. Develop and implement Natural Language Processing (NLP) techniques for sentiment analysis, text classification, entity recognition, etc. Extend and customize ML libraries/frameworks like PyTorch, TensorFlow, and Scikit-learn. Visualize and communicate analytical findings using tools such as Tableau, Matplotlib, ggplot, etc. Develop and integrate APIs to deploy ML solutions on cloud-based platforms (AWS, Azure, GCP). Provide technical documentation and support for product development, business proposals, and client presentations. Stay updated with the latest trends in AI/ML and contribute to innovation-driven projects. Required Skills & Qualifications Education: B.Tech/BE or M.Tech/MS in Computer Science, Computer Engineering, or related field. Solid understanding of data structures, algorithms, probability, and statistical methods. Proficiency in Python, R, or Java for building ML models. Hands-on experience with ML/DL frameworks such as PyTorch, Keras, TensorFlow, and libraries like Scikit-learn, SpaCy, NLTK, etc. Experience with cloud services (PaaS/SaaS), RESTful APIs, and microservices architecture. Strong grasp of NLP, predictive analytics, and deep learning algorithms. Familiarity with big data technologies like Hadoop, Spark, Hive, Kafka, and NoSQL databases is a plus. Expertise in building and deploying scalable AI/ML models in production environments. Ability to work independently in an agile team setup and handle multiple priorities simultaneously. Exceptional analytical, problem-solving, and communication skills. Strong portfolio or examples of applied ML use cases in real-world applications. Why Join Us? Opportunity to work at the forefront of AI innovation and solve real-world challenges. Be part of a lean, fast-paced, and high-impact team driving AI solutions across industries. Flexible remote working culture with autonomy and ownership. Competitive compensation, growth opportunities, and access to cutting-edge technology. Embrace our culture of Learning, Engaging, Achieving, and Pioneering (LEAP) in every project you touch.
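As a rough illustration of the NLP work described above (sentiment analysis and text classification), the sketch below trains a TF-IDF plus logistic regression pipeline with scikit-learn. The toy texts and labels are invented for the example; a real system would use a proper labelled corpus and evaluation.

```python
# Minimal sentiment classification sketch: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "The onboarding was smooth and support was helpful",
    "Terrible experience, the app keeps crashing",
    "Great value for money, would recommend",
    "Refund took weeks and nobody responded",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy data)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
clf.fit(texts, labels)

# With so little data this is only a smoke test, but the API shape is the same
# for a real corpus.
print(clf.predict(["support was quick and friendly"]))
```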
Posted 3 weeks ago
4.0 years
0 Lacs
Madurai, Tamil Nadu, India
Remote
Experience : 4.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Forbes Advisor) What do you need for this opportunity? Must have skills required: TensorFlow, PyTorch, RAG, LangChain
Forbes Advisor is looking for: Location - Remote (for candidates from Chennai or Mumbai it's hybrid) Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. The Data Extraction Team is a brand-new team that plays a crucial role in our organization by designing, implementing, and overseeing advanced web scraping frameworks. Their core function involves creating and refining tools and methodologies to efficiently gather precise and meaningful data from a diverse range of digital platforms. Additionally, this team is tasked with constructing robust data pipelines and implementing Extract, Transform, Load (ETL) processes. These processes are essential for seamlessly transferring the harvested data into our data storage systems, ensuring its ready availability for analysis and utilization. A typical day in the life of a Data Research Engineer will involve coming up with ideas regarding how the company/team can best harness the power of AI/LLM, and use it not only to simplify operations within the team, but also to streamline the work of the research team in gathering/retrieving large sets of data. The role is that of a leader who sets a vision for the future of AI/LLM use within the team and the company. They think outside the box and are proactive in engaging with new technologies and developing new ideas for the team to move forward in the AI/LLM field. The candidate should also at least be willing to acquire some basic skills in scraping and data pipelining.
Responsibilities: Develop methods to leverage the potential of LLM and AI within the team. Be proactive at finding new solutions to engage the team with AI/LLM and streamline processes in the team. Be a visionary with AI/LLM tools and predict how future technologies could be harnessed early on, so that when these technologies arrive, the team is already ahead of the game in how to use them. Assist in acquiring and integrating data from various sources, including web crawling and API integration.
Stay updated with emerging technologies and industry trends. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Contribute to cross-functional teams in understanding data requirements. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Leverage online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, while considering their capabilities and limitations.
Skills And Experience: Bachelor's degree in Computer Science, Data Science, or a related field. Higher qualifications are a plus. Think proactively and creatively regarding the next AI/LLM technologies and how to use them to the team’s and company’s benefit. “Think outside the box” mentality. Experience prompting LLMs in a streamlined way, taking into account how the LLM can potentially “hallucinate” and return wrong information. Experience building agentic AI platforms with modular capabilities and autonomous task execution (CrewAI, LangChain, etc.). Proficient in implementing Retrieval-Augmented Generation (RAG) pipelines for dynamic knowledge integration (ChromaDB, Pinecone, etc.). Experience managing a team of AI/LLM experts is a plus: this includes setting up goals and objectives for the team and fine-tuning complex models. Strong proficiency in Python programming. Proficiency in SQL and data querying is a plus. Familiarity with web crawling techniques and API integration is a plus but not a must. Experience in AI/ML engineering and data extraction. Experience with LLMs and NLP frameworks (spaCy, NLTK, Hugging Face, etc.). Strong understanding of machine learning frameworks (TensorFlow, PyTorch). Design and build AI models using LLMs. Integrate LLM solutions with existing systems via APIs. Collaborate with the team to implement and optimize AI solutions. Monitor and improve model performance and accuracy. Familiarity with Agile development methodologies is a plus. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Ability to work collaboratively in a team environment. Good and effective communication skills. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Comfortable with autonomy and ability to work independently.
Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health and well-being Monthly Office Commutation Reimbursement Program Paid paternity and maternity leaves
How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
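To make the RAG requirement above more concrete, here is a minimal sketch of a retrieval step with ChromaDB, one of the vector stores the listing names. The documents are invented, and call_llm() is a hypothetical placeholder for whatever LLM client (LangChain, a provider SDK, etc.) the team actually uses.

```python
# Minimal RAG-style retrieval sketch with ChromaDB (illustrative data only).
import chromadb

client = chromadb.Client()
collection = client.create_collection("research_notes")

# Index a few documents; Chroma embeds them with its default embedding function.
collection.add(
    documents=[
        "Q3 credit card APRs rose by roughly 40 basis points.",
        "Travel insurance claims fell 12% year over year.",
        "Savings account bonuses are concentrated among online banks.",
    ],
    ids=["doc1", "doc2", "doc3"],
)

question = "What happened to credit card APRs last quarter?"
hits = collection.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

prompt = (
    "Answer using only the context below. If the answer is not in the context, "
    "say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# answer = call_llm(prompt)  # hypothetical LLM call; grounding the prompt in
# retrieved context is one way to reduce the hallucination risk noted above.
print(prompt)
```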
Posted 3 weeks ago
4.0 years
3 - 10 Lacs
Mohali
On-site
Job Description: Should have 4+ years of hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics, and image analytics. Should have hands-on experience in leading a team of data scientists; works closely with the client’s technical team to plan, develop, and execute on client requirements, providing technical expertise and project leadership. Leads efforts to foster innovative ideas for developing high-impact solutions. Evaluates and leads a broad range of forward-looking analytics initiatives, tracks emerging data science trends, and drives knowledge sharing. Engages key stakeholders to source, mine, and validate data and findings and to confirm business logic and assumptions in order to draw conclusions. Helps design and develop advanced analytic solutions across functional areas as per requirements/opportunities.
Technical Role and Responsibilities: Demonstrated strong capability in statistical/mathematical modelling, machine learning, or artificial intelligence. Demonstrated skills in programming for implementation and deployment of algorithms, preferably in statistical/ML-oriented languages such as Python. Sound experience with traditional as well as modern statistical techniques, including regression, support vector machines, regularization, boosting, random forests, and other ensemble methods. Visualization tool experience, preferably with Tableau or Power BI. Sound knowledge of ETL practices, preferably Spark on Databricks, and cloud big data technologies such as AWS, Google, Microsoft, or Cloudera. Communicate complex quantitative analysis as lucid, precise, and actionable insights. Develop new practices and methodologies using statistical methods, machine learning, and predictive models under mentorship. Carry out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state-of-the-art tools and cutting-edge technologies for big data and beyond. Bachelor's/Master's in Statistics/Machine Learning/Data Science/Analytics preferred. Should be a data science professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure. Should be strong in GCP, TensorFlow, NumPy, Pandas, Python, AutoML, BigQuery, machine learning, artificial intelligence, and deep learning.
Preferred Tech Skills: Python, Computer Vision, Machine Learning, RNN, Data Visualization, Natural Language Processing, Voice Modulation, Speech to Text, SpaCy, LSTM, Object Detection, Scikit-learn, NumPy, NLTK, Matplotlib, Cufflinks, Seaborn, Image Processing, Neural Networks, YOLO, DarkFlow, DarkNet, PyTorch, CNN, TensorFlow, Keras, U-Net, Image Segmentation, ModeNet, OCR, OpenCV, Pandas, Scrapy, BeautifulSoup, LabelImg, Git. Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Statistics. Programming Languages: Python. Libraries & Software Packages: TensorFlow, Keras, OpenCV, Pillow, Scikit-learn, Flask, NumPy, Pandas, Matplotlib, Docker. Cloud Services: Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & ML APIs.
Job Types: Full-time, Permanent, Fresher
Pay: ₹30,000.00 - ₹90,000.00 per month
Education: Bachelor's (Preferred)
Experience: AI/Machine learning: 4 years (Preferred)
Work Location: In person
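As a small illustration of the image analytics stack listed above (TensorFlow/Keras, CNNs), here is a minimal sketch that builds and trains a tiny convolutional classifier on randomly generated images. The data and architecture are placeholders chosen only to show the workflow.

```python
# Tiny CNN sketch in TensorFlow/Keras on synthetic images (illustration only).
import numpy as np
import tensorflow as tf

# Fake dataset: 100 RGB images of 64x64 pixels, two classes.
x = np.random.rand(100, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=16, verbose=0)

print(model.predict(x[:1], verbose=0))  # predicted probability for one image
```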
Posted 3 weeks ago
4.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Scientists - AI/ML - GEN AI - Across India | EXP: 4-10 years. Data scientists with a total of around 4-10 years of experience, of which 4-10 years should be relevant data science, analytics, and AI/ML experience. Skills: Python; data science; AI/ML; GEN AI.
Primary Skills: - Excellent understanding and hands-on experience of data science and machine learning techniques & algorithms for supervised & unsupervised problems, NLP, computer vision, and GEN AI. Good applied statistics skills, such as distributions, statistical inference & testing, etc. - Excellent understanding and hands-on experience in building deep-learning models for text & image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoders and decoders, etc.). - Proficient in coding in common data science languages & tools such as R and Python. - Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, statsmodels, Scikit-learn, SciPy, NLTK, SpaCy, OpenCV, etc. - Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc. - Exposure to or knowledge of cloud (Azure/AWS). - Experience with deploying models in production.
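To ground the deep-learning expectations above, here is a minimal PyTorch sketch: a small feed-forward network trained on synthetic tabular data for a binary target. It is a workflow illustration only; real text or image models (CNNs, LSTMs, transfer learning) would replace both the data and the architecture.

```python
# Minimal supervised deep-learning sketch in PyTorch (synthetic data).
import torch
import torch.nn as nn

# Synthetic dataset: 256 samples, 10 features, binary target.
X = torch.randn(256, 10)
y = (X.sum(dim=1, keepdim=True) > 0).float()

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),  # raw logits; sigmoid is applied inside the loss below
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    preds = (torch.sigmoid(model(X)) > 0.5).float()
    print("train accuracy:", (preds == y).float().mean().item())
```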
Posted 3 weeks ago