4556 Numpy Jobs - Page 26

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

2.0 - 5.0 years

14 - 17 Lacs

Hyderabad

Work from Office

As a Data Engineer at IBM, you'll play a vital role in application design and development, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize impact and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Python: strong proficiency in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.
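
For orientation, here is a minimal sketch (not from the listing) of the kind of PySpark ETL step such a role involves; the S3 paths and column names are illustrative assumptions.

```python
# Minimal PySpark ETL sketch: read raw events, clean them, aggregate, write partitioned output.
# The S3 paths and column names (event_id, event_time, user_id, amount) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3a://example-bucket/raw/events/")  # assumed raw landing zone
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("event_time"))
)
daily = clean.groupBy("event_date", "user_id").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("event_count"),
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_user_totals/"
)
```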

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Senior Data Scientist+Instructor to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics. Key Responsibilities Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques. Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering. Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association rules mining, Clustering, and Anomaly Detection. Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software. Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs). Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle). Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge. Contribute to the academic and research environment of the department and the university. Required Qualifications A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field. Demonstrable 3-10 years of expertise in the core concepts of data engineering and machine learning as outlined in the syllabus. Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn). Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging. Excellent communication and interpersonal skills. Preferred Qualifications A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals. Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role. Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch). Experience in mentoring student teams for data science competitions or hackathons. Perks & Benefits Competitive salary packages aligned with industry standards. Access to state-of-the-art labs and classroom facilities. To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education!
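
As a flavour of the hands-on lab work such a course might include, here is a short scikit-learn clustering sketch; the synthetic dataset and the range of k are illustrative choices, not part of the listing.

```python
# K-means on a synthetic dataset, printing inertia for a range of k (a simple "elbow" check).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=1)

for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X)
    print(f"k={k}  inertia={km.inertia_:.1f}")
```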

Posted 1 week ago

Apply

5.0 - 7.0 years

20 - 30 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Senior Data Scientist with experience in pricing optimization, pricing elasticity, and AWS SageMaker. The ideal candidate will have a strong foundation in Statistics and Machine Learning, with a particular focus on Bayesian modeling. As part of our Data Science team, you will work closely with clients to develop advanced pricing strategies using state-of-the-art tools and techniques, including AWS SageMaker, to optimize business outcomes. Key Responsibilities: Lead and contribute to the development of pricing optimization models, leveraging statistical and machine learning techniques to inform strategic decisions. Analyze pricing elasticity to predict consumer response to changes in price, helping clients maximize revenue and market share. Implement and deploy machine learning models using AWS SageMaker for scalable and efficient performance in a cloud environment. Utilize Bayesian modeling to support decision-making processes, providing insights into uncertainty and model predictions. Collaborate with cross-functional teams to integrate data-driven insights into business processes. Communicate complex results and findings in a clear and concise manner to both technical and non-technical stakeholders. Continuously explore and experiment with new modeling approaches and tools to improve the accuracy and efficiency of pricing solutions. Qualifications: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Economics, or a related field. Advanced degrees preferred. 5+ years of hands-on experience in data science, with a focus on pricing optimization and elasticity modeling. Expertise in Bayesian modeling and machine learning techniques. Proven experience working with AWS SageMaker for model development, deployment, and monitoring. Familiarity with the AWS Certified Data Analytics - Specialty certification is a plus. Strong programming skills in Python (preferred) or R. Experience with cloud platforms (AWS preferred), including SageMaker. Proficiency in statistical analysis tools and libraries (e.g., NumPy, Pandas, PyMC3, or similar). Excellent problem-solving and analytical thinking skills. Ability to work in a fast-paced environment and manage multiple projects. Strong communication skills with the ability to explain complex concepts to non-technical audiences. Preferred Qualifications: Experience with A/B testing, econometrics, or other statistical experimentation methods. Familiarity with other cloud computing platforms (e.g., Azure, GCP). Experience working in cross-functional teams and client-facing roles. Additional Information: Opportunity to work with cutting-edge technology in a dynamic environment. Exposure to a diverse range of industries and projects. Collaborative and inclusive work culture with opportunities for growth and professional development.
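
Since the listing names PyMC3, here is a minimal sketch of a Bayesian log-log price-elasticity model on synthetic data; the priors and the data are illustrative assumptions only, not the employer's methodology.

```python
# Bayesian log-log demand model: the slope on log(price) is the price elasticity.
# Data here is synthetic; priors are illustrative assumptions.
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(0)
log_price = np.log(rng.uniform(5, 50, size=200))
log_qty = 3.0 - 1.4 * log_price + rng.normal(0, 0.2, size=200)  # true elasticity = -1.4

with pm.Model():
    intercept = pm.Normal("intercept", mu=0, sigma=5)
    elasticity = pm.Normal("elasticity", mu=-1, sigma=1)  # prior centred near unit-elastic demand
    sigma = pm.HalfNormal("sigma", sigma=1)
    pm.Normal("obs", mu=intercept + elasticity * log_price, sigma=sigma, observed=log_qty)
    trace = pm.sample(1000, tune=1000, return_inferencedata=True)

print(float(trace.posterior["elasticity"].mean()))
```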

Posted 1 week ago

Apply

11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking an experienced AI/ML Engineer to design and implement cutting-edge solutions in the field of Generative AI and Large Language Models (LLMs). This role involves leading the development of intelligent agents, prompt optimization, and scalable AI pipelines, while mentoring team members and driving innovation in applied AI. You will work at the intersection of machine learning, prompt engineering, and production-grade AI orchestration, contributing to the deployment of next-gen cognitive systems across telecom and enterprise environments. Key Responsibilities: AI/ML Development: • Design and develop production-ready AI/ML solutions using LLMs and Generative AI models. • Fine-tune and optimize foundation models for domain-specific use cases. • Architect and deploy Retrieval-Augmented Generation (RAG) pipelines and multi-agent systems. Agentic AI & Prompt Engineering: • Build intelligent AI agents using frameworks such as LangChain, CrewAI, AutoGen. • Engineer effective prompts for complex language tasks and continuous learning environments. • Orchestrate autonomous decision-making pipelines using agent-based design. Model Lifecycle & MLOps: • Implement best practices for responsible AI, model explainability, and fairness. • Own model deployment, monitoring, optimization, and cost control. • Work with MLOps tools for automation and CI/CD in model delivery pipelines. Data Engineering & Preparation: • Handle advanced data preprocessing, feature engineering, and quality checks. • Leverage tools like Pandas, Numpy, Polars for exploration and transformation. • Integrate with vector databases such as Pinecone, ChromaDB, Weaviate for semantic search applications. What You’ll Bring: • 7–11 years of experience as an AI/ML Engineer or Data Scientist with a proven track record in delivering ML solutions to production. • Expertise in: - ML/DL frameworks: TensorFlow, PyTorch, Keras, Sklearn - LLM ecosystems: LangChain, LlamaIndex, CrewAI - Foundation models, RAG pipelines, and agentic AI systems • Strong programming in Python (R is a plus) • Deep knowledge of: - ML/DL algorithms and probabilistic modeling - Statistics, feature engineering, and data wrangling - AI governance and ethical development practices • Hands-on experience in: - Prompt engineering and LLM fine-tuning - Vector databases and memory integration - Model performance tuning, compression, and deployment (ONNX, TorchScript, quantization) Why Join Us? • Impactful Work: Play a key role in building scalable AI solutions that power real-time telecom, messaging, and enterprise platforms. • Tremendous Growth Opportunities: Be part of a dynamic, fast-scaling AI team solving real-world problems. • Innovative Environment: Collaborate with passionate engineers and researchers in a culture that celebrates experimentation, learning, and innovation. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive workplace for all.
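
To illustrate the retrieval step of the RAG pipelines mentioned above, here is a minimal NumPy sketch of top-k retrieval by cosine similarity; the `embed` function is a stand-in for whichever embedding model the real stack would use, so the output here is not semantically meaningful.

```python
# Top-k retrieval by cosine similarity, the core of a RAG pipeline's retrieval step.
# `embed` is a placeholder for a real embedding model (e.g. one served behind an API).
import numpy as np

def embed(texts):
    # Placeholder embeddings so the sketch runs end to end; not semantically meaningful.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.normal(size=(len(texts), 384))

def top_k(query, docs, k=3):
    doc_vecs = embed(docs)
    q_vec = embed([query])[0]
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

docs = ["Plan pricing tiers", "Reset a SIM profile", "Escalate a billing dispute"]
print(top_k("How do I handle a billing complaint?", docs, k=2))
```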

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role – Senior Python Developer (Data Science, AI/ML) HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, Automation, Data & Analytics, Security and Cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precision and high-performance marketing campaigns across multiple channels like Social Media, AdTech Platforms, Mobile Applications, Websites, etc. The Unica+ Marketing Platform is a data- and AI-first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention. We are seeking a Senior Python Developer with strong Data Science and Machine Learning skills and experience to deliver AI-driven marketing campaigns. Responsibilities: 1. Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization. 2. Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis. 3. Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation. 4. SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases. 5. Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics, identify patterns, anomalies, and relationships. 6. Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA). 7. Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch. This includes understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems. 8. Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques. 9. Deployment & MLOps Concepts: Understanding of how to deploy machine learning models into production environments, including concepts of API creation, containerization (Docker), version control for models, and monitoring. Qualifications & Skills: 1. At least 8-10 years of Python development experience, with at least 4 years in data science and machine learning. 2. Experience with Customer Data Platforms (CDP) like TreasureData, Epsilon, Tealium, Adobe, Salesforce is advantageous. 3. Experience with AWS SageMaker is advantageous. 4. Experience with LangChain and RAG for Generative AI is advantageous. 5. Expertise in integration tools and frameworks like Postman, Swagger, and API Gateways. 6. Knowledge of REST, JSON, XML, SOAP is a must. 7. Ability to work well within an agile team environment and apply the related working methods. 8. Excellent communication & interpersonal skills. 9. A 4-year degree in Computer Science or IT is a must.
Travel: Approximately 30% travel required. Location: India (Pune preferred). Compensation: Base salary plus bonus.
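
As a small illustration of the model-evaluation skills listed under point 8 above, here is a hedged scikit-learn sketch that trains a classifier on synthetic data and reports precision, recall, F1, and ROC AUC; the model choice and data are illustrative only.

```python
# Train a classifier on synthetic, imbalanced data and report common evaluation metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]

print(f"precision={precision_score(y_test, pred):.3f}  "
      f"recall={recall_score(y_test, pred):.3f}  "
      f"f1={f1_score(y_test, pred):.3f}  "
      f"auc={roc_auc_score(y_test, proba):.3f}")
```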

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Key Responsibilities: Design and Implement RAG-Based Solutions: Lead the development of a robust RAG-based system that seamlessly integrates generative AI models with retrieval mechanisms, ensuring optimal accuracy, performance, and scalability. Agent-Based Solution Development: Build AI agents that can autonomously perform tasks using information retrieval, language models, and multi-turn interactions, using the AutoGen and LangGraph frameworks. Generative AI Integration: Leverage LLMs (e.g., GPT, Anthropic, Llama, DeepSeek, etc.) to generate high-quality, contextually accurate content in response to user queries. Prompt Engineering: Strong proficiency in prompt engineering to design, test, and optimize prompts for large language models, ensuring effective agent behavior and task completion. Data & Knowledge Integration: Work on integrating structured and unstructured data sources into the RAG framework to improve model outputs. Collaborate with Teams: Work closely with product, engineering, and data science teams to ensure seamless integration of AI capabilities into broader products. Security & Compliance: Ensure best practices are followed in data security, privacy, and compliance, particularly in AI-driven applications. Model Evaluation: Implement strategies for continuous evaluation, monitoring, and tuning of models to ensure high-quality outputs. Data Pipeline Design: Build robust data pipelines for preprocessing and integrating structured and unstructured data sources into the platform. Collaborate with Teams: Work closely with data engineers and software engineers to ensure seamless integration of machine learning models into production environments. Performance Optimization: Implement and experiment with various techniques such as vector search, semantic search, and knowledge distillation to enhance the platform's efficiency and accuracy. Data Quality: Ensure data quality and consistency throughout the data pipeline, implementing necessary quality checks and validation processes. Requirements: 4+ years of experience building GenAI solutions using Python, LangChain, and AutoGen technologies. Agentic AI: experience developing agents using the AutoGen and LangGraph frameworks. Prior experience in building GenAI solutions using effective text representation techniques and classification algorithms. Experience with ML: model building, semantic extraction techniques, data structures, and modelling. Experience with Supervised/Unsupervised Learning, Sentiment Analysis, and Statistical Analysis using NLP and Python libraries (NLTK, spaCy, NumPy, Pandas). Experience building REST APIs. Experience building applications with microservices & serverless architecture. Excellent communication skills. Good to have: Experience with Azure cloud services. Experience with Azure Foundry & Prompt flow.

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Requirements Role/Job Title: Data Scientist Function/Department: Data & Analytics Job Purpose The position is in the Fraud & Operational Analytics Team. This team is mainly responsible for creating Machine Learning/AI solutions for effective fraud prevention and for driving operational scalability to improve customer experience, reduce operational cost, mitigate risk, etc. The role will focus on building cutting-edge ML solutions. Roles & Responsibilities Analyze large amounts of data to derive business insights and create innovative solutions. Understand business nuances and associated fraud patterns. Develop advanced ML models for fraud identification. Innovate with a focus on better and newer approaches. Explore alternate data sources which can add value on top of traditional data sources. The role requires exhibiting a high level of expertise in data strategy and model training pipelines. They will drive insights in generating data-driven, actionable strategies. Support the business and risk teams with bespoke and strategic analysis. Key Skills Proficiency and experience in econometric, statistical, and machine learning techniques. Proficiency in Python, SQL, Pandas, NumPy, Sklearn, TensorFlow/PyTorch, etc. Strong understanding of statistical concepts and modelling techniques for regression, classification, and anomaly detection. Good understanding of evaluation metrics. Logical thought process and the ability to scope an open-ended problem into a data-driven solution. Education Qualification Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA) Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA) Experience Range: 2 to 5 years
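
One common unsupervised approach to the fraud-identification work described above is anomaly scoring; here is a minimal scikit-learn sketch on synthetic transaction features (the feature set and contamination rate are illustrative assumptions, not the team's actual models).

```python
# Unsupervised anomaly scoring with IsolationForest on synthetic transaction features.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
txns = pd.DataFrame({
    "amount": rng.lognormal(mean=6, sigma=1, size=5000),
    "txn_hour": rng.integers(0, 24, size=5000),
    "txns_last_24h": rng.poisson(3, size=5000),
})

iso = IsolationForest(contamination=0.01, random_state=7).fit(txns)
txns["anomaly_score"] = -iso.score_samples(txns)   # higher = more anomalous
suspicious = txns.nlargest(10, "anomaly_score")    # candidates for fraud review
print(suspicious.head())
```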

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Location: Gandhinagar, Gujarat (On-site) Experience Required: Minimum 2 Years Employment Type: Full-Time About Zignuts Technolab Zignuts Technolab Pvt. Ltd. is a leading digital product development company delivering robust and scalable solutions to global clients. With a strong focus on innovation, collaboration, and quality engineering, we build future-ready software across various industries including SaaS, AI, FinTech, and enterprise platforms. Role Overview We are seeking a Python Developer with at least 2 years of hands-on experience in Python and modern backend frameworks. You will be responsible for building scalable APIs, integrating with AI components, and contributing to backend systems for various web and AI-powered products. This is a growth-oriented opportunity suited for developers who want to expand their expertise across cloud platforms, AI integration, and high-performance architectures. Key Responsibilities Develop and maintain backend services using Python frameworks such as Django, Flask, and FastAPI. Build and consume RESTful APIs and ensure their performance, reliability, and scalability. Integrate with AI/ML models, including foundational models like GPT, Claude, Gemini, LLaMA, etc. Design and work with large-scale data processing pipelines using libraries like Pandas, NumPy, or Dask. Collaborate with frontend, QA, and DevOps teams for seamless end-to-end delivery. Write clean, modular, testable code following industry best practices. Troubleshoot, debug, and optimize backend services and queries. Deploy and manage applications on AWS, GCP, or Azure infrastructure (basic exposure expected). Maintain technical documentation and follow version control standards using Git. Required Skills and Qualifications Minimum 2 years of professional experience as a Python Developer. Good English communication skills (written and verbal). Strong problem-solving mindset and willingness to learn continuously. Proficient in Python programming, including OOP, data structures, and design patterns. Strong hands-on experience with at least one major Python framework: Django, Flask, or FastAPI. Familiarity with integrating or consuming AI/ML models (including GPT, Claude, LLaMA, etc.). Knowledge of data manipulation and analysis using Pandas, NumPy, etc. Familiar with API security, JWT authentication, rate limiting, and API testing. Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL). Understanding of frontend technologies (HTML, CSS, JavaScript) for API integration. Exposure to Linux environments and bash scripting. Knowledge of cloud services like AWS, GCP, or Azure (deployment, storage, etc.). Comfortable using Git for version control and managing codebases. Preferred (Nice-to-Have) Skills Experience integrating or consuming foundational AI models using OpenAI, Hugging Face, or similar APIs. Familiarity with Celery, RabbitMQ, or asynchronous tasks. Exposure to Docker. Contributions to open-source projects or technical blogging. Understanding of MLOps, model lifecycle, or cloud AI services. What We Offer Opportunity to work on innovative projects involving AI, data engineering, and cloud-native platforms. Learning-driven environment with access to technical mentorship and skill-building tracks. Flexible work structure and transparent team communication. How to Apply Ready to take the next step in your career? We'd love to hear from you. Apply Now! Feel free to share this post with anyone you think would be a great fit! 
Send your resume and any relevant GitHub/project links to: talent@zignuts.com
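
A minimal sketch, under assumed field names, of the kind of FastAPI endpoint plus pandas processing this role describes; the endpoint path and summary logic are illustrative, not part of the listing.

```python
# Minimal FastAPI + pandas sketch; the endpoint path and field names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
import pandas as pd

app = FastAPI()

class Record(BaseModel):
    category: str
    value: float

@app.post("/summarise")
def summarise(records: list[Record]) -> dict:
    # Aggregate the posted records: mean value per category.
    df = pd.DataFrame([r.dict() for r in records])
    return df.groupby("category")["value"].mean().to_dict()

# Run locally with: uvicorn main:app --reload
```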

Posted 1 week ago

Apply

2.0 years

1 - 4 Lacs

Chandigarh

On-site

Job Title: Python Tutor / Trainer Company: CBitss Technologies Location: Sec 34A Experience Required: Minimum 2 Years Joining: Immediate Note: Freshers are not eligible for this role. About Us: CBitss Technologies is a government-recognized training institute with over 20 years of excellence in professional and technical education. We specialize in delivering industry-relevant training programs and are committed to empowering students with real-world skills. Job Description: We are looking for a Python Tutor/Trainer with a passion for teaching and solid hands-on experience in Python programming. The ideal candidate should have at least 2 years of experience and be able to deliver engaging and practical training sessions to students from beginner to advanced levels. Key Responsibilities: Conduct classroom and online training sessions on Python programming Develop and update course materials, projects, and assessments Teach Python basics, OOPs, file handling, libraries (NumPy, Pandas, etc.), and frameworks (Flask/Django is a plus) Guide students on hands-on coding assignments and real-world projects Provide individual support and doubt-clearing sessions Stay updated with latest trends and technologies in Python and software development Requirements: Minimum 2 years of experience in Python programming and training Strong knowledge of core Python concepts and libraries Experience with project-based teaching and real-time problem-solving Excellent communication and presentation skills Immediate availability to join Preferred Skills: Knowledge of web frameworks like Flask or Django Familiarity with version control (Git) and development tools Prior experience in curriculum design or online training platforms is a plus Why Join Us? Work with a reputable, government-recognized institute Supportive and growth-oriented environment Opportunity to train future-ready professionals Competitive salary package #PythonTrainer #PythonTutor #PythonDeveloper #LearnPython #PythonProgramming #PythonJobs #ProgrammingTrainer #CBitssTechnologies #JoinCBitss #GovtRecognizedInstitute #ProfessionalTraining #TrainingInstitute Job Type: Full-time Pay: ₹15,339.25 - ₹40,597.06 per month Benefits: Cell phone reimbursement Commuter assistance Internet reimbursement Ability to commute/relocate: Chandigarh, Chandigarh: Reliably commute or planning to relocate before starting work (Required) Experience: Python: 2 years (Required) Location: Chandigarh, Chandigarh (Required) Work Location: In person

Posted 1 week ago

Apply

1.0 - 2.0 years

1 - 3 Lacs

Cochin

On-site

Job Summary We are looking for a dynamic and passionate C# Trainer who can deliver engaging and practical training sessions for students or professionals. The ideal candidate should have solid hands-on experience in C# and familiarity with Python programming to support introductory sessions. Key Responsibilities Deliver classroom and/or online training sessions in C# and basics of Python. Create and update training content, modules, and assessments. Provide hands-on demonstrations and project-based learning. Assess learners' performance and provide constructive feedback. Stay updated with the latest developments in C# and Python. Assist in curriculum development and suggest improvements. Support and mentor learners through projects or assignments. Key Requirements Bachelor’s degree in Computer Science, IT, or related field. 1–2 years of experience in C# programming and training/facilitation. Working knowledge of Python (loops, functions, file handling, etc.). Good understanding of OOP concepts, exception handling, collections in C#. Strong communication and presentation skills. Passion for teaching and the ability to engage learners. Preferred Skills Knowledge of Visual Studio, Git, or any source control system. Exposure to Python libraries like NumPy or Pandas is a plus. Previous experience in edtech or as a coding instructor is a bonus. Job Type: Full-time Pay: ₹15,000.00 - ₹25,000.00 per month Language: English (Preferred) Work Location: In person

Posted 1 week ago

Apply

2.0 years

8 - 10 Lacs

Hyderābād

On-site

FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations. Your Team's Impact The News Content team is responsible for ingesting, maintaining and delivering a variety of unstructured text-based content sets for use across all FactSet applications and workflows. These content sets need to be processed and delivered to our clients in real time. New feed integration involves working with Product Development to understand the requirements of the feeds as well as working with the vendor to understand the technical specification for ingesting the feeds. Work on existing feeds includes bug fixes, feature enhancements, infrastructure improvements, maintaining data quality, and ensuring feeds are operating properly throughout the day. You will work on both internal and external client-facing applications that shape the user's experience and drive FactSet's growth through technological innovations. What You'll Do FactSet is seeking a Python Developer with AWS development experience to join our engineering team responsible for making our product more scalable and reliable. Deliver high-quality, reusable, and maintainable code, and perform unit/integration testing of assigned tasks within the estimated timelines. Build robust infrastructure in AWS appropriate to the respective product component. Be proactive in providing technical solutions, with effective communication and collaboration skills. Perform code reviews and ensure best practices are followed. Work in an agile team environment and collaborate with internal teams to ensure smooth product delivery. Take ownership of the end-to-end product as an individual contributor. Ensure high product stability. Share knowledge continuously, both within and outside the team. What We're Looking For Bachelor's or master's degree in Computer Science. 2-3 years of total experience, with a minimum of 2 years of experience in Python development. Minimum of 2 years of working experience in Linux/Unix environments. Strong analytical and problem-solving skills. Strong experience and proficiency with Python, Pandas, and NumPy. Experience with AWS components. Experience with GitHub-based development processes. Excellent written and verbal communication skills. Organized, self-directed, and resourceful, with the ability to appropriately prioritize work in a fast-paced environment. Good to have skills: Familiarity with Agile software development (Scrum is a plus). Experience in front-end development. Experience in database development. Experience in C++ development. Exposure to design patterns. What's In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being.
This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications. Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
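
As a hedged sketch of one feed-ingestion step such a role might involve, the snippet below pulls a vendor file from S3 and normalises it with pandas; the bucket, key, and column names are assumptions for illustration, not FactSet specifics.

```python
# Pull a vendor feed file from S3 and normalise it with pandas.
# Bucket, key, and column names (story_id, published_at, headline) are hypothetical.
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-news-feeds", Key="vendor_x/2024-01-01.csv")
feed = pd.read_csv(obj["Body"])

feed["published_at"] = pd.to_datetime(feed["published_at"], utc=True)
feed = feed.dropna(subset=["headline"]).drop_duplicates(subset=["story_id"])
print(feed[["story_id", "published_at", "headline"]].head())
```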

Posted 1 week ago

Apply

1.0 years

5 - 8 Lacs

Bhubaneshwar

On-site

Company Introduction iServeU is a modern banking infrastructure provider in the APAC region, empowering financial enterprises with embedded fintech solutions for their customers. iServeU is one of the few certified partners of the National Payments Corporation of India (NPCI) and VISA for various products. iServeU also provides a cloud-native, microservices-enabled, distributed platform with over 5000 possible product configurations and a low-code/no-code interface to banks, NBFCs, fintechs, and other regulated entities. - We process around 2500 transactions per second by leveraging distributed and auto-scaling technology like Kubernetes (K8s). - Our core platform comprises 1200+ microservices. - Our customer list ranges from fintech start-ups and top-tier private banks to PSU banks. We operate in five countries and help customers constantly change the way financial institutions operate and innovate. - Our solutions currently empower over 20 banks and 250+ enterprises across India and abroad. - Our platform seamlessly manages the entire transaction lifecycle, including withdrawals, deposits, transfers, payments, and lending, through various channels like digital, branch, and agents. Our team of 500+ employees, with over 80% in technology roles, is spread across offices in Bhubaneswar, Bangalore and Delhi. We have raised $8 million in funding to support our growth and innovation. For more details visit: www.iserveu.in Job Position: Research Assistant Location: Bhubaneswar Reports To: CTO Job Summary We are seeking a highly motivated and detail-oriented Research Assistant to support our ongoing research projects. The successful candidate will play a crucial role in various stages of the research lifecycle, from data collection and analysis to literature review and administrative support. This position is ideal for an organized individual with strong analytical skills and a passion for [specific research area, if applicable, e.g., artificial intelligence, data science, embedded systems, cybersecurity, cryptography, blockchain]. Key Responsibilities • Literature Review & Information Gathering: o Conduct comprehensive literature searches using various databases and resources. o Summarize, synthesize, and critically evaluate relevant research papers, articles, and reports. o Maintain an organized database of research materials and citations. • Data Collection & Management: o Assist in the design and development of data collection instruments (e.g., surveys, experimental protocols, data pipelines). o Collect, organize, and manage research data, ensuring accuracy, completeness, and adherence to ethical guidelines. o Maintain meticulous records of research activities and data sources. o [If applicable: Develop scripts for data extraction, perform data cleaning, manage large datasets.] • Data Analysis & Interpretation: o Assist with preliminary data analysis using statistical software (e.g., R, Python, MATLAB) or specialized computational tools. o Generate tables, charts, and graphs to visualize data findings. o Contribute to the interpretation of results and identification of key insights. o Summarize research papers for easy consumption. • Programming & Prototyping: o Assist in developing and testing software prototypes, algorithms, or models relevant to the research. o Write clean, well-documented, and efficient code. o Debug and troubleshoot technical issues. • Report Writing & Dissemination: o Assist in drafting, editing, and formatting research reports, presentations, and manuscripts.
o Prepare summaries of findings for internal and external communication. o Ensure all written materials adhere to academic/professional standards and guidelines. • Project Coordination & Administrative Support: o Assist with the day-to-day coordination of research projects, including scheduling meetings, managing timelines, and tracking progress. o Handle general administrative tasks as needed to support the research team. o Ensure compliance with all relevant research protocols, ethical guidelines, and institutional policies. Requirements Required Qualifications • Bachelor's degree in Computer Science, Software Engineering, Data Science, or a closely related technical field. • 1-2 years of experience in a research setting (can include academic projects, internships, or previous research assistant roles). • Less than 28 years of age, candidates enrolled in PhD programs can apply. • Strong analytical and critical thinking skills. • Excellent written and verbal communication skills. • Proficiency in Microsoft Office Suite (Word, Excel, PowerPoint). • High level of attention to detail and accuracy in data handling and record-keeping. • Ability to work independently and collaboratively within a team environment. • Strong organizational and time management skills, with the ability to manage multiple tasks simultaneously. • Demonstrated ability to learn new software and research methodologies quickly. Preferred Qualifications • Experience with programming languages commonly used in research (e.g., Python, Java, C++). • Familiarity with data analysis libraries (e.g., Pandas, NumPy, SciPy) or machine learning frameworks (e.g., TensorFlow, PyTorch). • Experience with version control systems (e.g., Git). • Familiarity with research ethics and human/animal subject protection protocols (IRB/IACUC). • Experience in computational modeling, algorithm design, data mining, machine learning, or software development for research. • Prior experience with cloud computing platforms (e.g., AWS, GCP, Azure) for research purposes or specialized research software/tools. • A strong interest in [specific sub-field of research, e.g., Artificial Intelligence, Machine Learning, Cybersecurity, Human-Computer Interaction, Data Science, Computer Vision, Natural Language Processing].

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

On-site

Job Title: Data Science Trainer Company: HERE AND NOW Artificial Intelligence Research Institute Location: HERE AND NOW AI, Salem About Us At the HERE AND NOW Artificial Intelligence Research Institute, we are at the forefront of AI innovation. Our mission is to empower the next generation of AI professionals through comprehensive education, innovative AI applications, and groundbreaking research. We are looking for a passionate and experienced Data Science Trainer to join our team and help us achieve our goals. Job Description Title: Data Science Trainer Location: Salem, Tamil Nadu Job Type: Part-Time / Contract Date of Training: 28.07.2025 Experience: Minimum 2 years in Data Science, Machine Learning, or AI domain Industry: IT Training / EdTech / Technical Education About the Role We are hiring a dedicated and skilled Data Science Trainer in Salem to deliver hands-on training in Python for Data Science, Machine Learning, Big Data Analytics, and Deep Learning. If you're passionate about teaching and mentoring aspiring data scientists, this is your chance to contribute to the AI revolution. Responsibilities Deliver interactive classroom or virtual sessions covering: Data Science: Python, Pandas, NumPy, Matplotlib, Statistics, Machine Learning (Supervised & Unsupervised), Model Evaluation, Real-world Projects. Big Data Analytics: Hadoop Ecosystem (HDFS, MapReduce, Hive, Pig, Sqoop, Flume), Apache Spark, Spark SQL, Spark MLlib, NoSQL basics (MongoDB/Cassandra). Deep Learning: Neural Networks, CNN, RNN, LSTM, GANs, using TensorFlow, Keras, and Google Colab. Design and customize curriculum for beginner to intermediate learners. Facilitate real-time mini-projects, assignments, and model-building activities. Evaluate student progress and provide mentorship. Collaborate with academic and placement teams to ensure outcomes align with industry needs. Required Skills Strong understanding of Python and ML libraries (Pandas, Scikit-learn, Matplotlib, Seaborn). Proficiency in Big Data tools: Hadoop, Spark, Hive. Familiarity with Deep Learning frameworks: TensorFlow, Keras. Understanding of statistical concepts and machine learning algorithms. Excellent communication, presentation, and mentoring skills. Willingness to conduct sessions at our Salem center. Preferred Qualifications B.E./B.Tech/MCA/M.Sc in Computer Science, Data Science, or related fields. Training experience in Data Science, Big Data, or AI. Certification in Data Science/Machine Learning/Big Data preferred. Exposure to cloud tools (AWS/GCP) and BI tools (Power BI/Tableau) is a plus. Benefits Competitive salary + performance-based incentives Opportunity to be part of a growing AI research and training community Certificate of contribution for each training batch Job Types: Part-time, Fresher, Contractual / Temporary, Freelance Benefits: Flexible schedule Food provided Paid sick time Schedule: Day shift Supplemental Pay: Commission pay Performance bonus Shift allowance Experience: total work: 2 years (Required) Work Location: In person

Posted 1 week ago

Apply

5.0 years

8 - 15 Lacs

Chennai

On-site

eGrove Systems is looking for Senior Python Developer to join its team of experts. Skill : Senior Python Developer Exp : 5+Yrs NP : Immediate to 15Days Location : Chennai/Madurai Interested candidate can send your resume to annie@egrovesys.com Required Skills: - 5+ years of Strong experience in Python & 2 years in Django Web framework. Experience or Knowledge in implementing various Design Patterns. Good Understanding of MVC framework & Object-Oriented Programming. Experience in PGSQL / MySQL and MongoDB. Good knowledge in different frameworks, packages & libraries Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy etc., Experience developing in a Linux environment, GIT & Agile methodology. Good to have knowledge in any one of the JavaScript frameworks: jQuery, Angular, ReactJS. Good to have experience in implementing charts, graphs using various libraries. Good to have experience in Multi-Threading, REST API management. About Company eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers. Job Type: Full-time Pay: ₹800,000.00 - ₹1,500,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift

Posted 1 week ago

Apply

0 years

1 - 6 Lacs

Chennai

Remote

We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs to have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions. Key Responsibilities: Design, develop, and deploy machine learning models to solve complex business problems. Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring. Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems. Conduct research and stay up-to-date with the latest ML/AI advancements, applying them where appropriate. Optimize models for performance, scalability, and robustness. Document methodologies, experiments, and findings clearly for both technical and non-technical audiences. Mentor junior ML engineers or data scientists as needed. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or related field (Ph.D. is a plus). Minimum of 5 hands-on ML/AI projects, preferably in production or with real-world datasets. Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, XGBoost. Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc. Experience with model deployment using APIs, containers (Docker), cloud platforms (AWS/GCP/Azure). Strong data manipulation and analysis skills using Pandas, NumPy, and SQL. Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing. Preferred Skills: Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.). Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks. Experience working in Fintech environments would be a plus. Strong problem-solving mindset with excellent communication skills. Experience in working with vector database. Understanding of RAG vs Fine-tuning vs Prompt Engineering. Why Join Us: Work on impactful, real-world AI challenges. Collaborate with a passionate and innovative team. Opportunities for career advancement and learning. Flexible work environment (remote/hybrid options). Competitive compensation and benefits. Job Category: Machine Learning Job Type: Full Time Job Location: Chennai
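
To illustrate the deployment-oriented part of the ML lifecycle mentioned above, here is a minimal sketch of packaging a trained scikit-learn pipeline with joblib so a serving API or container can reload it; the model choice and file name are illustrative assumptions.

```python
# Fit a pipeline, persist it with joblib, then reload it the way a serving process would.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=500)).fit(X, y)
joblib.dump(pipeline, "model-v1.joblib")   # artifact to version and ship

# Inside the serving container / API process:
model = joblib.load("model-v1.joblib")
print(model.predict(X[:3]))
```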

Posted 1 week ago

Apply

1.0 years

1 - 4 Lacs

India

On-site

Job Title: AI Developer – Computer Vision & ML Location: Coimbatore Company: Vpm Info Tech Experience: 1-3 years (Freshers with strong project portfolios may apply) Employment Type: Full-Time Job Description: We are seeking a skilled AI Developer with hands-on experience in computer vision and machine learning to develop intelligent surveillance systems. The role involves implementing and optimizing modules such as face detection for attendance, vehicle and number plate recognition, people counting, and motion detection using modern AI frameworks and libraries. Key Responsibilities: Face Detection + Attendance System: Build real-time face recognition systems using OpenCV, face_recognition, dlib, and manage attendance logs with SQLite3. Vehicle Detection + Number Plate Recognition: Develop vehicle detection and number plate OCR using Ultralytics (YOLOv8), EasyOCR, PyTesseract, OpenCV, and store data in MySQL. People Counting System: Implement people counting with tracking using YOLOv8, Deep SORT, Flask, OpenCV, and NumPy. Motion Detection: Create motion detection modules using OpenCV, imutils, and NumPy. Cloud & OCR Integration: Integrate Google Cloud Vision API and custom OCR logic for advanced document/image analysis. Collaborate with the backend team to ensure smooth data flow and API integrations. Optimize models and pipelines for real-time edge deployment (Raspberry Pi, Jetson Nano preferred). Required Skills: Proficiency in Python with libraries like OpenCV, NumPy, Flask, imutils, and threading. Experience with deep learning frameworks: Ultralytics YOLO, face_recognition, dlib. OCR tools: EasyOCR, Tesseract, Google Cloud Vision API. Databases: SQLite3, MySQL. Strong understanding of computer vision algorithms and real-time video processing. Preferred Qualifications: B.E/B.Tech/M.Sc in Computer Science, AI, Data Science, or related field. Knowledge of Docker, Git, and RESTful API design. Job Type: Full-time Pay: ₹15,000.00 - ₹35,000.00 per month Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Work Location: In person
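
A minimal sketch, under assumed defaults, of the frame-differencing motion-detection module described above, using OpenCV and imutils (video source 0 is the default webcam; the contour-area threshold is an arbitrary illustrative value).

```python
# Frame-differencing motion detector: compare each frame against the previous one.
import cv2
import imutils

cap = cv2.VideoCapture(0)   # assumed video source: default webcam
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = imutils.resize(frame, width=640)
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is None:
        prev_gray = gray
        continue

    delta = cv2.absdiff(prev_gray, gray)
    thresh = cv2.dilate(cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1], None, iterations=2)
    contours = imutils.grab_contours(
        cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    )
    motion = [c for c in contours if cv2.contourArea(c) > 500]
    if motion:
        print(f"motion detected in {len(motion)} region(s)")
    prev_gray = gray

cap.release()
```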

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

📊 Data Analyst – Excel, SQL & Python Location: Remote / Bangalore 🏢 About Us There are thousands of importers globally who struggle to source from India seamlessly — and there’s no dedicated software built specifically for the cross-border B2B trade industry. We are changing that. We are empowering importers and exporters with a SaaS-led marketplace that streamlines cataloging, ordering, picking, documentation, procurement, logistics, and payments — all in one platform. 🎯 Our Mission To digitize and optimize cross-border trade by providing an AI-powered platform. We equip importers and exporters with intelligent tools for order management, procurement, and financing — reducing inefficiencies, increasing profitability, and unlocking new growth opportunities. Through AI, automation, data intelligence, and seamless integrations, we’re building a holistic global commerce engine that helps businesses grow beyond borders. 🌍 Our Vision As India marches toward becoming a Viksit Bharat by 2047, exports will play a defining role. We envision a future where businesses of all sizes can seamlessly connect with global buyers — unlocking growth, prosperity, and national pride. 📌 What You’ll Do Work with large data sets in Excel, SQL databases, and Python Perform data cleaning, transformation, and statistical analysis Build predictive analytics models and dashboards Assist with reporting for operations, finance, and marketing teams Present insights and trends that drive business decisions ✅ Requirements Third-year / final-year engineering students from a top-tier college (IIT / NIT or equivalent) can also apply Prior experience working with Microsoft Excel and SQL (projects or internships) Strong understanding of DBMS fundamentals Basic to intermediate Python skills (especially in Pandas, NumPy, Matplotlib/Seaborn) Analytical mindset and attention to detail Self-driven and eager to solve real-world business problems ✨ Bonus (Good to Have) Exposure to tools like Power BI / Tableau Experience with forecasting or time-series models Previous internship in analytics or data science domain Final-year candidates are required; candidates must know advanced Excel and SQL and have worked with large data sets. 📩 How to Apply Send your updated resume along with your work to care@prosessed.com (Subject: Application – Data Analyst Intern)
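
For a concrete flavour of the cleaning and aggregation work described above, here is a small pandas sketch; the file name and columns (order_id, order_date, amount, buyer_country) are hypothetical.

```python
# Clean an orders file and aggregate amounts by month and buyer country.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
orders = (
    orders.drop_duplicates(subset=["order_id"])
          .dropna(subset=["amount"])
          .assign(month=lambda d: d["order_date"].dt.to_period("M"))
)
monthly = orders.groupby(["month", "buyer_country"])["amount"].agg(["sum", "count"])
print(monthly.sort_values("sum", ascending=False).head(10))
```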

Posted 1 week ago

Apply

0.0 - 1.0 years

5 - 6 Lacs

Ahmedabad

On-site

Position - 02 Job Location - Ahmedabad Qualification - Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field; relevant certifications or course completion (Coursera, edX, etc.) will be an advantage Years of Exp - 0 to 1 year About us Bytes Technolab is a full-range web application development company, established in 2011, with an international presence in the USA, Australia, and India. Bytes has exhibited excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services ever since its inception. Roles & responsibilities Support development and fine-tuning of Large Language Models (LLMs) using open-source or proprietary models (e.g., OpenAI, HuggingFace, LLaMA). Build and optimize Computer Vision pipelines for tasks such as object detection, image classification, and OCR. Design and implement data preprocessing pipelines, including handling structured and unstructured data. Assist in training, evaluation, and deployment of ML/DL models in staging or production environments. Write clean, scalable, and well-documented code for research and experimentation purposes. Collaborate with senior data scientists, ML engineers, and product teams on AI projects. Skills required Strong foundation in Neural Networks, Deep Learning, and ML algorithms. Hands-on experience with Python and libraries such as TensorFlow, PyTorch, OpenCV, or HuggingFace Transformers. Familiarity with LLM architectures (e.g., GPT, BERT, LLaMA) and their fine-tuning or inference techniques. Basic understanding of Computer Vision concepts and real-world use cases. Experience working with data pipelines and handling large datasets (e.g., Pandas, NumPy, data loaders). Knowledge of model evaluation techniques and metrics. Good to Have Experience using Hugging Face, LangChain, OpenCV, or YOLO for CV tasks. Familiarity with Prompt Engineering and Retrieval-Augmented Generation (RAG). Understanding of NLP concepts such as tokenization, embeddings, and vector search. Exposure to ML model deployment (Flask, FastAPI, Streamlit, or AWS/GCP/Azure). Participation in ML competitions (e.g., Kaggle) or personal projects on GitHub. Soft Skills Strong analytical and problem-solving skills. Willingness to learn and work in a collaborative team environment. Good communication and documentation habits.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview ARQ supports the global businesses of the Bank with solutions requiring judgment application, sound business understanding and an analytical perspective. Its domain experience in the areas of Financial Research & Analysis, Quantitative Modeling, Risk Management and Prospecting Support provides solutions for revenue enhancement, risk mitigation and cost optimization. The division, comprising highly qualified associates, operates from Mumbai, GIFT City, Gurugram and Hyderabad. Job Description The individual should be capable of running technical processes relating to the execution of models across an enterprise portfolio. This will involve familiarity with the technical infrastructure (specifically GCP and Quartz), coding languages, and the model development and software development lifecycle. In addition, there is an opportunity for the right candidates to support the implementation of new processes into target-state, as well as explore ways to make the processes more efficient and robust. Specifically: Manage model execution, results analysis and reporting related to AMGQS models. The Analyst will also work with the implementation team to ensure that this critical function is well controlled. Responsibilities Write Python and/or PySpark code to automate production processes of several risk and loss measurement statistical models. Examples of model execution production processes are error attribution, scenario shock, sensitivity, result publication and reporting. Leverage skills in quantitative methods to conduct ongoing monitoring of model performance.
Also, possess capabilities in data science and data visualization techniques and tools Identify, analyze, monitor, and present risk factors and metrics to, and integrate with, business partners Proactively solve challenges with process design deficiencies, implementation and remediation efforts Perform operational controls, ensuring consistency and compliance across all functions, including procedures, critical use spreadsheets and tool inventory Assist with enhancing overall governance environment within the Operations space Partner with the IT team to perform system related design assessments, control effectiveness testing, process testing, issue resolution monitoring and supporting the sign-off by management of processes and controls in scope Work with model implementation experts and technology teams to design and integrate Python workflows into existing in-house target-state platform for process execution Requirements : Experience : 3 to 8 years Education: Graduate / Post Graduate from Tier 1 institutes - Bachelor's or master’s degree in mathematics, engineering, physics, statistics, or financial mathematics/engineering Foundational skills* Good understanding in numerical analysis, probability theory, linear algebra, and stochastic analysis Proficiency in Python (numpy, pandas, OOP, unittest) and Latex. Prior experience in git, bitbucket, agile view is a plus Understanding of Credit Risk modelling and processes Integrates seamlessly across complex set of stakeholders, internal partners, external resources Strong problem-solving skills and attention to detail Excellent communication and collaboration abilities Ability to thrive in a fast-paced, dynamic environment and adapt to evolving priorities and requirements. Desired Skills:- Excellent communication and collaboration abilities Work Location: Hyderabad & Mumbai Work Timings: 11am to 8pm IST
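
As a rough illustration of the PySpark automation described in the responsibilities (not Bank of America's actual models, data, or infrastructure), a scenario-shock style calculation might look like this; the column names and shock factor are assumptions:

```python
# Illustrative only: apply a simple scenario shock to a synthetic portfolio in PySpark
# and compare baseline vs. shocked expected loss.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scenario_shock_demo").getOrCreate()

portfolio = spark.createDataFrame(
    [("A1", 1_000_000.0, 0.02, 0.45), ("A2", 500_000.0, 0.05, 0.40)],
    ["account_id", "exposure", "pd", "lgd"],  # probability of default, loss given default
)

SHOCK = 1.25  # assumed 25% upward shock to default probabilities under the scenario

result = (portfolio
          .withColumn("el_base", F.col("exposure") * F.col("pd") * F.col("lgd"))
          .withColumn("pd_shocked", F.least(F.col("pd") * F.lit(SHOCK), F.lit(1.0)))
          .withColumn("el_shocked", F.col("exposure") * F.col("pd_shocked") * F.col("lgd"))
          .withColumn("delta_el", F.col("el_shocked") - F.col("el_base")))

result.select("account_id", "el_base", "el_shocked", "delta_el").show()
```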

Posted 1 week ago

Apply

2.0 years

0 - 0 Lacs

Noida

On-site

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Job Description
The development engineer will be part of a team working on the development of the Illustrator product in our creative suite of products. They will be responsible for developing new features and maintaining existing ones, across all phases of development from early specs and definition to release. They are expected to be hands-on problem solvers, well conversant in analyzing, architecting, and implementing high-quality software.

Requirements
B.Tech. / M.Tech. in Computer Science from a premier institute.
Excellent knowledge of the fundamentals of machine learning and artificial intelligence.
Hands-on experience across the ML lifecycle, from EDA to model deployment.
Hands-on experience with data analysis tools like Jupyter and packages like NumPy, Matplotlib, etc.
Hands-on in writing code that is reliable, maintainable, secure, performance-optimized, multi-platform and world-ready.
Familiarity with state-of-the-art deep learning frameworks, such as TensorFlow, PyTorch, Keras, Caffe, Torch.
Strong programming skills in C/C++ and Python.
Hands-on experience with data synthesis and processing for the purpose of training a model.
Relevant work experience in the fields of computer vision and graphics, etc.
Experience: 2-4 years in ML

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
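
A small, hedged sketch of the EDA-to-baseline-model loop referenced in the requirements; the dataset and model choice are assumptions made purely for illustration:

```python
# Illustrative EDA plus baseline model: inspect the target, then fit and score a model.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

X, y = load_diabetes(return_X_y=True)  # assumed public dataset

# Quick EDA step: look at the target distribution before modelling
plt.hist(y, bins=30)
plt.title("Target distribution")
plt.savefig("target_hist.png")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```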

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

Bareilly

On-site

Position: AI Intern
Location: Bareilly

Key Responsibilities:
Assist in developing AI models for personalized health recommendations, predictive analysis, and customer profiling.
Support the creation and training of chatbots for consultations, feedback, and follow-ups.
Analyze patient data, sales trends, and customer behavior using machine learning techniques.
Work on Natural Language Processing (NLP) for symptom recognition and treatment suggestions.
Help in building AI-powered dashboards for internal reporting and decision-making.
Conduct research on AI trends and their potential application in the Ayurvedic wellness space.

Required Skills:
Strong understanding of Python and libraries like Pandas, NumPy, Scikit-learn.
Exposure to AI/ML concepts like classification, clustering, and recommendation systems.
Familiarity with NLP and basic chatbot tools or APIs (Dialogflow, Rasa, etc.).
Basic knowledge of healthcare data and patient privacy principles.
Strong problem-solving and logical thinking skills.

Preferred Qualifications:
Pursuing or completed B.Tech/BCA/M.Tech/MCA in Computer Science, AI, Data Science, or related fields.
Prior experience or projects in healthtech, AI chatbots, or recommendation systems are a plus.
Working knowledge of tools like Jupyter Notebook, GitHub, and REST APIs.

What You’ll Gain:
Real-world experience applying AI in the Ayurveda and healthcare domain.
Exposure to end-to-end AI project development and deployment.
Mentorship from tech leaders and wellness experts.
Certificate of Completion + chance to convert to a full-time role.

Job Type: Internship
Contract length: 6 months
Pay: ₹5,000.00 - ₹7,000.00 per month
Schedule: Day shift, Fixed shift
Work Location: In person
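
As a toy illustration of the NLP symptom-recognition idea above (not the company's actual data or pipeline), a TF-IDF plus logistic-regression baseline could look like this; the symptom texts and labels are invented:

```python
# Toy sketch: classify short symptom descriptions with TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["burning sensation in stomach after meals",
         "frequent headaches and trouble sleeping",
         "acidity and bloating after spicy food",
         "stress, anxiety and poor sleep at night"]
labels = ["digestive", "stress", "digestive", "stress"]  # invented labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["bloating and acidity after dinner"]))  # expected to lean 'digestive'
```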

Posted 1 week ago

Apply

2.0 years

3 - 5 Lacs

India

On-site

Job Title: AI Engineer
Location: Kolkata, India
Company: GWC (Global We Connect)
Job Type: Full-Time
Experience Level: Mid to Senior Level

Position Summary
We are looking for a talented and proactive AI Engineer to join our growing team in Kolkata. As an AI Engineer at GWC, you will work closely with data scientists, software developers, and product managers to design, develop, and deploy AI/ML models that solve real-world business problems across various domains.

Key Responsibilities
Design, develop, and deploy scalable machine learning and deep learning models.
Collaborate with cross-functional teams to integrate AI solutions into enterprise platforms and applications.
Process, clean, and analyze large data sets to uncover trends and patterns.
Develop and maintain AI pipelines using tools like TensorFlow, PyTorch, Scikit-learn, etc.
Apply NLP, computer vision, or recommendation systems depending on project needs.
Research and implement novel algorithms and techniques.
Monitor and evaluate model performance and retrain as necessary.

Required Skills & Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields.
2–5 years of experience in machine learning, AI, or data science roles.
Strong programming skills in Python; familiarity with R, Java, or C++ is a plus.
Proficiency with ML libraries and frameworks (e.g., TensorFlow, PyTorch, Keras, Scikit-learn).
Experience with data processing tools like Pandas, NumPy, and SQL.
Familiarity with cloud platforms (AWS, Azure, or GCP) and version control systems like Git.
Solid understanding of statistical analysis, data modeling, and algorithm design.
Excellent problem-solving abilities and communication skills.

Preferred Qualifications
Experience with MLOps tools such as MLflow, Airflow, or Kubeflow.
Background in deploying AI models in production environments (REST APIs, microservices).
Exposure to NLP (spaCy, HuggingFace) or computer vision (OpenCV, YOLO, etc.).
Contributions to open-source projects or participation in AI research.

Job Type: Full-time
Pay: ₹300,000.00 - ₹500,000.00 per year
Schedule: Day shift
Work Location: In person
Application Deadline: 31/07/2025
Expected Start Date: 01/08/2025
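
For orientation, a minimal, generic PyTorch training loop of the kind implied by "design, develop, and deploy deep learning models"; the synthetic data, architecture, and hyperparameters are assumptions, not GWC's systems:

```python
# Minimal PyTorch training loop on synthetic binary-classification data.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(512, 20)                  # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()  # synthetic binary labels

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2%}")
```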

Posted 1 week ago

Apply

5.0 years

15 Lacs

Indore

On-site

Job description
An out-of-the-box thinker to build innovative AI/ML models:
1. Understand and analyze requirements for Machine Learning models from Product Owners, Customers, and other stakeholders
2. Analyze and verify data quality and features
3. Design solutions by choosing the right algorithms, features, and hyperparameters
4. Manage the full lifecycle of ML models: data acquisition, feature engineering, model development, training, verification, optimization, deployment, versioning
5. Augment enterprise data with publicly available datasets to enrich model features
6. Create strategies for integrating the Whiz.AI platform with external enterprise data sources like databases, data warehouses, analytical stores, external ML systems/algorithms, Hadoop, and ERP/CRM systems

Qualifications (Technical)
5+ years of experience in implementing Machine Learning and Deep Learning models applied to traditional as well as NLP problems
Machine Learning-based models: ANN, SVM, Logistic Regression, Gradient Boosting
Time series anomaly detection methods, hierarchical or grouped time series forecasting
Knowledge of BERT, LSTMs, RNNs, and HMMs applied to text classification and text generation problems
Understanding of ML and data processing frameworks like TensorFlow or PyTorch, XGBoost, SciPy, Scikit-Learn, and Apache Spark SQL; experience handling big data and databases
Excellent knowledge of Python programming, NumPy, Pandas, and processing JSON, XML, and CSV files

Qualifications (Non-Technical)
Good communication and analytical skills
Self-driven with a strong sense of ownership and urgency

Preferred Qualifications
Preference will be given to hands-on Deep Learning and NLP application experience
Knowledge of Analytical/OLAP/Columnar, Hadoop ecosystem and NoSQL databases
Deep Learning, GANs, Reinforcement Learning
R programming, Matlab
Knowledge of Life Sciences or Pharmaceutical industry datasets

Interested candidates can share their resume at shwetachouhan@valere.io

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Benefits: Health insurance, Paid sick time, Paid time off, Provident Fund
Education: Bachelor's (Preferred)
Experience: software development: 1 year (Preferred); HTML5: 1 year (Preferred); total work: 5 years (Preferred)
Work Location: In person
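
A simple, illustrative take on the time-series anomaly detection requirement (a rolling z-score check); the synthetic series and the 3-sigma threshold are assumptions, not Whiz.AI's methodology:

```python
# Flag points that deviate strongly from a rolling mean of a synthetic daily series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
ts = pd.Series(rng.normal(100, 5, 200),
               index=pd.date_range("2024-01-01", periods=200, freq="D"))
ts.iloc[120] += 40  # inject an obvious spike

rolling_mean = ts.rolling(30, min_periods=10).mean()
rolling_std = ts.rolling(30, min_periods=10).std()
z_score = (ts - rolling_mean) / rolling_std

anomalies = ts[z_score.abs() > 3]  # points more than 3 sigmas from the rolling mean
print(anomalies)
```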

Posted 1 week ago

Apply

3.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description:

You'll be responsible for:
Design, develop and implement cutting-edge AI/ML solutions, including Large Language Models (LLMs) and Generative AI applications
Lead projects end-to-end while mentoring team members in AI-ML, including traditional ML and emerging AI technologies
Drive innovation in AI agent development and orchestration for automated decision-making systems
Establish best practices for responsible AI development and deployment

What you'd have:
AI-ML Engineer or Data Scientist with 3 - 11 years of relevant experience
Strong expertise in modern AI frameworks and tools: ML/DL frameworks (TensorFlow, PyTorch, Keras, Sklearn); LLM frameworks (LangChain, LlamaIndex, CrewAI, AutoGen); vector databases (Pinecone, Weaviate, ChromaDB)

Good to have hands-on experience with:
Generative AI and foundation models
Prompt engineering and LLM fine-tuning
AI agent development and orchestration
RAG (Retrieval-Augmented Generation) systems

Proven experience in data preparation, including advanced data preprocessing techniques, feature engineering, and data quality assessment and improvement
Proficiency in data exploration tools (Pandas, NumPy, Polars)
Strong programming skills in Python; R is a plus
Deep understanding of statistics and probability, ML/DL algorithms, and AI governance and responsible AI practices
Experience with large-scale data processing, model deployment and MLOps, and cost optimization for AI systems
Track record of 3-5 successful AI/ML projects in production
Excellent communication skills and team leadership ability
Continuous learning mindset for emerging AI technologies

Skills: Statistics and Mathematics; Classical Machine Learning; Deep Learning and Neural Networks; Generative AI and LLMs; Agentic AI Development; MLOps and Production Engineering; Data Engineering; Model Optimization and Tuning; AI Governance and Ethics; Team Leadership

Why join us?
Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
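
As a bare-bones sketch of the retrieval step behind the RAG systems mentioned above, TF-IDF similarity can stand in for real embeddings; everything here is illustrative, and a production system would use an embedding model, a vector store such as those listed, and an actual LLM:

```python
# Illustrative retrieval step for RAG: score documents against a query, build a prompt.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["Tanla provides CPaaS and messaging platforms.",
        "RAG grounds LLM answers in retrieved documents.",
        "Vector databases store embeddings for similarity search."]
query = "How does retrieval-augmented generation work?"

vec = TfidfVectorizer().fit(docs + [query])
doc_vecs = vec.transform(docs).toarray()
q_vec = vec.transform([query]).toarray()[0]

# Cosine similarity between the query and every document
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
best = int(np.argmax(sims))

prompt = f"Context: {docs[best]}\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)  # this prompt would then be passed to an LLM for generation
```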

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About Firstsource
Firstsource is a specialized global business process management partner. We provide transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, and the Philippines, we act as a trusted growth partner for leading global brands, including several Fortune 500 and FTSE 100 companies.

Key Responsibilities
Perform data analysis to uncover patterns, trends, and insights to support decision-making.
Build, validate, and optimize machine learning models for business use cases in EdTech, Healthcare, BFS and Media.
Develop scalable ETL pipelines to preprocess and manage large datasets.
Communicate actionable insights through visualizations and reports to stakeholders.
Collaborate with engineering teams to implement and deploy models in production (good to have).

Core Skills
Data Analysis: Expert in Python (Pandas, NumPy), SQL, R, and exploratory data analysis (EDA).
Machine Learning: Skilled in Scikit-learn, TensorFlow, PyTorch, and XGBoost for predictive modeling.
Statistics: Strong understanding of regression, classification, hypothesis testing, and time-series analysis.
Visualization: Proficient in Tableau, Power BI, Matplotlib, and Seaborn.
ML Engineering (Good to Have): Experience with model deployment using AWS SageMaker, GCP AI, or Docker.
Big Data (Good to Have): Familiarity with Spark, Hadoop, and distributed computing frameworks.

⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
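
A quick sketch of the hypothesis-testing skill named in the statistics requirement (Welch's two-sample t-test); the two synthetic samples are assumptions for illustration only:

```python
# Compare the means of two synthetic groups with a Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=52.0, scale=8.0, size=200)  # e.g. metric under variant A (synthetic)
group_b = rng.normal(loc=50.0, scale=8.0, size=200)  # e.g. metric under variant B (synthetic)

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the group means differ at the 5% level.")
else:
    print("Fail to reject the null at the 5% level.")
```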

Posted 1 week ago

Apply