0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Type: Full Time
Experience: 0 to 6 months
Type: Face to Face, Written Test
Last Date: 21-Aug-2025
Posted on: 22-July-2025
Salary per month: Rs. 12,000 - Rs. 15,000
No. of vacancies: 6
Passout Year: 2024-2025
Highest qualification mark: 80%
Education: BE/B.Tech
Branch: BE/B.Tech - Computer Science & Engineering (CSE), Electronics & Communication Engineering (ECE), Information Science/Technology (IS/IT)
Sublocation: Madhapur
Skills: Microsoft (MS) SQL, Artificial Intelligence, Python, Machine Learning
Responsibilities:
Assist in developing Python-based applications and scripts supporting AI/ML workflows.
Support model building and data analysis using libraries like NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch.
Clean and preprocess datasets for training and validation using best practices in data wrangling.
Work under senior developers/data scientists to deploy, test, and validate ML models.
Contribute to automation scripts and tools that optimize data pipelines and ML operations.
Document code, algorithms, and workflows for internal and external use.
Stay updated with emerging trends in Python development and machine learning frameworks.
Participate in code reviews and team meetings, ensuring continuous learning and improvement.
Good knowledge of SQL.
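The responsibilities above include cleaning and preprocessing datasets with Pandas before model training. A minimal sketch of that kind of wrangling step; the column names and toy data are hypothetical, not taken from the posting:

```python
import numpy as np
import pandas as pd

def clean_dataset(df: pd.DataFrame) -> pd.DataFrame:
    # Drop exact duplicate rows, then fill numeric gaps with the column median
    df = df.drop_duplicates().copy()
    num_cols = df.select_dtypes(include=np.number).columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    # Min-max scale numeric features to [0, 1] for model training
    col_min = df[num_cols].min()
    col_rng = (df[num_cols].max() - col_min).replace(0, 1)
    df[num_cols] = (df[num_cols] - col_min) / col_rng
    return df

# Hypothetical raw data: one duplicate row, one missing value
raw = pd.DataFrame({"age": [25.0, 25.0, None, 40.0],
                    "score": [0.5, 0.5, 0.9, 1.3]})
clean = clean_dataset(raw)
```

Real pipelines would add domain-specific validation and a train/test-aware scaler (e.g. scikit-learn's `MinMaxScaler` fitted on the training split only).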
Posted 1 week ago
0 years
0 Lacs
Kozhikode, Kerala, India
On-site
We are seeking a Data Collection Specialist to join our team. The ideal candidate will be responsible for gathering, organizing, and validating data from various sources, ensuring its accuracy and relevance to support our company’s data-driven decisions.
Responsibilities:
Collect data from various sources, including websites, databases, APIs, and other relevant platforms.
Review collected data for accuracy and completeness, identifying and rectifying any errors or inconsistencies.
Accurately input data into spreadsheets, databases, or other software systems.
Clean and preprocess data to eliminate duplicate records and irrelevant information, and ensure data quality.
Organize and categorize data into structured formats to facilitate analysis and reporting.
Identify and explore new data sources to enhance the company’s data collection capabilities.
Maintain data integrity by conducting regular quality checks and resolving discrepancies.
Work closely with cross-functional teams to understand their data needs and provide support in data collection and analysis.
Requirements:
High school diploma or equivalent (IT background candidates preferred).
Proven experience in data collection, data entry, or related roles is a plus.
Proficiency in data management and spreadsheet software (e.g., Excel, Google Sheets).
Strong attention to detail and the ability to identify data discrepancies.
Basic knowledge of data validation and cleaning techniques.
Good communication skills and the ability to work collaboratively with team members.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
The Specialist - Software Development (Artificial Intelligence) leads the design, development, and implementation of AI and machine learning solutions that address complex business challenges. This role requires expertise in AI algorithms, model development, and software engineering best practices. The individual will work closely with cross-functional teams to deliver intelligent systems that enhance business operations and decision-making.
Key Responsibilities:
• AI Solution Design & Development:
o Lead the development of AI-driven applications and platforms using machine learning, deep learning, and NLP techniques.
o Design, train, and optimize machine learning models using frameworks such as TensorFlow, PyTorch, Keras, or Scikit-learn.
o Implement advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision.
• Software Development & Integration:
o Develop scalable AI models and integrate them into software applications using languages such as Python, R, or Java.
o Build APIs and microservices to enable the deployment of AI models in cloud environments or on-premise systems.
o Ensure that AI models are integrated with back-end systems, databases, and other business applications.
• Data Management & Preprocessing:
o Collaborate with data scientists and data engineers to gather, preprocess, and analyze large datasets.
o Develop data pipelines to ensure the continuous availability of clean, structured data for model training and evaluation.
o Implement feature engineering techniques to enhance the accuracy and performance of machine learning models.
• AI Model Evaluation & Optimization:
o Regularly evaluate AI models using performance metrics (e.g., precision, recall, F1 score) and fine-tune them to improve accuracy.
o Perform hyperparameter tuning and cross-validation to ensure robust model performance.
o Implement methods for model explainability and transparency (e.g., LIME, SHAP) to ensure trustworthiness in AI decisions.
• AI Strategy & Leadership:
o Collaborate with business stakeholders to identify opportunities for AI adoption and develop project roadmaps.
o Provide technical leadership and mentorship to junior AI developers and data scientists, ensuring adherence to best practices in AI development.
o Stay current with AI trends and research, introducing innovative techniques and tools to the team.
• Security & Ethical Considerations:
o Ensure AI models comply with ethical guidelines, including fairness, accountability, and transparency.
o Implement security measures to protect sensitive data and AI models from vulnerabilities and attacks.
o Monitor the performance of AI systems in production, ensuring they operate within ethical and legal boundaries.
• Collaboration & Cross-Functional Support:
o Collaborate with DevOps teams to ensure AI models are deployed efficiently in production environments.
o Work closely with product managers, business analysts, and stakeholders to understand requirements and align AI solutions with business needs.
o Participate in Agile ceremonies, including sprint planning and retrospectives, to ensure timely delivery of AI projects.
• Continuous Improvement & Research:
o Conduct research and stay updated with the latest developments in AI and machine learning technologies.
o Evaluate new tools, libraries, and methodologies to improve the efficiency and accuracy of AI model development.
o Drive continuous improvement initiatives to enhance the scalability and robustness of AI systems.
Required Skills & Qualifications:
• Bachelor’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
• 5+ years of experience in software development with a strong focus on AI and machine learning.
• Expertise in AI frameworks and libraries (e.g., TensorFlow, PyTorch, Keras, Scikit-learn).
• Proficiency in programming languages such as Python, R, or Java, and familiarity with AI-related tools (e.g., Jupyter Notebooks, MLflow).
• Strong knowledge of data science and machine learning algorithms, including regression, classification, clustering, and deep learning models.
• Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) for deploying AI models and managing data pipelines.
• Strong understanding of data structures, databases, and large-scale data processing technologies (e.g., Hadoop, Spark).
• Familiarity with Agile development methodologies and version control systems (Git).
Preferred Qualifications:
• Master’s or PhD in Artificial Intelligence, Machine Learning, or a related field.
• Experience with natural language processing (NLP) techniques (e.g., BERT, GPT, LSTM, Transformer models).
• Knowledge of computer vision technologies (e.g., CNNs, OpenCV).
• Familiarity with edge computing and deploying AI models on IoT devices.
• Certification in AI/ML or cloud platforms (e.g., AWS Certified Machine Learning, Google Professional Data Engineer).
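The evaluation and tuning responsibilities above (precision/recall/F1, hyperparameter tuning, cross-validation) can be sketched with scikit-learn. The model, parameter grid, and synthetic data below are illustrative choices, not anything specified in the posting:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary classification data standing in for a real business dataset
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 5-fold cross-validated search over the regularization strength C
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="f1",
)
search.fit(X_tr, y_tr)

# Report the standard metrics on the held-out test split
pred = search.predict(X_te)
metrics = {
    "precision": precision_score(y_te, pred),
    "recall": recall_score(y_te, pred),
    "f1": f1_score(y_te, pred),
}
```

The same pattern extends to deep learning models, where `GridSearchCV` would typically be replaced by a dedicated tuner (e.g. Optuna or Keras Tuner).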
Posted 1 week ago
1.0 - 3.0 years
2 - 4 Lacs
India
On-site
Job Summary:
As an AI Analyst, you will be responsible for analysing complex data sets, developing algorithms, and implementing AI models to solve business problems and improve decision-making processes. You will work closely with cross-functional teams to identify opportunities for AI applications, interpret results, and communicate findings to stakeholders.
Key Responsibilities:
Data Analysis: Collect, clean, and preprocess large data sets from various sources. Conduct exploratory data analysis to identify trends, patterns, and anomalies.
Model Development: Develop, train, and validate machine learning models tailored to specific business needs. Utilize algorithms and statistical techniques to enhance model performance.
Collaboration: Work closely with client teams to define AI project objectives and scope. Collaborate on the integration of AI models into existing systems and workflows.
Reporting and Visualization: Create dashboards and visualizations to communicate insights derived from data analysis and model results. Present findings and recommendations to non-technical stakeholders in a clear and effective manner.
Continuous Improvement: Stay up-to-date with the latest advancements in AI and machine learning technologies. Identify opportunities for process improvements and contribute to the development of best practices.
Qualifications:
1. Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. A Master’s degree is a plus.
2. Experience: 1 to 3 years (AI work experience in real-time projects is mandatory; intern experience will not be considered).
3. Proven experience in data analysis, statistical modeling, and machine learning.
4. Proficiency in programming languages such as Python and experience with frameworks like TensorFlow.
5. Strong understanding of algorithms, data structures, and software engineering principles.
6. Experience with data visualization tools (e.g., Tableau, Power BI, Matplotlib).
7. Ability to communicate complex technical concepts to non-technical audiences effectively.
8. Strong analytical and problem-solving skills.
9. Familiarity with cloud platforms and exposure to AWS SageMaker is a key requirement.
Preferred Skills:
1. Experience with natural language processing (NLP), computer vision, and reinforcement learning (RL).
2. Understanding of ethical considerations in AI and machine learning.
3. Strong understanding of deep learning principles (CNNs, loss functions, optimization) and expertise in computer vision tasks like object detection and image processing. Proficiency in Python programming and relevant deep learning frameworks such as PyTorch or TensorFlow, along with experience in utilizing existing YOLO implementations and adapting them for specific applications.
Job Type: Full-time
Pay: ₹200,000.00 - ₹400,000.00 per year
Benefits: Health insurance
Work Location: In person
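The exploratory data analysis duties above (spotting trends, patterns, and anomalies) can be illustrated with a small Pandas sketch. The data, column names, and the 1.0-standard-deviation cut-off are all toy choices for illustration, not from the posting:

```python
import pandas as pd

# Hypothetical revenue records across two regions
sales = pd.DataFrame({
    "region": ["N", "N", "S", "S", "S"],
    "revenue": [100.0, 120.0, 90.0, 95.0, 400.0],
})

# Per-segment summary statistics: a first EDA step
summary = sales.groupby("region")["revenue"].agg(["mean", "std", "count"])

# Flag points far from their own region's mean (illustrative 1.0-sigma cut-off;
# real anomaly detection would use a principled threshold or a dedicated method)
grp = sales.groupby("region")["revenue"]
z = (sales["revenue"] - grp.transform("mean")) / grp.transform("std")
sales["anomaly"] = z.abs() > 1.0
```

The `summary` table and the `anomaly` flag are exactly the kind of outputs that feed the dashboards and stakeholder reports mentioned in the posting.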
Posted 1 week ago
0 years
0 Lacs
India
Remote
Data Science Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship
About WebBoost Solutions by UM
WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.
Responsibilities
✅ Collect, preprocess, and analyze large datasets.
✅ Develop predictive models and machine learning algorithms.
✅ Perform exploratory data analysis (EDA) to extract meaningful insights.
✅ Create data visualizations and dashboards for effective communication of findings.
✅ Collaborate with cross-functional teams to deliver data-driven solutions.
Requirements
🎓 Enrolled in or a graduate of a program in Data Science, Computer Science, Statistics, or a related field.
🐍 Proficiency in Python for data analysis and modeling.
🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred).
📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib).
🧐 Strong analytical and problem-solving skills.
🗣 Excellent communication and teamwork abilities.
Stipend & Benefits
💰 Stipend: ₹7,500 - ₹15,000 (performance-based).
✔ Hands-on experience in data science projects.
✔ Certificate of Internship & Letter of Recommendation.
✔ Opportunity to build a strong portfolio of data science models and applications.
✔ Potential for full-time employment based on performance.
How to Apply
📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application."
📅 Deadline: 25th July 2025
Equal Opportunity
WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Firstsource
Firstsource is a specialized global business process management partner. We provide transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, and the Philippines, we act as a trusted growth partner for leading global brands, including several Fortune 500 and FTSE 100 companies.
Key Responsibilities
Perform data analysis to uncover patterns, trends, and insights to support decision-making.
Build, validate, and optimize machine learning models for business use cases in EdTech, Healthcare, BFS, and Media.
Develop scalable ETL pipelines to preprocess and manage large datasets.
Communicate actionable insights through visualizations and reports to stakeholders.
Collaborate with engineering teams to implement and deploy models in production (good to have).
Core Skills
Data Analysis: Expert in Python (Pandas, NumPy), SQL, R, and exploratory data analysis (EDA).
Machine Learning: Skilled in Scikit-learn, TensorFlow, PyTorch, and XGBoost for predictive modeling.
Statistics: Strong understanding of regression, classification, hypothesis testing, and time-series analysis.
Visualization: Proficient in Tableau, Power BI, Matplotlib, and Seaborn.
ML Engineering (Good to Have): Experience with model deployment using AWS SageMaker, GCP AI, or Docker.
Big Data (Good to Have): Familiarity with Spark, Hadoop, and distributed computing frameworks.
⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a Computer Vision Intern to assist in building and refining our image recognition pipeline. The role will start with dataset management: image collection, annotation validation, dataset cleaning, and preprocessing. Once the foundational data work is complete, you’ll get hands-on exposure to model training, augmentation, and evaluation, contributing directly to our production-ready pipeline.
Responsibilities
Organize, clean, and preprocess large-scale retail image datasets.
Validate and manage annotations (bounding boxes, class labels, segmentation masks if applicable) using tools like Roboflow, CVAT, or LabelImg.
Apply augmentation techniques and prepare datasets for training.
Support training of YOLOv5/YOLOv8-based models on custom datasets.
Run model evaluations (precision, recall, F1 score, SKU-level accuracy).
Collaborate with the product team to improve real-world inference quality.
Document the dataset pipeline and share insights for improving data quality.
Who we're looking for:
Must have:
Basic understanding of computer vision concepts (object detection, classification).
Familiarity with Python (OpenCV, Pandas, NumPy).
Knowledge of image annotation tools (Roboflow, LabelImg, CVAT, etc.).
Ability to manage and organize large datasets.
Good to have:
Experience with YOLOv5 or YOLOv8 (training, inference, fine-tuning).
Exposure to image augmentation techniques (Albumentations, etc.).
Understanding of retail/commercial shelf datasets or product detection problems.
Previous internship or project experience in computer vision is a plus.
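Detection evaluation of the kind described above starts from intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below is a generic illustration with toy coordinates; a real YOLO evaluation would use the framework's own validation tooling rather than this greedy matcher:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, thresh=0.5):
    """Greedily match predictions to ground truth at an IoU threshold,
    returning (precision, recall) for a single image."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best_i is not None and best >= thresh:
            tp += 1
            matched.add(best_i)
    precision = tp / max(len(preds), 1)
    recall = tp / max(len(gts), 1)
    return precision, recall
```

For example, one true positive and one spurious box against a single ground-truth box gives precision 0.5 and recall 1.0.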
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description:
The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India’s epicenter of innovation, has been selected as home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
At Takeda’s ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.
Objectives: As a Salesforce Data Cloud, Einstein, and Agentforce Software Engineer, you’ll combine technical skills in Salesforce Data Cloud and Agentforce (agentic AI) with business acumen and development qualities to build solutions that enable transformative capabilities within Takeda.
You will support the Platform Owner in onboarding and facilitating the effective utilization of these platforms. You will play a key role in developing and building GenAI use cases leveraging LLMs (e.g., OpenAI GPT-4/ChatGPT), Salesforce Data Cloud, Einstein, and Agentforce.
Key Responsibilities:
Act as the technical developer for building agentic UI experiences for our commercial and medical users.
Clarify technical details in backlog items; handle technical design and build of backlog items.
Set standards and best practices within the capability, and encourage adherence to them.
Proactively address risks and issues together with the team, and raise them to program/portfolio oversight as required.
Ensure an appropriate level of technical documentation.
Cross-pollinate applicable standards and practices between teams.
Discuss business requirements with stakeholders.
Proactively suggest new features or tools.
Align and communicate with other service lines on overarching practices and approaches.
Ensure reusability of features within business functions.
Build and optimize generative AI applications using models like GPT, LLaMA, Gemini, OpenAI, or custom models.
Create effective prompts and fine-tune models for specific domains, ensuring high accuracy and relevance.
Develop APIs and microservices, and integrate generative AI models into production environments.
Collect, clean, and preprocess datasets for model training and evaluation.
Conduct experiments with various model architectures, hyperparameters, and training strategies to enhance performance.
Work closely with product managers, data scientists, and engineers to define requirements and deliver AI-powered solutions.
Implement AI governance, bias detection, and responsible AI practices in accordance with ethical and regulatory standards.
Keep up with the latest research and advancements in generative AI, NLP, and multimodal AI technologies.
Understand the capabilities of Salesforce Agentforce and Data Cloud.
EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
Bachelor’s degree in computer science or a related study, or equivalent experience.
3+ years of relevant professional experience working with native UI/UX capabilities (React.js, HTML, CSS).
Strong understanding of UI frameworks (Bootstrap, Material UI, Semantic UI).
Experience in Salesforce Data Cloud and Agentforce.
Proficiency in Python and TensorFlow, PyTorch, or similar ML frameworks.
Experience with Large Language Models (LLMs), transformers, and generative architectures.
Knowledge of prompt engineering and fine-tuning techniques.
Familiarity with APIs, REST, and microservice development.
Experience in Agile development using methods like Scrum and/or Kanban.
Excellent oral and written communication skills, business acumen, and enterprise knowledge.
Strong experience in design or in implementing solutions or products, preferably with experience in quality improvement.
Understands design thinking and can explain and convince stakeholders.
Understands pharma, with experience working in the AI space with pharma customers.
Ability to work with virtual/agile teams in different locations, aligning and adapting to different work, culture, and communication styles.
Proficiency in English in both verbal and written communication is a must.
WHAT TAKEDA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.
BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
Competitive salary + performance annual bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health & wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
3 days of leave every year for voluntary service, in addition to humanitarian leave
Broad variety of learning platforms
Diversity, equity, and inclusion programs
Reimbursements: home internet & mobile phone
Employee referral program
Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)
ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
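The prompt-engineering responsibility in this posting (crafting effective, domain-specific prompts for LLMs) often boils down to assembling structured few-shot prompts. A generic sketch; the sentiment task and the worked examples are hypothetical, not from the posting:

```python
def few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    # The trailing bare "Output:" cues the model to complete the final answer
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of the drug review as positive or negative.",
    [("Relieved my symptoms within a day.", "positive"),
     ("Side effects were worse than the illness.", "negative")],
    "No improvement after two weeks.",
)
```

The resulting string would be sent as the user message to whichever LLM API the platform exposes; fine-tuning replaces the in-context examples with training data in the same input/output shape.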
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for a Data Science Engineer to join our team full-time in India.
We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.
About the role:
As a Data Science Engineer, become a part of a cross-functional development team engineering the experiences of tomorrow. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.
Responsibilities:
Collaborate with engineers, data scientists, and business analysts to understand requirements, refine models, and integrate LLMs into AI solutions.
Incorporate RLHF and advanced techniques for tax-specific AI outputs.
Embed generative AI solutions into consolidation, reconciliation, and reporting processes.
Leverage LLMs to interpret unstructured tax documentation.
Develop and implement deep learning algorithms for AI solutions.
Stay updated with recent trends in GenAI and apply the latest research and techniques to projects.
Preprocess raw data, including text normalization, tokenization, and other techniques, to make it suitable for use with NLP models.
Set up and train large language models and other state-of-the-art neural networks.
Conduct thorough testing and validation to ensure accuracy and reliability of model implementations.
Perform statistical analysis of results and optimize model performance for various computational environments, including cloud and edge computing platforms.
Explore and propose innovative AI use cases to enhance tax functions.
Partner with tax, finance, and IT teams to integrate AI workflows.
Collaborate with legal teams to meet regulatory standards for tax data.
Perform model audits to identify and mitigate risks.
Monitor and optimize generative models for performance and scalability.
Requirements:
Solid understanding of object-oriented design patterns, concurrency/multithreading, and scalable AI and GenAI model deployment.
Strong programming skills in Python, PyTorch, TensorFlow, and related libraries.
Proficiency in RegEx, spaCy, NLTK, and NLP techniques for text representation and semantic extraction.
Hands-on experience in developing, training, and fine-tuning LLMs and AI models.
Practical understanding and experience in implementing techniques like CNNs, RNNs, GANs, RAG, LangChain, and Transformers.
Expertise in prompt engineering techniques and various vector databases.
Familiarity with the Azure cloud computing platform.
Experience with Docker, Kubernetes, and CI/CD pipelines.
Experience with deep learning, computer vision, CNNs, RNNs, and LSTMs.
Experience with vector databases (Milvus, Postgres).
What's in it for you?
Strong community: Work alongside top professionals in a friendly, open-door environment.
Growth focus: Take on large-scale projects with a global impact and expand your expertise.
Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications.
Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies.
Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations.
About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress.
India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.
Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.
Explore, empower, engineer with Ciklum!
Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
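The preprocessing responsibility above (text normalization and tokenization before feeding documents to NLP models) can be as simple as the regex-based sketch below; production pipelines would typically use spaCy or NLTK instead, and the sample sentence is invented:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace punctuation with spaces, collapse whitespace
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text: str) -> list:
    # Whitespace tokenization over normalized text
    return normalize(text).split()

tokens = tokenize("The taxpayer's Form 1040, filed on 15-Apr!")
```

Note that naive punctuation stripping splits contractions ("taxpayer's" becomes two tokens) and hyphenated dates; a spaCy tokenizer or a subword tokenizer from the transformers library handles these cases properly.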
Posted 1 week ago
1.0 - 3.0 years
5 - 6 Lacs
Mohali
On-site
What You Need for this Position:
Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
Proven experience (1-3 years) in machine learning, data science, or AI roles.
Proficiency in programming languages such as Python, R, or Java.
Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
Strong understanding of algorithms, data structures, and software design principles.
Familiarity with cloud platforms (e.g., AWS, Azure) and big data technologies (e.g., Hadoop, Spark).
Excellent problem-solving skills and analytical thinking.
Strong communication and collaboration skills.
Ability to work methodically and meet deadlines.
What You Will Be Doing:
Develop and implement machine learning models and algorithms for various applications.
Collaborate with cross-functional teams to understand project requirements and deliver AI solutions.
Preprocess and analyze large datasets to extract meaningful insights.
Design and conduct experiments to evaluate model performance and fine-tune algorithms.
Deploy machine learning models to production and ensure scalability and reliability.
Stay updated with the latest advancements in AI and machine learning technologies.
Document model development processes and maintain comprehensive project documentation.
Participate in code reviews and provide constructive feedback to team members.
Contribute to the continuous improvement of our AI/ML capabilities and best practices.
Top Reasons to Work with Us:
Join a fast-paced team of like-minded individuals who share the same passion as you, and with whom you'll tackle new challenges every day.
Work alongside an exceptionally talented and intellectual team, gaining exposure to new concepts and technologies.
Enjoy a friendly and high-growth work environment that fosters learning and development.
Competitive compensation package based on experience and skill.
Job Type: Full-time
Pay: ₹500,000.00 - ₹600,000.00 per year
Schedule: Day shift, fixed shift, morning shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: total work: 3 years (Required)
Work Location: In person
Posted 1 week ago
0 years
0 - 1 Lacs
Gāndhīnagar
On-site
FinDocGPT Internship Opportunity – AI/ML & GenAI Developer Intern
Company: ArgyleEnigma Tech Labs
Product: FinDocGPT – India’s first AI-powered assistant that decodes complex financial documents in simple, regional language
Stipend: ₹8,000 – ₹12,000/month
Duration: 6 months
Start Date: Immediate
About Us
At FinDocGPT, we are building a revolutionary product that empowers millions of Indians to understand financial documents like health insurance, loans, and mutual fund T&Cs in their own language, using cutting-edge AI/ML and GenAI technologies. We’re backed by Google for Startups, supported by the Reserve Bank Innovation Hub, and on a mission to bridge the financial literacy gap in India.
Intern Role – AI/ML & Generative AI Developer
We are looking for highly motivated interns who want to gain hands-on experience in deploying GenAI models, NLP pipelines, and ML-based document processing.
Responsibilities
Work on document classification, NER, and summarization using LLMs like LLaMA, Mistral, Groq, and open-source models.
Fine-tune models or use APIs like LangChain, HuggingFace, OpenAI, and Google Vertex AI.
Preprocess, clean, and structure financial documents (PDFs, scans, emails, etc.).
Implement prompt engineering & RAG-based workflows.
Collaborate with product & design to build smart, user-friendly interfaces.
Requirements
Strong understanding of Python, NLP, and basic ML concepts.
Familiarity with transformer architectures such as BERT, T5, or GPT.
Experience (even academic) with HuggingFace, LangChain, or LLM APIs.
Bonus: exposure to Google Cloud, AWS, or Docker.
Hunger to learn, experiment fast, and make real impact.
What You’ll Gain
Direct mentorship from industry experts (ex-Morgan Stanley, PIMCO, Google).
Real-world exposure to India-first AI use cases.
Opportunity to convert to a full-time role based on performance.
Work that will directly impact financial inclusion in India.
How to Apply?
Send your CV, GitHub/portfolio, and 2–3 lines on why you want this role to info@arglyeenigma.com or https://tinyurl.com/hr-aetl with the subject “Internship Application – FinDocGPT”.
Job Type: Internship
Contract length: 6 months
Pay: ₹8,000.00 - ₹12,000.00 per month
Ability to commute/relocate: Gandhinagar, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
Expected Start Date: 01/08/2025
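One concrete piece of the document-processing work described above is splitting extracted text into overlapping chunks before embedding and indexing them for RAG. Here is a minimal, library-free sketch; the word-based chunking and the sizes are illustrative assumptions (real pipelines usually chunk by tokens and tune these values):

```python
# Hypothetical RAG preprocessing step: split extracted document text into
# overlapping word chunks. chunk_size/overlap values are invented for
# illustration, not taken from the posting.

def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into word-based chunks with a fixed overlap."""
    words = text.split()
    if not words:
        return []
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

sample = ("word " * 100).strip()  # stand-in for text extracted from a policy PDF
chunks = chunk_text(sample, chunk_size=40, overlap=10)
print(len(chunks))
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.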
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
India
On-site
Job Title: ML Engineer / Data Scientist
Duration: 12 months
Location: PAN India
Timings: Full time (as per company timings)
Notice Period: Within 15 days, or immediate joiner
Experience: 0-2 years
About The Job
We are seeking a highly motivated and experienced ML Engineer / Data Scientist to join our growing ML/GenAI team. You will play a key role in designing and developing ML applications by evaluating, training, and fine-tuning models, and a crucial role in developing GenAI-based solutions for our customers. As a senior member of the team, you will take ownership of projects, collaborating with engineers and stakeholders to ensure successful project delivery.
What We're Looking For
At least 1-2 years of experience designing and building AI applications for customers and deploying them into production.
At least 1-2 years of software engineering experience building secure, scalable, and performant applications for customers.
Experience with document extraction using AI, conversational AI, vision AI, NLP, or GenAI.
Design, develop, and operationalize existing ML models by fine-tuning and personalizing them.
Evaluate machine learning models and perform the necessary tuning.
Develop prompts that instruct the LLM to generate relevant and accurate responses.
Collaborate with data scientists and engineers to analyze and preprocess datasets for prompt development, including data cleaning, transformation, and augmentation.
Conduct thorough analysis to evaluate LLM responses, and iteratively modify prompts to improve LLM performance.
Hands-on customer experience with RAG solutions or fine-tuning of LLM models.
Build and deploy scalable machine learning pipelines on GCP or an equivalent cloud platform, involving data warehouses, machine learning platforms, dashboards, or CRM tools.
Experience with the end-to-end workflow, including data cleaning, exploratory data analysis, handling outliers and class imbalance, analyzing data distributions (univariate, bivariate, multivariate), transforming numerical and categorical data into features, feature selection, model selection, model training, and deployment.
Proven experience building and deploying machine learning models in production environments for real-life applications.
Good understanding of natural language processing, computer vision, or other deep learning techniques.
Expertise in Python, NumPy, Pandas, and various ML libraries (e.g., XGBoost, TensorFlow, PyTorch, Scikit-learn, LangChain).
Familiarity with Google Cloud or another cloud platform and its machine learning services.
Excellent communication, collaboration, and problem-solving skills.
Good to Have
Google Cloud Professional Machine Learning Engineer or TensorFlow Developer certification, or equivalent.
Experience working with one or more public cloud platforms, namely GCP, AWS, or Azure.
Experience with Amazon Lex, Google Dialogflow CX, or Microsoft Copilot Studio for CCAI agent workflows.
Experience with AutoML and vision techniques.
Master’s degree in statistics, machine learning, or a related field.
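The end-to-end workflow listed above includes handling outliers and transforming numerical data into features. A stdlib-only sketch of two of those steps, using the conventional 1.5×IQR clipping rule, follows; pandas and scikit-learn would be the usual tools, and the sample values are invented:

```python
# Illustrative preprocessing sketch: clip outliers with the IQR rule, then
# standardize a numeric feature to zero mean / unit variance.
import statistics

def iqr_clip(values: list[float], k: float = 1.5) -> list[float]:
    """Clip values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [min(max(v, lo), hi) for v in values]

def standardize(values: list[float]) -> list[float]:
    """Zero-mean, unit-variance scaling."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

raw = [10.0, 12.0, 11.0, 13.0, 12.0, 95.0]  # 95.0 is an obvious outlier
clipped = iqr_clip(raw)
scaled = standardize(clipped)
print(max(clipped))
```

Clipping before standardization stops a single extreme value from dominating the mean and spread used for scaling.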
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
AI Engineer Position: 1 Job Title: AI Engineer (Multimodal RAG, Vector Database, and LLM Implementation) Experience Level: Mid to Senior-Level (5-7 Years) Job Overview: We are seeking a highly skilled AI Engineer with expertise in Multimodal Retrieval-Augmented Generation (RAG), Vector databases, and Large Language Model (LLM) implementation. The ideal candidate will have a strong background in integrating structured and unstructured data into AI models and deploying these models in real-world applications. This role involves working on cutting-edge AI solutions, including the development and optimization of multimodal systems that leverage both text and visual data. Key Responsibilities: · Multimodal RAG Implementation: o Design, develop, and deploy Multimodal Retrieval-Augmented Generation (RAG) systems that integrate both structured (e.g., databases, tables) and unstructured data (e.g., text, images, videos). o Work with large-scale datasets, combining different data types to enhance the performance and accuracy of AI models. o Implement and fine-tune LLMs (e.g., GPT, BERT) to work effectively with multimodal inputs and outputs. · Vector Database Integration: o Develop and optimize AI models using vector databases to efficiently manage and retrieve high-dimensional data. o Implement vector search techniques to improve information retrieval from structured and unstructured data sources. o Ensure the scalability and performance of vector-based retrieval systems in production environments. · LLM Implementation and Optimization: o Implement and fine-tune large language models to handle complex queries involving multimodal data. o Optimize LLMs for specific tasks, such as text generation, question answering, and content summarization, using both structured and unstructured data. o Integrate LLMs with vector databases and RAG systems to enhance AI capabilities. 
· Data Integration and Processing: o Work with data engineers and data scientists to preprocess and integrate structured and unstructured data for AI model training and inference. o Develop data pipelines that handle the ingestion, transformation, and storage of diverse data types. o Ensure data quality and consistency across different data sources and formats. · Model Evaluation and Testing: o Evaluate the performance of multimodal AI models using various metrics, ensuring they meet accuracy, speed, and robustness requirements. o Conduct A/B testing and model validation to continuously improve AI system performance. o Implement automated testing and monitoring tools to ensure model reliability in production. · Collaboration and Communication: o Collaborate with cross-functional teams, including data engineers, data scientists, and software developers, to deliver AI-driven solutions. o Communicate complex technical concepts to non-technical stakeholders and provide insights on the impact of AI models on business outcomes. o Stay up to date with the latest advancements in AI, LLMs, vector databases, and multimodal systems, and share knowledge with the team. Qualifications: · Technical Skills: o Strong expertise in Multimodal Retrieval-Augmented Generation (RAG) systems. o Proficiency in vector databases (e.g., Pinecone, Milvus, Weaviate, Chroma) and vector search techniques with recommender systems, vector search capabilities. o Experience with LLMs (e.g., GPT, BERT) and their implementation in real-world applications. Experience with Mistral AI is a plus. o Solid understanding of machine learning and deep learning frameworks (e.g., TensorFlow, PyTorch, MLflow etc). o Experience working with structured data (e.g., SQL databases) and unstructured data (e.g., text, images, videos). o Proficiency in programming languages such as Python, with experience in relevant libraries and tools. 
· Experience: o 2+ years of experience in AI/ML engineering, with a focus on multimodal systems and LLMs. o Proven track record of deploying AI models in production environments. o Experience with cloud platforms preferably Azure, and MLOps practices is preferred.
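The vector-search techniques the role above centers on boil down to ranking stored embeddings by similarity to a query vector. As a rough, library-free sketch (the 3-dimensional vectors and document ids are invented stand-ins for real embeddings held in a database such as Pinecone or Milvus):

```python
# Minimal vector-search sketch: rank stored embeddings by cosine similarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query vector."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {
    "doc_invoices": [0.9, 0.1, 0.0],
    "doc_contracts": [0.1, 0.9, 0.1],
    "doc_images": [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], index, k=2))
```

A production system replaces the linear scan with an approximate nearest-neighbor index so retrieval stays fast at millions of vectors.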
Posted 1 week ago
0.0 - 3.0 years
5 - 6 Lacs
Mohali, Punjab
On-site
What You Need for this Position: Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field. Proven experience (1-3 years) in machine learning, data science, or AI roles. Proficiency in programming languages such as Python, R, or Java. Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Strong understanding of algorithms, data structures, and software design principles. Familiarity with cloud platforms (e.g., AWS, Azure) and big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and analytical thinking. Strong communication and collaboration skills. Ability to work methodically and meet deadlines. What You Will Be Doing: Develop and implement machine learning models and algorithms for various applications. Collaborate with cross-functional teams to understand project requirements and deliver AI solutions. Preprocess and analyze large datasets to extract meaningful insights. Design and conduct experiments to evaluate model performance and fine-tune algorithms. Deploy machine learning models to production and ensure scalability and reliability. Stay updated with the latest advancements in AI and machine learning technologies. Document model development processes and maintain comprehensive project documentation. Participate in code reviews and provide constructive feedback to team members. Contribute to the continuous improvement of our AI/ML capabilities and best practices. Top Reasons to Work with Us: Join a fast-paced team of like-minded individuals who share the same passion as you with whom you'll tackle new challenges every day. Work alongside an exceptionally talented and intellectual team, gaining exposure to new concepts and technologies. Enjoy a friendly and high-growth work environment that fosters learning and development. Competitive compensation package based on experience and skill. 
Job Type: Full-time Pay: ₹500,000.00 - ₹600,000.00 per year Schedule: Day shift Fixed shift Morning shift Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Experience: total work: 3 years (Required) Work Location: In person
Posted 1 week ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
Job Title: AI/ML Engineer
Location: Hybrid (Remote & Onsite)
Job Type: Full-time
Experience Level: Mid
Job Summary: We are seeking a skilled AI/ML Engineer to design, develop, and implement machine learning models and AI-driven solutions. The ideal candidate will work closely with cross-functional teams to integrate AI into our products and services, ensuring efficiency and scalability.
Key Responsibilities:
Develop and deploy machine learning models and AI algorithms.
Preprocess and analyze large datasets for training and validation.
Optimize model performance and scalability in real-world applications.
Design and implement AI-driven solutions for predictive analytics and automation.
Collaborate with software engineers, data scientists, and domain experts to integrate AI solutions.
Monitor and maintain deployed models, ensuring continuous improvement.
Develop and maintain APIs for AI-driven chat responses and data ingestion.
Ensure AI models are efficiently stored in and retrieved from vector databases.
Stay updated with the latest trends and advancements in AI and ML.
Required Skills & Qualifications:
Strong programming skills in Python.
Experience with machine learning algorithms, deep learning, and data preprocessing techniques.
Expertise in web frameworks, particularly FastAPI.
Proficiency in working with structured and unstructured datasets.
Experience with databases including MongoDB, PostgreSQL, and Weaviate (vector databases, semantic search, hybrid search, distance metrics).
Strong understanding of Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs), including prompt engineering and NLP techniques.
Knowledge of vector embeddings and semantic search.
Experience in infrastructure management, including Docker, Kubernetes, and cloud platforms (GCP/Azure).
Experience in MLOps and CI/CD for AI models.
Ability to design and deploy merchant-specific API endpoints for AI-driven chat responses and data ingestion.
Strong problem-solving and analytical skills.
Educational Qualifications: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
Preferred Experience:
2+ years of experience in AI/ML model development and deployment.
Hands-on experience with large-scale data processing and model optimization.
Experience productionizing AI solutions in real-world applications.
Experience with OpenAPI (Swagger) for API documentation is an advantage.
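The hybrid search the posting mentions (a Weaviate feature) blends a lexical score with a vector-similarity score. A toy sketch follows, assuming a simple keyword-overlap score in place of BM25 and precomputed embedding similarities; the alpha weighting, documents, and scores are all illustrative:

```python
# Hedged hybrid-search sketch: blend a lexical score with a vector score.
# Real systems use BM25 + dense embeddings; everything below is a toy.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query tokens that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query: str, docs: dict[str, str],
                vec_scores: dict[str, float], alpha: float = 0.5) -> list[str]:
    """alpha=1 is pure vector search, alpha=0 is pure keyword search."""
    scored = {
        doc_id: alpha * vec_scores[doc_id] + (1 - alpha) * keyword_score(query, text)
        for doc_id, text in docs.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

docs = {
    "faq": "how to reset my merchant password",
    "api": "merchant api endpoint for chat ingestion",
}
vec_scores = {"faq": 0.40, "api": 0.80}  # pretend embedding similarities
print(hybrid_rank("merchant chat api", docs, vec_scores))
```

Blending helps when exact terms matter (ids, product names) but pure lexical search misses paraphrases.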
Posted 1 week ago
0.0 years
0 - 0 Lacs
Kundrathur, Chennai, Tamil Nadu
On-site
Collect and organize data from various sources.
Clean and preprocess data to ensure its accuracy and reliability.
Assist with data analysis and generate reports.
Support the creation of data visualizations and dashboards.
Collaborate with other team members to provide data insights.
Maintain and update databases and data systems.
Ensure data compliance with organizational policies and regulations.
Only Tamil-speaking fresher candidates from Tamil Nadu may apply.
Job Types: Full-time, Permanent, Fresher
Pay: ₹15,086.00 - ₹32,833.16 per month
Benefits: Flexible schedule, food provided, health insurance, life insurance
Schedule: Day shift
Supplemental Pay: Joining bonus, performance bonus
Work Location: In person
Posted 1 week ago
4.0 years
0 Lacs
Hyderābād
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description: Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, Agentic Framework to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 4 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Deep knowledge of classical AI/ML (regression, classification, time series, clustering). Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
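One of the responsibilities above is establishing evaluation metrics for model outputs. For a classification-style task the standard precision/recall/F1 trio can be computed from scratch, as in the minimal illustration below; real projects would typically reach for scikit-learn's metrics, and the labels here are invented:

```python
# Minimal evaluation-metric sketch: precision, recall, and F1 for a binary
# task, computed directly from true/predicted labels.

def precision_recall_f1(y_true: list[int], y_pred: list[int]):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]  # invented ground-truth labels
y_pred = [1, 0, 1, 0, 1, 1]  # invented model predictions
print(precision_recall_f1(y_true, y_pred))
```

For generative outputs these are usually complemented by task-specific measures such as relevance or faithfulness judgments.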
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION We're seeking top talents for our AI engineering team to develop high-quality machine learning models, services, and scalable data processing pipelines. Candidates should have a strong computer science background and be ready to handle end-to-end projects, focusing on engineering. As an Applied AI ML Senior Associate within the Digital Intelligence team at JPMorgan, collaborate with all lines of business and functions to deliver software solutions. Experiment, develop, and productionize high-quality machine learning models, services, and platforms to make a significant impact on technology and business. Design and implement highly scalable and reliable data processing pipelines. Perform analysis and insights to promote and optimize business results. Contribute to a transformative journey and make a substantial impact on a wide range of customer products. Job Responsibilities Research, develop, and implement machine learning algorithms to solve complex problems related to personalized financial services in retail and digital banking domains. Work closely with cross-functional teams to translate business requirements into technical solutions and drive innovation in banking products and services. Collaborate with product managers, key business stakeholders, engineering, and platform partners to lead challenging projects that deliver cutting-edge machine learning-driven digital solutions. Conduct research to develop state-of-the-art machine learning algorithms and models tailored to financial applications in personalization and recommendation spaces. Design experiments, establish mathematical intuitions, implement algorithms, execute test cases, validate results, and productionize highly performant, scalable, trustworthy, and often explainable solutions. Collaborate with data engineers and product analysts to preprocess and analyze large datasets from multiple sources. 
Stay up-to-date with the latest publications in relevant Machine Learning domains and find applications for the same in your problem spaces for improved outcomes. Communicate findings and insights to stakeholders through presentations, reports, and visualizations. Work with regulatory and compliance teams to ensure that machine learning models adhere to standards and regulations. Mentor junior Machine Learning associates in delivering successful projects and building successful careers in the firm. Participate in and contribute back to firm-wide Machine Learning communities through patenting, publications, and speaking engagements. Required qualifications, capabilities and skills Expert in at least one of the following areas: Natural Language Processing, Knowledge Graph, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Deep knowledge in data structures, algorithms, machine learning, data mining, information retrieval, and statistics. Demonstrated expertise in machine learning frameworks: TensorFlow, PyTorch, PyG, Keras, MXNet, scikit-learn. Strong programming knowledge of Python and Spark; strong grasp of vector operations using NumPy and SciPy; strong grasp of distributed computation using multithreading, multi-GPU setups, Dask, Ray, Polars, etc. Strong analytical and critical thinking skills for problem solving. Excellent written and oral communication along with demonstrated teamwork skills. Demonstrated ability to clearly communicate complex technical concepts to both technical and non-technical audiences. Experience in working in interdisciplinary teams and collaborating with other researchers, engineers, and stakeholders. A strong desire to stay updated with the latest advancements in the field and continuously improve one's skills. Preferred qualifications, capabilities and skills Deep hands-on experience with real-world ML projects, either through academic research, internships, or industry roles. 
Experience with distributed data/feature engineering using popular cloud services like AWS EMR. Experience with large-scale training, validation, and testing experiments. Experience with cloud machine learning services in AWS such as SageMaker. Experience with container technology such as Docker, ECS, etc. Experience with Kubernetes-based platforms for training or inference. Contributions to open-source ML projects can be a plus. Participation in ML competitions (e.g., Kaggle) and hackathons demonstrating practical skills and problem-solving abilities. Understanding of how ML can be applied to various domains like healthcare, finance, robotics, etc.
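As a toy illustration of the ranking-and-recommendation area named in the qualifications above, items can be scored for a user by cosine similarity between a preference vector and item feature vectors. All vectors, item names, and the feature interpretation below are invented for the sketch:

```python
# Toy content-based recommendation sketch: rank items by cosine similarity
# between a user preference vector and item feature vectors.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

user_profile = [0.8, 0.1, 0.6]  # invented affinities: savings / credit / investing
items = {
    "savings_offer": [1.0, 0.0, 0.2],
    "credit_card":   [0.1, 1.0, 0.0],
    "mutual_fund":   [0.5, 0.0, 1.0],
}
ranked = sorted(items, key=lambda i: cosine(user_profile, items[i]), reverse=True)
print(ranked)
```

Production recommenders layer collaborative signals, business rules, and explainability on top of a similarity core like this.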
Posted 1 week ago
3.0 years
0 Lacs
India
On-site
Job Summary: We are looking for a skilled Machine Learning Engineer to design, develop, and deploy cutting-edge AI agents and enhance our suite of office productivity applications with intelligent features. The ideal candidate will have a strong background in machine learning principles, hands-on experience with various ML frameworks, and a passion for creating practical, user-centric AI solutions. Responsibilities: Design, develop, and deploy machine learning models for AI agents to automate tasks, provide intelligent assistance, and improve user workflows. Integrate AI capabilities into existing and new office applications (e.g., natural language processing for document analysis, intelligent scheduling, data insights). Research and implement state-of-the-art machine learning algorithms and techniques. Collect, preprocess, and analyze large datasets to train and evaluate ML models. Collaborate with product managers, software engineers, and UX/UI designers to define requirements and deliver high-quality AI-powered features. Optimize ML models for performance, scalability, and efficiency. Monitor and maintain deployed models, ensuring their accuracy and reliability. Stay up-to-date with the latest advancements in AI, machine learning, and office productivity technologies. Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Electrical Engineering, or a related quantitative field. Proven experience (3 years) as a Machine Learning Engineer or similar role. Strong programming skills in Python. Proficiency with popular machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Experience with natural language processing (NLP) techniques, including text classification, named entity recognition, sentiment analysis, and language generation. 
Demonstrated understanding and practical experience with cloud cost optimization techniques for ML workloads (e.g., resource rightsizing, auto-scaling, serverless functions, leveraging free tiers). Familiarity with developing conversational AI or chatbot systems. Solid understanding of data structures, algorithms, and software design principles. Experience with cloud platforms (e.g., AWS, Azure, GCP) for deploying ML models is a plus. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a collaborative team. Bonus Points (Optional): Experience with front-end development frameworks (e.g., React, Angular) for integrating AI features into user interfaces. Knowledge of MLOps practices for model deployment, monitoring, and versioning. Familiarity with agile development methodologies. Contributions to open-source projects. Job Types: Full-time, Permanent Schedule: Monday to Friday
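The document-analysis features described above include named entity recognition. As a deliberately simple stand-in, entities with rigid formats can be pulled out with regular expressions; production NER would use a trained model (e.g. spaCy or a transformer), and the patterns and sample text below are invented:

```python
# Rule-based entity extraction sketch for office documents: pull dates and
# email addresses with regexes. Patterns are intentionally narrow.
import re

DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def extract_entities(text: str) -> dict[str, list[str]]:
    return {"dates": DATE_RE.findall(text), "emails": EMAIL_RE.findall(text)}

doc = "Meeting moved to 01/08/2025; contact ops@example.com for details."
print(extract_entities(doc))
```

Rules like these often serve as a cheap baseline or a post-processing check on model-based extraction.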
Posted 1 week ago
1.0 years
2 - 6 Lacs
India
On-site
Key Responsibilities:
Develop Python code: write efficient, maintainable code for AI/ML algorithms and systems.
Model development: design, train, and deploy machine learning models (supervised, unsupervised, deep learning).
Data processing: clean, preprocess, and analyze large datasets.
Collaboration: work with cross-functional teams to integrate AI/ML models into products.
Optimization: fine-tune models for performance, accuracy, and scalability.
Required Skills:
Strong Python programming skills, with experience in libraries like NumPy, pandas, scikit-learn, TensorFlow, or PyTorch.
Hands-on experience in AI/ML model development and deployment.
Knowledge of data preprocessing, feature engineering, and model evaluation.
Familiarity with cloud platforms (AWS, Google Cloud, Azure) and version control (Git).
Degree in Computer Science, Data Science, or a related field.
Job Type: Full-time
Pay: ₹20,000.00 - ₹50,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Ability to commute/relocate: Indore Pardesipura, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred)
Work Location: In person
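The data-processing responsibility above (clean, preprocess, analyze) can be sketched with pure Python: drop duplicate records and mean-impute a missing numeric field. In practice pandas handles this; the record schema and values are invented:

```python
# Minimal cleaning sketch: deduplicate records, then fill missing 'amount'
# values with the column mean. Field names are hypothetical.
import statistics

def clean(records: list[dict]) -> list[dict]:
    # Deduplicate while preserving order.
    seen, unique = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # Mean-impute missing 'amount' values.
    amounts = [r["amount"] for r in unique if r["amount"] is not None]
    fill = statistics.mean(amounts)
    return [dict(r, amount=r["amount"] if r["amount"] is not None else fill)
            for r in unique]

raw = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 10.0},   # exact duplicate
    {"id": 2, "amount": None},   # missing value
    {"id": 3, "amount": 20.0},
]
print(clean(raw))
```

Mean imputation is the simplest choice; median or model-based imputation is more robust when the column is skewed.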
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Hyderabad, TG, IN, 500081 Let's play together About Our Company Fortuna has become an established brand among customers within just a few years. We became a proud international family of companies carrying Fortuna Entertainment Group from the first betting shop. We want to go further and be known for having the best tech department, offering our employees the use of modern technologies and involvement in many exciting projects. Our new home is the remarkable Churchill II building, which has a view of Prague. Every detail underlines the company's corporate culture and represents our values. The workplace layout is 100% ecological, providing ideal conditions for everyday work. We all work as one team and treat each other with respect, openness, a sense of honor, and respect for individual and cultural differences. Hey there! We're Fortuna Entertainment Group, and we’re excited to share why we’re a team worth joining. Who We Are? Founded in 1990, FEG is a top player in the betting and gaming industry. We proudly serve millions of customers across five European countries – Czech Republic, Slovakia, Poland, Romania, and Croatia – with our Business Intelligence operations based in India. Why Join Us? At FEG India, you’ll be part of a team that’s powering the digital future of one of Central and Eastern Europe’s leading betting and gaming operators. We’re a growing tech hub delivering high-quality solutions in Data, BI, AI/ML, Development, and IT Services. Your work here directly supports global operations, and we make sure our people grow with us. Current Opportunity Right now, we're seeking a Data Scientist to drive data science in the FEG business and build statistical and ML-based models and insights. 
What You'll Be Doing
Your daily activities will include, but are not limited to:
Retrieving data from different sources, i.e., the Data Lake and Data Warehouse, using SQL, Python, or PySpark
Cleaning and preprocessing the data for quality and accuracy
Performing EDA to identify patterns, trends, and insights using statistical methods
Developing and implementing machine learning models for customer behaviour, revenues, and campaigns
Enhancing existing models by adding more features and expanding them to other markets/teams
Working with other teams such as engineering, product development, and marketing to understand their data needs and provide actionable insights
Supporting the data analytics team in improving analytical models with data science-based insights
Developing and providing an MLOps framework on Microsoft Azure for deploying ML models (good to have)
What We're Looking For
A degree in Computer Science / Information Technology
Fluent communication skills in English, both written and verbal
A team player who can share experience with the team and grow them technically
High quantitative and cognitive ability to solve complex problems and think with a vision
You Should Have Experience In
At least 4 years of experience working in data roles – Data Analyst, Business Analyst, or Data Scientist
3+ years of experience in programming and data science, MLOps, and automation of models
Sound practical knowledge of working with advanced statistical algorithms
A solid understanding of machine learning techniques and their application to business problems
Experience with ML on the MS Azure stack in combination with Python and SQL
Strong business understanding, with experience developing analytics solutions in 2-3 domains
Experience in building and productionizing ML platforms
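The EDA step above (identifying patterns and anomalies with statistical methods) is normally done with pandas and plotting tools; as a minimal stdlib illustration, here is the common IQR boxplot rule for flagging outliers, applied to a hypothetical revenue column:

```python
def quartiles(values):
    """Median-based Q1/Q3 (simple EDA sketch; real work would use pandas)."""
    s = sorted(values)
    n = len(s)
    def median(xs):
        m = len(xs) // 2
        return xs[m] if len(xs) % 2 else (xs[m - 1] + xs[m]) / 2
    return median(s[: n // 2]), median(s[(n + 1) // 2 :])

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR], the usual boxplot rule."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    return [v for v in values if v < q1 - k * iqr or v > q3 + k * iqr]

revenue = [10, 12, 11, 13, 12, 11, 95]   # hypothetical per-customer revenue
flagged = iqr_outliers(revenue)           # [95]
```

Points flagged this way are candidates for investigation, not automatic deletion; in customer data an "outlier" is often a high-value account rather than an error.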
Work And Personality Characteristics Required
Data Driven
Technical Thinking
Problem-Solving
Communication Skills
Critical Thinking
Task Management
Proactive
Why You'll Love It Here
We are the biggest omni-channel betting and gaming operator in Central and Eastern Europe, a B2C high-tech entertainment company
Hybrid working arrangements (work from home)
Flexible working hours
Interesting, innovative projects
Cooperation across 5 markets and departments, international teams
A variety of tasks where problem-solving and creativity are needed
Advancement, promotions, and career opportunities for talent
Skill development and learning options – both individual and team, with development goals met through individualised development plans
A welcoming atmosphere, open and informal culture and dress code, friendly colleagues, and strong eNPS scores
If this sounds like your kind of place, let us know by applying! We can't wait to hear from you.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – Core Algorithm and Model Expert
1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis, and NLP applications. Must possess deep expertise in algorithm development and in deploying models as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.
2. Key Responsibilities:
2.1. Design and train models using Navy-specific data and deliver them as end products.
2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. Develop, train, fine-tune, and optimize Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Build multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation.
2.7. Demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Build reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).
3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch Professional Certification.
4. Core Skills & Tools:
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.
5. Core AI/ML Competencies:
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.
6. Programming & Frameworks:
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.
7. Model Development & Optimization:
7.1. Hyperparameter tuning using Optuna, Ray Tune, Weights & Biases, etc.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.
8. Generative AI & NLP Applications:
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.
9. Experience Requirements
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contributions to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.
10. Product Development Skills
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.
11. Cross-Compatibility Requirements:
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI- and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
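Items 2.4 and 7.2 call for quantization and compression. Real deployments would use framework tooling such as `torch.quantization` or ONNX Runtime, but the core idea of symmetric 8-bit quantization, mapping floats to int8 through a single scale factor, can be sketched in plain Python (the weight values here are hypothetical):

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to int8 via one scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0   # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

w = [0.51, -1.27, 0.02, 0.99]          # hypothetical model weights
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, restored))
```

The reconstruction error is bounded by half the scale factor, which is why quantization trades a small accuracy loss for a 4x reduction in storage versus float32.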
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
Acronotics Limited specializes in cutting-edge robotic automation and artificial intelligence solutions. By applying human intelligence to build advanced AI-fueled systems, Acronotics transforms businesses with technologies like AI and Robotic Process Automation (RPA). As a consulting and services firm, we are dedicated to creating automated solutions that will redefine how products are made, sold, and consumed. Our mission is to help clients implement and run game-changing robotic automation and artificial intelligence-based solutions. Explore our product, Radium AI, which automates bot monitoring and support activities, on our website: Radium AI.
Role Description
This is a full-time, on-site role for a Data Engineer (Power BI) based in Bengaluru. You will design and manage data pipelines that connect Power BI, OLAP cubes, documents (PDFs, presentations), and external data sources to Azure AI. Your role ensures structured and unstructured financial data is indexed and accessible for semantic search and LLM use.
Key Responsibilities:
Extract data from Power BI datasets, semantic models, and OLAP cubes.
Connect and transform data via Azure Synapse, Data Factory, and Lakehouse architecture.
Preprocess PDFs, PPTs, and Excel files using Azure Form Recognizer or Python-based tools.
Design data ingestion pipelines for external web sources (e.g., commodity prices).
Coordinate with AI engineers to feed cleaned and contextual data into vector indexes.
Requirements:
Strong experience with Power BI REST/XMLA APIs.
Expertise in OLAP systems (SSAS, SAP BW), data modelling, and ETL design.
Hands-on experience with Azure Data Factory, Synapse, or Data Lake.
Familiarity with JSON, DAX, and M queries.
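Before cleaned document text can be fed into vector indexes as the responsibilities describe, it is typically split into overlapping chunks so that semantic search returns coherent passages. A minimal sketch, assuming simple character-based chunking with arbitrary sizes (production pipelines often chunk by tokens or document structure instead):

```python
def chunk_text(text, max_chars=200, overlap=40):
    """Split extracted document text into overlapping chunks for indexing."""
    chunks = []
    step = max_chars - overlap          # advance less than a full chunk
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + max_chars])
    return chunks

doc = "a" * 500                         # stand-in for Form Recognizer output
chunks = chunk_text(doc)                # 3 chunks of up to 200 chars each
```

The overlap ensures a sentence straddling a chunk boundary still appears whole in at least one chunk, at the cost of some duplicated index entries.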
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
ISP Data Science - Analyst Role Profile
Location: Bangalore, India
Purpose of Role
We are seeking a highly skilled and data-driven Data Science Analyst to join our team. The ideal candidate will leverage advanced data analytics and AI techniques, along with business heuristics, to analyse student enrolment and retention data, identify trends, and provide actionable insights to support ISP and its schools' enrolment goals. This role is critical for improving student experiences, optimising resource allocation, and enhancing overall enrolment and retention performance. The successful candidate will bring strong expertise in Python-based (or equivalent) statistical modelling, including propensity modelling, experience with Azure Databricks for scalable data workflows, and advanced skills in Power BI to build high-impact visualisations and dashboards. The role requires both technical depth and the ability to translate complex insights into strategic recommendations.
ISP Principles
Begin with our children and students. Our children and students are at the heart of what we do. Simply, their success is our success. Wellbeing and safety are both essential for learners and learning. Therefore, we are consistent in identifying potential safeguarding and Health & Safety issues and acting and following up on all concerns appropriately.
Treat everyone with care and respect. We look after one another, embrace similarities and differences and promote the well-being of self and others.
Operate effectively. We focus relentlessly on the things that are most important and will make the most difference. We apply school policies and procedures and embody the shared ideas of our community.
Are financially responsible. We make financial choices carefully based on the needs of the children, students and our schools.
Learn continuously. Getting better is what drives us. We positively engage with personal and professional development and school improvement.
ISP Data Science - Analyst Key Responsibilities
Data Analysis: Collect, clean, and preprocess enrolment, retention, and customer satisfaction data from multiple sources. Analyse data to uncover trends, patterns, and factors influencing enrolment, retention, and customer satisfaction.
AI and Machine Learning Implementation: Develop and deploy propensity models to support customer acquisition and retention activities and strategy. Use Azure Databricks (and equivalent platforms) for scalable data engineering and machine learning workflows. Develop and implement AI models, such as predictive analytics and propensity models, to forecast enrolment patterns and retention risks. Use machine learning algorithms to identify high-risk student populations and recommend intervention strategies. Support lead scoring model development on HubSpot CRM. Collaborate with key colleagues to understand and define the most impactful use cases for AI and Machine Learning. Analyse the cost/benefit of deploying systems and provide recommendations.
Reporting and Visualisation: Create relevant dashboards, reports, and visualisations on MS Power BI to communicate key insights to stakeholders. Present findings in a clear and actionable manner to support decision-making.
Collaboration: Work closely with key Group and Regional colleagues to understand challenges and opportunities related to enrolment and retention. Partner with IT and data teams to ensure data integrity and accessibility.
Continuous Improvement: Monitor the performance of AI models and analytics tools, making necessary adjustments to improve accuracy and relevance. Stay updated with the latest advancements in AI, data analytics, and education trends.
Skills, Qualifications And Experience
Education: Bachelor's degree in Data Science, Computer Science, Statistics, or a related field (Master's preferred).
Experience:
At least 2 years' experience in data analytics, preferably in education or a related field.
Experience in implementing predictive models, particularly propensity models, and interpreting their results.
Strong Python skills for statistical modelling, including logistic regression, clustering, and decision trees.
Hands-on experience with Azure Databricks is highly preferred.
Strong working knowledge of Power BI for building automated and interactive dashboards.
Hands-on experience with AI/ML tools and frameworks, ideally in a current AI/ML role.
Proficiency in SQL, Python, R, or other data analytics languages.
Skills and preferred attributes:
Strong understanding of statistical methods and predictive analytics.
Proficiency in data visualisation tools (e.g., Tableau, Power BI, or similar).
Excellent problem-solving, critical thinking, and communication skills.
Ability to work collaboratively with diverse teams.
Experience in education technology or student success initiatives.
Familiarity with CRM or student information systems.
Knowledge of ethical considerations in AI and data privacy laws.
ISP Commitment to Safeguarding Principles
ISP is committed to safeguarding and promoting the welfare of children and young people and expects all staff and volunteers to share this commitment. All post holders are subject to appropriate vetting procedures, including an online due diligence search, references, and satisfactory Criminal Background Checks or equivalent covering the previous 10 years' employment history.
ISP Commitment to Diversity, Equity, Inclusion, and Belonging
ISP is committed to strengthening our inclusive culture by identifying, hiring, developing, and retaining high-performing teammates regardless of gender, ethnicity, sexual orientation and gender expression, age, disability status, neurodivergence, socio-economic background or other demographic characteristics.
Candidates who share our vision and principles and are interested in contributing to the success of ISP through this role are strongly encouraged to apply.
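The propensity modelling this role centres on would normally be built with scikit-learn's `LogisticRegression` (or Spark ML on Databricks). To illustrate the underlying idea only, here is a toy logistic-regression propensity model trained by stochastic gradient descent; the student features and labels are entirely hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_propensity(X, y, lr=0.5, epochs=2000):
    """Logistic regression via stochastic gradient descent (toy sketch;
    production propensity models would use scikit-learn or Spark ML)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features per student: [engagement score, years enrolled]
X = [[0.9, 3], [0.8, 2], [0.2, 1], [0.1, 1]]
y = [1, 1, 0, 0]                          # 1 = re-enrolled, 0 = left
w, b = fit_propensity(X, y)
retention_propensity = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 2])) + b)
```

The output is a probability in (0, 1), which is what makes propensity scores directly usable for ranking students by retention risk and targeting interventions.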
Posted 1 week ago