Jobs
Interviews

898 Preprocess Jobs - Page 13

Set up a Job Alert
JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

India

Remote

Job Title: Data Analyst Intern Company: Enerzcloud Solutions Location: Remote Job Type: Internship (3 months) Stipend: ₹23,000/month Department: Data & Analytics About Us Enerzcloud Solutions is a fast-growing, technology-driven company focused on data innovation and cloud-based business solutions. We help organizations make smarter decisions by turning raw data into actionable insights. Join us as we shape a future powered by data. Internship Overview We are looking for a passionate and motivated Data Analyst Intern to join our dynamic Data & Analytics team. This internship will provide hands-on experience working with real-world datasets and business problems. You’ll work under the guidance of experienced analysts and gain practical skills in data analysis, reporting, and visualization. Key Responsibilities Collect, clean, and preprocess structured and unstructured data from various sources Analyze datasets to uncover trends, patterns, and insights Create dashboards and visual reports using tools like Excel, Power BI, or Tableau Assist in preparing reports and presentations for internal stakeholders Collaborate with cross-functional teams including business, marketing, and tech teams Stay updated with the latest trends and tools in data analytics Required Skills & Qualifications Pursuing or recently completed a Bachelor’s degree in Computer Science, Statistics, Mathematics, or a related field Basic knowledge of Excel, SQL, and one data visualization tool (Power BI/Tableau preferred) Familiarity with Python or R for data analysis is a plus Strong analytical thinking and attention to detail Good communication and teamwork skills Eagerness to learn and grow in a data-driven environment Perks & Benefits ₹23,000 monthly stipend Work from home flexibility Hands-on experience with real datasets and projects Mentorship from experienced data professionals Certificate of completion Top-performing interns may be offered a full-time position after successful completion of the internship
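As a rough illustration of the cleaning and preprocessing work this internship describes, the sketch below uses pandas; the file name and column names are hypothetical, not from the posting.

```python
# Illustrative sketch only: "sales_raw.csv" and its columns are made-up examples.
import pandas as pd

# Load a raw export (hypothetical path).
df = pd.read_csv("sales_raw.csv")

# Basic cleaning: trim whitespace, normalise casing, drop exact duplicates.
df["region"] = df["region"].str.strip().str.title()
df = df.drop_duplicates()

# Handle missing values: fill numeric gaps with the median, drop rows missing a key field.
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
df = df.dropna(subset=["order_id"])

# Parse dates and derive a reporting month for dashboards.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["month"] = df["order_date"].dt.to_period("M")

# Aggregate a simple trend that could feed an Excel, Power BI, or Tableau dashboard.
monthly = df.groupby("month", as_index=False)["revenue"].sum()
print(monthly.head())
```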

Posted 4 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

Job Description: We are looking for a passionate and talented AI Developer with 2+ years of experience to join our team and work on cutting-edge artificial intelligence and machine learning projects. This role will provide hands-on experience in AI model development, data processing, and deploying intelligent solutions. Key Responsibilities: Develop, train, and test machine learning and deep learning models. Work with large datasets, preprocess data, and optimize models. Implement AI algorithms for real-world applications such as NLP, computer vision, and predictive analytics. Collaborate with cross-functional teams to integrate AI solutions into products. Stay updated with the latest AI advancements and research. Write clean, efficient, and well-documented code. Required Skills: Strong understanding of machine learning, deep learning, and AI concepts. Proficiency in Python and AI libraries. Experience with data preprocessing, feature engineering, and model evaluation. Familiarity with cloud platforms (AWS, Azure, or Google Cloud) is a plus. Excellent problem-solving and analytical skills. Strong communication and teamwork abilities. Job Types: Full-time, Permanent Benefits: Provident Fund Location Type: In-person Schedule: Day shift, Morning shift Application Question(s): How many years of AI Developer experience do you have? Current salary and salary expectation. Notice period duration. Experience: ML and Deep Learning: 2 years (Preferred) Location: Coimbatore, Tamil Nadu (Required) Work Location: In person

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

5 - 8 Years' Experience. Over 8 years of web development experience, including 5+ years of expert proficiency in Drupal services, such as working with core services, extending or overriding services, creating new services, and managing services at the codebase level. Design, develop, test, and deliver technology solutions by translating requirements into functional, user-friendly websites for internal and external customers. Understand and adhere to the Software Development Life Cycle (SDLC), ensuring timely completion and production readiness of all projects. Create and deliver design specifications and test plans for development projects. Analyze, troubleshoot, debug, and resolve website and system issues efficiently. Participate actively in team meetings, coaching sessions, and problem-solving discussions, contributing to peer code reviews. Interact effectively with business stakeholders to gather requirements and ensure alignment with business needs. Enforce and contribute to best practices that improve efficiency and reduce technical debt across our websites and systems. Establish and follow technical standards for application development, providing proper documentation. Demonstrate a strong understanding of security principles as they relate to web applications. Provide coaching, mentoring, and training to less experienced team members. Collaborate with various internal departments and external organizations, including vendors, to implement and integrate applications while resolving production issues. Exhibit advanced functional knowledge of assigned applications and take on administration and support responsibilities. Complete tasks proactively, seeking high-quality outcomes, and promptly responding to assignments and deadlines. Prepare and provide status and progress reports for all assigned work activities. Technical Requirements: 4+ years of experience in web development. 4+ years of experience with Drupal, including: Proficiency with Drupal services: working with core services, extending and overriding services, creating new services, and managing and adding services at the codebase level. Proficiency in Drupal configuration management and entity management: configuration updates, database updates, and managing configurations for multiple environments. Proficiency with Drupal modules: creating new modules, understanding Drupal hooks and the hook hierarchy, theming through custom modules, and creating and adding libraries. Understanding of Drupal theming: knowledge of theme namespacing and proficiency in code-based theming concepts (e.g., preprocess hooks). Composer project management expertise: understanding how the lockfile and versioning work (important for security scans) and the ability to find, create, and apply patches to core and contributed modules. Proficiency in HTML, CSS, and JavaScript. Strong PHP skills, including object-oriented programming. Experience working with APIs. High proficiency with Git: comfortable conducting code reviews and mentoring junior developers; proficient in merge management and conflict resolution. Familiarity with various caching layers (e.g., OPCache, Memcache, Redis, Varnish). Experience with database management and SQL. Proficiency with web development and debugging tools. Ability to read and interpret syslogs. Skilled in creating and debugging Docker environments. Strong commitment to security and performance best practices. Adherence to coding standards and thorough documentation. Excellent time management and problem-solving skills.
Ability to work independently and collaboratively across teams. Strong written and verbal communication skills. Enthusiasm for learning new tools and techniques and for sharing knowledge. Experience with Azure service creation and management. Ability to analyze application interactions and troubleshoot issues. Skilled in collaborating with teams managing dependent services to stay informed about changes that may impact applications. Familiarity with Node.js and React.js. Active participation in the Drupal community. Skills: Drupal 7 to Drupal 10 Migration, Drupal, Web Development, Configuration, Git (ref:hirist.tech)

Posted 4 weeks ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are looking for a talented AI Developer to join our growing team. As an AI Developer, you will be responsible for designing, developing, and implementing AI solutions that meet our clients' needs. You will collaborate with cross-functional teams to integrate AI models into our products and services, leveraging cutting-edge technologies to drive innovation and deliver impactful solutions. Responsibilities: Design and develop AI models and algorithms to solve business problems. Implement machine learning and deep learning algorithms for various applications. Collaborate with data scientists and engineers to preprocess data and build datasets for training models. Integrate AI models into software applications and platforms. Optimize AI models for performance, scalability, and efficiency. Conduct research to stay current with the latest advancements in AI and machine learning technologies. Test and validate AI models to ensure accuracy and reliability. Deploy AI models into production environments and monitor their performance. Collaborate with product managers and stakeholders to understand project requirements and propose AI solutions. Requirements: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field; Master's degree or PhD preferred. Proficiency in programming languages such as Python, Java, or C++. Hands-on experience with AI frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Strong understanding of machine learning algorithms and techniques (e.g., supervised learning, unsupervised learning, reinforcement learning). Experience with data preprocessing, feature engineering, and model evaluation. Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and familiarity with deploying AI models in cloud environments. Excellent problem-solving skills and ability to work independently as well as part of a team. Strong communication skills and ability to collaborate effectively with cross-functional teams. Preferred Qualifications: Experience with natural language processing (NLP) and computer vision applications. Familiarity with big data technologies (e.g., Hadoop, Spark) for handling large-scale datasets. Understanding of DevOps practices and experience with CI/CD pipelines. Publications or contributions to the AI and machine learning community (e.g., research papers, open-source projects). (ref:hirist.tech)
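For context, the "develop, train, test, and validate" cycle mentioned in this listing typically looks something like the minimal scikit-learn sketch below; the bundled breast-cancer dataset is a stand-in, not a company dataset.

```python
# Minimal sketch of a train/validate cycle; the dataset is a public stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Preprocessing + model in one pipeline so the same steps apply at inference time.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Validate on held-out data before considering deployment.
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```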

Posted 4 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are currently seeking a talented Python Developer with a strong foundation in software development and a keen interest in artificial intelligence and machine learning. While AI/ML knowledge is not mandatory, it is considered an asset for this role. As a Python Developer at EXL, you will have the opportunity to work on diverse projects and collaborate with cross-functional teams to deliver high-quality solutions. Responsibilities: Develop and maintain scalable and robust Python applications and services. Collaborate with software engineers, data scientists, and other stakeholders to integrate AI/ML components into software solutions. Assist in implementing AI/ML algorithms and models using Python-based libraries and frameworks. Participate in code reviews, testing, and debugging activities to ensure the quality and reliability of software products. Stay updated on emerging technologies and trends in AI/ML to contribute insights and ideas for enhancing our products and services. Work closely with data engineers to access, preprocess, and analyze data for AI/ML model development. Document code, processes, and best practices to facilitate knowledge sharing and collaboration within the team. Provide support and assistance to other team members as needed. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Strong proficiency in the Python programming language. Familiarity with software development methodologies, tools, and best practices. Understanding of basic concepts in artificial intelligence and machine learning is good to have. Strong proficiency in Python programming for ML development. Hands-on experience working with ML frameworks (TensorFlow, scikit-learn, etc.). Knowledge of Azure cloud, especially Azure ML Studio and Cognitive Services. Knowledge of working with SQL and NoSQL databases and REST APIs. Knowledge of Azure OpenAI is good to have and preferred. Dataset preparation and cleansing for model creation. Working knowledge of different types of data (structured, semi-structured, and unstructured). Expertise in Python frameworks such as FastAPI, Flask, and Django. Experience working with large datasets and performing data analysis with Pandas and NumPy. Experience working with Python ORM libraries. Ability to handle large datasets. Ability to work independently and collaboratively in a fast-paced environment. Excellent problem-solving skills and attention to detail. Effective communication and interpersonal skills. While prior experience or knowledge in AI/ML is preferred, we welcome candidates who are passionate about learning and growing in this field. If you are a talented Python Developer looking to expand your skills and contribute to exciting projects, we encourage you to apply and join our dynamic team at EXL.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are currently seeking a talented Python Developer with a strong foundation in software development and a keen interest in artificial intelligence and machine learning. While AI/ML knowledge is not mandatory, it is considered an asset for this role. As a Python Developer at EXL, you will have the opportunity to work on diverse projects and collaborate with cross-functional teams to deliver high-quality solutions. Responsibilities: Develop and maintain scalable and robust Python applications and services. Collaborate with software engineers, data scientists, and other stakeholders to integrate AI/ML components into software solutions. Assist in implementing AI/ML algorithms and models using Python-based libraries and frameworks. Participate in code reviews, testing, and debugging activities to ensure the quality and reliability of software products. Stay updated on emerging technologies and trends in AI/ML to contribute insights and ideas for enhancing our products and services. Work closely with data engineers to access, preprocess, and analyze data for AI/ML model development. Document code, processes, and best practices to facilitate knowledge sharing and collaboration within the team. Provide support and assistance to other team members as needed. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Strong proficiency in the Python programming language. Familiarity with software development methodologies, tools, and best practices. Understanding of basic concepts in artificial intelligence and machine learning is good to have. Strong proficiency in Python programming for ML development. Hands-on experience working with ML frameworks (TensorFlow, scikit-learn, etc.). Knowledge of Azure cloud, especially Azure ML Studio and Cognitive Services. Knowledge of working with SQL and NoSQL databases and REST APIs. Knowledge of Azure OpenAI is good to have and preferred. Dataset preparation and cleansing for model creation. Working knowledge of different types of data (structured, semi-structured, and unstructured). Expertise in Python frameworks such as FastAPI, Flask, and Django. Experience working with large datasets and performing data analysis with Pandas and NumPy. Experience working with Python ORM libraries. Ability to handle large datasets. Ability to work independently and collaboratively in a fast-paced environment. Excellent problem-solving skills and attention to detail. Effective communication and interpersonal skills. While prior experience or knowledge in AI/ML is preferred, we welcome candidates who are passionate about learning and growing in this field. If you are a talented Python Developer looking to expand your skills and contribute to exciting projects, we encourage you to apply and join our dynamic team at EXL.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (performance-based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 5th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.
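The exploratory data analysis (EDA) mentioned above typically starts with a quick pass like the hedged sketch below; "interns.csv" and the "score" column are invented purely for illustration.

```python
# Hedged example of a first-pass EDA; file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("interns.csv")

# Quick structural checks: shape, dtypes, missing values, summary statistics.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())
print(df.describe())

# Univariate view: distribution of one numeric column.
df["score"].hist(bins=30)
plt.title("Score distribution")
plt.xlabel("score")
plt.ylabel("count")
plt.tight_layout()
plt.savefig("score_distribution.png")

# Bivariate view: correlation matrix of the numeric columns.
print(df.select_dtypes("number").corr())
```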

Posted 1 month ago

Apply

7.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description In This Role, Your Responsibilities Will Be: Analyze large, complex data sets using statistical methods and machine learning techniques to extract meaningful insights. Develop and implement predictive models and algorithms to solve business problems and improve processes. Create visualizations and dashboards to effectively communicate findings and insights to stakeholders. Work with data engineers, product managers, and other team members to understand business requirements and deliver solutions. Clean and preprocess data to ensure accuracy and completeness for analysis. Prepare and present reports on data analysis, model performance, and key metrics to stakeholders and management. Participate in regular Scrum events such as Sprint Planning, Sprint Review, and Sprint Retrospective. Stay updated with the latest industry trends and advancements in data science and machine learning techniques. Who You Are: You are committed to self-development, which means you look for ways to build the skills you will need in the future, and you learn and grow from experience. Opportunities will be available, and you must be able to stretch yourself to execute better and be flexible enough to take up new activities. For This Role, You Will Need: Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a master's degree or higher is preferred. Total 7-10 years of industry experience. More than 5 years of experience in a data science or analytics role, with a strong track record of building and deploying models. Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience with NLP, NLG, and Large Language Models such as GPT, BERT, LLaMA, LaMDA, BLOOM, PaLM, DALL-E, etc. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., pandas, NumPy). Experience with machine learning frameworks and libraries such as Go, TensorFlow, and PyTorch. Familiarity with data visualization tools (e.g., Tableau, Power BI, Matplotlib, Seaborn). Experience with SQL and NoSQL databases such as MongoDB, Cassandra, and vector databases. Strong analytical and problem-solving skills, with the ability to work with complex data sets and extract actionable insights. Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders. Preferred Qualifications that Set You Apart: Prior experience in the engineering domain would be nice to have. Prior experience working with teams in the Scaled Agile Framework (SAFe) is nice to have. Possession of relevant certifications in data science from reputed universities specializing in AI. Familiarity with cloud platforms; Microsoft Azure is preferred. Ability to work in a fast-paced environment and manage multiple projects simultaneously. Strong analytical and troubleshooting skills, with the ability to resolve issues related to model performance and infrastructure. Our Commitment to Diversity, Equity & Inclusion At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thought that inspires innovation and brings the best solutions to our customers.
This philosophy is fundamental to living our company’s values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson. If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com.

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 15 to 20 years. Primary skills: Gen AI Architect, building Gen AI solutions, coding, AI/ML background, data engineering, Azure or AWS cloud. Job Description: The Generative AI Solutions Architect will be responsible for designing and implementing cutting-edge generative AI models and systems. He/She will collaborate with data scientists, engineers, product managers, and other stakeholders to develop innovative AI solutions for various applications, including natural language processing (NLP), computer vision, and multimodal learning. This role requires a deep understanding of AI/ML theory, architecture design, and hands-on expertise with the latest generative models. Key Responsibilities: GenAI application conceptualization and design: Understand the use cases under consideration, conceptualize the application flow, understand the constraints, and design accordingly to get the most optimized results. Deep knowledge of developing and implementing applications using Retrieval-Augmented Generation (RAG) models, which combine the power of large language models (LLMs) with information retrieval techniques. Prompt Engineering: Be adept at prompt engineering and its various nuances, such as one-shot, few-shot, and chain-of-thought prompting; have hands-on knowledge of implementing agentic workflows and be aware of agentic AI concepts. NLP and Language Model Integration - Apply advanced NLP techniques to preprocess, analyze, and extract meaningful information from large textual datasets. Integrate and leverage large language models such as LLaMA 2/3, Mistral, or similar offline LLMs to address project-specific goals. Small LLMs / Tiny LLMs: Familiarity with the usage of SLMs / tiny LLMs such as Phi-3 and OpenELM, their performance characteristics and usage requirements, and the nuances of how they can be consumed by use-case applications. Collaboration with Interdisciplinary Teams - Collaborate with cross-functional teams, including linguists, developers, and subject matter experts, to ensure seamless integration of language models into the project workflow. Text / Code Generation and Creative Applications - Explore creative applications of large language models, including text/code generation, summarization, and context-aware responses. Skills & Tools Programming Languages - Proficiency in Python for data analysis, statistical modeling, and machine learning. Machine Learning Libraries - Hands-on experience with machine learning libraries such as scikit-learn, Hugging Face, TensorFlow, and PyTorch. Statistical Analysis - Strong understanding of statistical techniques and their application in data analysis. Data Manipulation and Analysis - Expertise in data manipulation and analysis using Pandas and NumPy. Database Technologies - Familiarity with vector databases such as ChromaDB and Pinecone, SQL and NoSQL databases, and experience working with relational and non-relational databases. Data Visualization Tools - Proficient in data visualization tools such as Tableau, Matplotlib, or Seaborn. Familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling. Communication Skills - Excellent communication skills with the ability to convey technical concepts to non-technical audiences.
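To make the RAG workflow named in this listing concrete, here is a deliberately toy sketch of the retrieval-plus-prompt-assembly step. It uses TF-IDF retrieval purely for illustration; a production system of the kind described would use an embedding model with a vector database (e.g. ChromaDB or Pinecone), and the assembled prompt would be sent to an LLM such as an offline LLaMA or Mistral model. The documents and question are invented.

```python
# Toy sketch of the retrieval step behind a RAG application (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Travel expenses require prior manager approval.",
    "Password resets are handled by the IT service desk.",
]
question = "How long does invoice processing take?"

# Index the small knowledge base and embed the question in the same space.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# Retrieve the most relevant passage.
scores = cosine_similarity(query_vector, doc_vectors)[0]
context = documents[scores.argmax()]

# Assemble a grounded prompt; an LLM call would replace the print in practice.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```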

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (performance-based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 4th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the job: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business outcomes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the multibillion-dollar CK Birla Group, Birlasoft, boasting a team of 12,500+ professionals, is committed to carrying forward the Group's illustrious 162-year legacy. At our core, we prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, ensuring our commitment to building not just businesses but inclusive and sustainable communities. Join us in shaping a future where technology aligns seamlessly with purpose. Gen AI / LLM Data Science: · Must have 6+ years of experience working in data science, machine learning, and especially NLP technologies. · Exposure to various LLM technologies and a solid understanding of Transformer encoder networks. · Able to apply deep learning and generative modeling techniques to develop LLM solutions in the field of Artificial Intelligence. · Utilize your extensive knowledge and expertise in machine learning (ML) with a focus on generative models, including but not limited to generative adversarial networks (GANs), variational autoencoders (VAEs), and transformer-based architectures. · Solid understanding of model development, model serving, and training/re-training techniques in a data-sparse environment. · Very good understanding of prompt engineering techniques for developing instruction-based LLMs. · Must be able to design and implement state-of-the-art generative models for natural language processing (NLP) tasks such as text generation, text completion, language translation, and document summarization. · Work with SAs and collaborate with cross-functional teams to identify business requirements and deliver solutions that meet customer needs. · Passionate about learning and staying updated with the latest advancements in generative AI and LLMs. · Nice to have: contributions to the research community through publications, presentations, and participation in relevant conferences or workshops. · Evaluate and preprocess large-scale datasets, ensuring data quality and integrity, and develop data pipelines for training and evaluation of generative models. · Ability to articulate to business stakeholders the effects of hallucination and the various model behavioral analysis techniques followed. · Exposure to developing guardrails for LLMs with both open-source and cloud-native models. · Collaborate with software engineers to deploy and optimize generative models in production environments, considering factors such as scalability, efficiency, and real-time performance. · Nice to have: provide guidance to junior data scientists, sharing expertise and knowledge in generative AI and LLMs, and contribute to the overall growth and success of the data science team.

Posted 1 month ago

Apply

0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

About Us: Fireblaze AI School is part of Fireblaze Technologies, which was started in April 2018 with a vision to upskill and train people in emerging technologies. Mission Statement: “To Provide Measurable & Transformational Value To Learners' Careers” Vision Statement: “To Be The Most Successful & Respected Job-Oriented Training Provider Globally.” We focus on creating a broad digital impact, so a strong presence on digital platforms is essential for us. Below are the links to all our social media channels. Website - http://fireblazeaischool.in LinkedIn - https://www.linkedin.com/company/fireblazeaischool/ YouTube - https://www.youtube.com/c/FireblazeAISchool Facebook - https://www.facebook.com/fireblazeaischool/ Instagram - https://www.instagram.com/fireblazeaischool/ Twitter - https://twitter.com/FireblazeAi Google Nagpur - https://g.page/fireblazeaischoolnagpur?share Google Pune - https://g.page/fireblazeaischoolpune?share Spotify Podcast - https://open.spotify.com/show/0VXm4ikfRG29UcwlMnaDNJ Job Description We are seeking a motivated and detail-oriented AI Intern to join our Artificial Intelligence team. This role provides hands-on experience working on cutting-edge AI and machine learning projects. The ideal candidate is passionate about data, algorithms, and innovation. Key Responsibilities Assist in designing, developing, and testing machine learning models. Analyze and preprocess datasets for use in AI experiments. Support deployment of AI models into production environments or internal tools. Collaborate with data scientists, software engineers, and product teams. Document experiments, findings, and technical workflows. Contribute to literature reviews and stay current with AI/ML research trends. Requirements Currently pursuing a degree in Computer Science, Data Science, Artificial Intelligence, Engineering, or a related field. Excellent communication and teamwork abilities.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python or R for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (performance-based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 4th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (performance-based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 4th July 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Kenvue is currently recruiting for a: Senior Business Analyst, Data Science What we do At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID® that you already know and love. Science is our passion; care is our talent. Who We Are Our global team is ~ 22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights, innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future–and yours. Role reports to: Director Location: Asia Pacific, India, Karnataka, Bangalore Work Location: Hybrid What you will do About Kenvue: Kenvue is the world’s largest pure-play consumer health company by revenue. Built on more than a century of heritage, our iconic brands, including Aveeno®, Johnson’s®, Listerine®, and Neutrogena® are science-backed and recommended by healthcare professionals around the world. At Kenvue, we believe in the extraordinary power of everyday care and our teams work every day to put that power in consumers’ hands and earn a place in their hearts and homes. Kenvue is currently recruiting for: Senior Business Analyst, Data Science, Digital Transformation Office This position reports to Senior Data Science Manager and is based at Bangalore. Role reports to: Senior Data Science Manager Location: Bangalore Travel %: 10% What you will do: The Senior Business Analyst, Data Science will work to deliver Optimization solution to cross-functional Supply Chain teams. This individual will work with cross-functional subject matter experts (SMEs) to deeply understand business context and key business questions / opportunities, as well as collaborate with other teams (e.g., Digital Capability teams, Data Engineering, Data Quality, Data Management & Governance, ML Ops) to ensure delivery of scalable data science solutions. The ideal candidate for this role will demonstrate a combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypothesis through the discovery phase of the project and excellent written and communications skills to report back the findings in a clear, structured manner Key Responsibilities: Mathematical Modeling and Optimization: Develop, refine and review mathematical models to represent supply chain systems, including inventory management, production planning, transportation logistics, and distribution networks. Apply advanced optimization techniques, such as linear programming, integer programming, network flow, simulation, and heuristic algorithms, to solve complex supply chain problems Conduct sensitivity analysis, scenario modeling, and risk assessment to evaluate the impact of various factors on supply chain performance. Collaborate with stakeholders to understand business objectives, constraints, and requirements, and translate them into mathematical models and optimization problems. Data Analysis and Insights: Analyze large datasets, extract relevant information, and identify patterns and trends to support decision-making processes. 
Collaborate with data scientists and business analysts to gather and preprocess data from various sources, ensuring data accuracy and integrity. Generate actionable insights and recommendations based on data analysis to optimize supply chain operations, reduce costs, and improve customer service levels. Solution Deployment: Present findings, insights, and recommendations in a clear and concise manner. Make solution recommendations that appropriately balance speed to market and analytical soundness. Work with internal stakeholders such as data engineers, data scientists, business analysts, and project managers to ensure that the product is tested and deployed on time. Research and Innovation: Stay updated with the latest developments in operations research, supply chain management, and optimization techniques. Conduct research and explore innovative approaches to address supply chain challenges and drive continuous improvement. What we are looking for: Master's in Industrial Engineering, Operations Research, Management Science, or a related field, with 3+ years of total work experience in supply chain optimization. Proficiency in mathematical modeling and optimization techniques, such as linear programming, integer programming, network flow, simulation, and heuristic algorithms. Strong programming skills in languages such as Python, R, or MATLAB, with experience in optimization libraries (e.g., Gurobi, FICO, CPLEX) and data manipulation tools (e.g., pandas, NumPy). Experience with data analysis, statistical modeling, and visualization using tools like SQL, Tableau, or Power BI. Knowledge of supply chain concepts, including demand forecasting, inventory management, production planning, transportation logistics, and distribution networks. If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
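As a tiny illustration of the linear-programming style of model this role describes, the sketch below uses SciPy's open-source solver; a commercial solver such as Gurobi or CPLEX would be used the same way at real scale, and all numbers here are invented.

```python
# Toy production-planning LP: decide how many units of products A and B to make.
from scipy.optimize import linprog

# Maximize 40*A + 30*B  ->  linprog minimizes, so negate the objective.
c = [-40, -30]

# Constraints: 2A + 1B <= 100 machine-hours, 1A + 2B <= 80 labour-hours.
A_ub = [[2, 1],
        [1, 2]]
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")

print("optimal plan (units of A, B):", res.x)
print("maximum profit:", -res.fun)
```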

Posted 1 month ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Kenvue is currently recruiting for a: Sr. Analyst, Data Science What we do At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID® that you already know and love. Science is our passion; care is our talent. Who We Are Our global team is ~ 22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights, innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future–and yours. Role reports to: Senior Manager - Data Science Location: Asia Pacific, India, Karnataka, Bangalore Work Location: Hybrid What you will do About Kenvue: Kenvue is the world’s largest pure-play consumer health company by revenue. Built on more than a century of heritage, our iconic brands, including Aveeno®, Johnson’s®, Listerine®, and Neutrogena® are science-backed and recommended by healthcare professionals around the world. At Kenvue, we believe in the extraordinary power of everyday care and our teams work every day to put that power in consumers’ hands and earn a place in their hearts and homes. Sr. Analyst, Data Science This position reports to the Manager, Data Science and is based at Bengaluru, India. Role reports to: Manager, Data Science Location: Bengaluru, India Travel %: 10% What you will do: As Sr. Analyst, Data Science, you will develop and implement data science and machine learning solutions to solve complex business problems within Kenvue Operations. In this role, you will work closely with cross-functional teams to analyze large datasets, identify patterns, and generate insights that drive business value. The ideal candidate for this role will demonstrate a combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of the project, and excellent written and communication skills to report back the findings in a clear, structured manner. Key Responsibilities: Develop, refine and review mathematical models to represent supply chain systems, including inventory management, production planning, transportation logistics, and distribution networks. Apply various data science techniques (e.g., Operations Research, Advanced Forecasting, Machine Learning, and Artificial Intelligence) to solve complex supply chain problems. Collaborate with stakeholders to understand business objectives, constraints, and requirements, and translate them into mathematical models and optimization problems. Collaborate with data scientists and business analysts to gather and preprocess data from various sources, ensuring data accuracy and integrity. Analyze large datasets, extract relevant information, and identify patterns and trends to support decision-making processes. Follow code versioning through Bitbucket/Git, documenting work on Confluence. Build APIs for seamless integration with applications. Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment.
Stay up to date on the latest technological developments across data science, machine learning, operations research, optimization techniques, etc. Conduct research and explore innovative approaches to address supply chain challenges and drive continuous improvement. What We Are Looking For Required Qualifications: Master's degree in Computer Science, Operations Research, Management Science, or another relevant field. 4 – 6 years of business experience, with at least 4 years of experience working as a data scientist. Proficiency in mathematical modelling and optimization techniques, such as linear programming, integer programming, network flow, simulation, and heuristic algorithms. Proficiency with various data science technologies and modeling techniques. Strong programming skills in languages such as Python, with experience in optimization libraries and data manipulation tools (e.g., pandas, NumPy). Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, and Databricks. Experience with data analysis, statistical modelling, and visualization using tools like SQL and Streamlit. Desired Qualifications: Knowledge of supply chain concepts and processes, especially within the Procurement function and/or Procure-to-Pay value stream, is strongly preferred. Microsoft certification (e.g., Azure Fundamentals) is preferred. Machine Learning certification and/or hands-on experience developing ML-driven models is preferred. APICS certification (e.g., CPIM, CSCP) is preferred. Experience in creating CI/CD pipelines for deployment using Jenkins is preferred. Kenvue is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, protected veteran status, or any other legally protected characteristic, and will not be discriminated against on the basis of disability. If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

BTM Layout, Bengaluru, Karnataka

On-site

Job Title: Python Developer – Machine Learning & AI (2–3 Years Experience) Job Summary: We are seeking a skilled and motivated Python Developer with 2 to 3 years of experience in Machine Learning and Artificial Intelligence. The ideal candidate will have hands-on experience in developing, training, and deploying machine learning models, and should be proficient in Python and associated data science libraries. You will work with our data science and engineering teams to build intelligent solutions that solve real-world problems. Key Responsibilities: Develop and maintain machine learning models using Python. Work on AI-driven applications, including predictive modeling, natural language processing, and computer vision (based on project requirements). Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions. Preprocess, clean, and transform data for training and evaluation. Perform model training, tuning, evaluation, and deployment using tools like scikit-learn, TensorFlow, or PyTorch. Write modular, efficient, and testable code. Document processes, models, and experiments clearly for team use and future reference. Stay updated with the latest trends and advancements in AI and machine learning. Required Skills: 2–3 years of hands-on experience with Python programming. Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning). Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn. Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch. Good understanding of data structures and algorithms. Experience with model evaluation techniques and performance metrics. Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus. Strong analytical and problem-solving skills. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field. Experience with deploying ML models using Flask, FastAPI, or Docker. Knowledge of MLOps and model lifecycle management is an advantage. Understanding of NLP or Computer Vision is a plus. Job Type: Full-time Pay: Up to ₹700,000.00 per year Benefits: Health insurance Schedule: Day shift, Monday to Friday Ability to commute/relocate: BTM Layout, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning). Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn. Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch. Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus. Experience with deploying ML models using Flask, FastAPI, or Docker. What is your CTC (in LPA)? What is your expected CTC (in LPA)? What is your notice period? Location: BTM Layout, Bengaluru, Karnataka (Required) Work Location: In person Application Deadline: 06/07/2025
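The "training, tuning, evaluation" loop this listing refers to often looks like the hedged scikit-learn sketch below; the bundled iris dataset and the parameter grid are stand-ins chosen for illustration.

```python
# Hedged sketch of a train/tune/evaluate loop with cross-validated hyperparameter search.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameter tuning with 5-fold cross-validation.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},
    cv=5,
)
grid.fit(X_train, y_train)

# Evaluate the tuned model on held-out data.
print("best params:", grid.best_params_)
print("test accuracy:", accuracy_score(y_test, grid.predict(X_test)))
```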

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru South, Karnataka, India

On-site

Key Responsibilities: 1. Machine Learning Development Design, build, and evaluate supervised and unsupervised ML models (e.g., regression, classification, clustering, recommendation systems). Perform feature engineering, model tuning, and validation using cross-validation and performance metrics. 2. Data Preparation & Analysis Clean, preprocess, and transform large datasets from various sources. Conduct exploratory data analysis (EDA) to uncover patterns and insights. 3. Model Deployment & Monitoring Deploy ML models into production using tools like Flask, FastAPI, or cloud-native services. Monitor model performance and retrain/update models as needed. 4. Collaboration & Communication Work closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions. Present findings and model outcomes in a clear, actionable manner. 5. Tools & Technologies Use Python and libraries such as scikit-learn, XGBoost, TensorFlow, or PyTorch. Leverage version control (Git), Jupyter notebooks, and ML lifecycle tools (MLflow, DVC). Preferred Qualifications: Bachelor’s or master’s degree in computer science, Data Science, Statistics, or a related field. 2–3 years of experience in building and deploying ML models. Strong programming skills in Python; familiarity with SQL. Solid understanding of ML concepts, model evaluation, and statistical techniques. Exposure to cloud platforms (AWS, GCP, or Azure) and MLOps practices is a plus. Excellent problem-solving and communication skills.
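Since this listing names Flask and FastAPI as deployment options, here is a minimal FastAPI serving sketch under stated assumptions: "model.joblib" is a hypothetical artifact from an earlier training step, and the single flat feature vector is an invented input shape.

```python
# Minimal sketch of serving a trained model behind an HTTP endpoint with FastAPI.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed to exist from a prior training run

class Features(BaseModel):
    values: list[float]  # one flat feature vector

@app.post("/predict")
def predict(features: Features):
    # scikit-learn models expect a 2-D array: one row per sample.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally (assuming this file is app.py):  uvicorn app:app --reload
```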

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

The Role We are looking for a highly skilled Data Engineer with strong expertise in Python programming, data processing, and analytical problem-solving. This role requires a blend of analytical skills, engineering capabilities, and hands-on data manipulation to derive actionable insights, build efficient pipelines, and support data-driven decision-making across teams. Responsibilities Data Exploration & Analysis: Analyze large and complex datasets to extract meaningful insights and drive decision-making processes. Identify data trends, anomalies, and opportunities for improvement within datasets and communicate findings clearly to stakeholders. Collaborate with cross-functional teams to understand business requirements and transform them into technical solutions. Data Pipeline Development Design, develop, and maintain robust data pipelines for efficient data ingestion, transformation, and storage. Optimize and automate data workflows to improve data availability, quality, and processing efficiency. Implement ETL (Extract, Transform, Load) processes to support analytics and reporting needs. Data Modeling & Feature Engineering Build, validate, and maintain data models to support machine learning and statistical analysis needs. Engineer and preprocess features for machine learning algorithms and ensure data quality and consistency. Develop scalable solutions for feature storage, retrieval, and real-time model serving. Programming & Scripting Write efficient, scalable, and well-documented Python code to support data engineering and analysis tasks. Collaborate on code reviews, optimize code performance, and apply best practices in coding and version control. Use Python libraries (e.g., Pandas, NumPy, SQLAlchemy) to streamline data workflows and support analysis. Performance Optimization & Troubleshooting Monitor, troubleshoot, and enhance the performance of data systems and pipelines. Address data integrity and pipeline issues promptly to ensure reliable data availability and system uptime. Implement monitoring and logging to preemptively detect and resolve issues. Collaboration & Communication Work closely with data scientists, analysts, and other engineers to develop cohesive data solutions. Translate complex technical issues into non-technical language for clear communication with stakeholders. Contribute to documentation, data standards, and best practices to foster a data-centric culture. Job Requirements Technical Skills: Strong proficiency in Python and familiarity with data processing libraries (e.g., Pandas, NumPy, PySpark). Experience with SQL for data extraction and manipulation. Data Engineering Knowledge: Experience in designing, building, and managing data pipelines, ETL workflows, and data warehousing solutions. Statistical & Analytical Skills: Ability to apply statistical methods for data analysis and familiarity with machine learning concepts. Problem-Solving Mindset: Proven ability to troubleshoot complex data issues and continuously improve workflows for efficiency and accuracy. Communication: Effective communication skills to convey data insights to technical and non-technical stakeholders alike. Bonus: Experience with cloud platforms (e.g., AWS, GCP), containerization (e.g., Docker), and orchestration tools (e.g., Airflow) is a plus. Preferred Education & Experience Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, Mathematics, or a related field. 3+ years of experience in a data science or data engineering role.
Benefits Compensation commensurate with experience Unlimited vacation Ongoing education and training Bonuses and profit-sharing
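To ground the ETL and pipeline work described in this listing, here is a small hedged sketch of an extract-transform-load step with pandas and SQLAlchemy; the file name, table name, and connection string are all hypothetical.

```python
# Hedged ETL sketch: extract from CSV, transform with pandas, load via SQLAlchemy.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")  # stand-in for a real warehouse

# Extract
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: basic cleaning plus a derived reporting column.
orders = orders.dropna(subset=["customer_id"]).drop_duplicates()
orders["order_month"] = orders["order_date"].dt.to_period("M").astype(str)

# Load: idempotent replace into a reporting table.
orders.to_sql("orders_clean", engine, if_exists="replace", index=False)

print(f"loaded {len(orders)} rows into orders_clean")
```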

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities Extract, transform, and load (ETL) large-scale datasets from diverse sources using big data tools and frameworks. Cleanse and preprocess structured and unstructured data to maintain consistency, accuracy, and usability. Automate data ingestion and processing pipelines using scripting and workflow orchestration tools. Conduct exploratory data analysis to uncover patterns, correlations, and anomalies in large datasets. Apply advanced statistical and machine learning techniques to extract insights and support data-driven decisions. Build and maintain interactive dashboards and data visualizations using tools like Power BI, Tableau, or similar platforms. Collaborate with data engineers, analysts, and business stakeholders to translate data insights into actionable strategies. Ensure data quality by validating completeness, accuracy, and consistency across datasets. Monitor and resolve data discrepancies, applying automated quality control checks where applicable. Support business intelligence efforts by defining KPIs, analyzing trends, and modeling future scenarios. Work with cross-functional teams to improve operational efficiency through data insights. Stay updated with emerging technologies in big data, analytics, and cloud computing. Participate in internal knowledge-sharing sessions and external training to continually enhance data capabilities. Promote best practices in data governance, privacy, and compliance across all data-related activities. About Company: As one of the world's largest private healthcare providers, IHH Healthcare operates over 140 healthcare facilities across 10 countries, including 80+ hospitals, clinics, and ambulatory care centers, and employs over 70,000 skilled professionals to deliver on our aspiration to Care. For Good. For millions of patients every year.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities
Extract, transform, and load (ETL) data from multiple sources.
Clean and preprocess datasets to ensure consistency and accuracy.
Develop automation scripts and workflows for efficient data collection and processing.
Conduct exploratory data analysis to identify trends, patterns, and anomalies.
Apply statistical methods and predictive modeling to generate actionable insights.
Create interactive dashboards, reports, and data visualizations for stakeholders.
Communicate analytical findings clearly to both technical and non-technical teams.
Validate data integrity, completeness, and accuracy across all datasets.
Identify and resolve data quality issues through regular audits and checks.
Define and track KPIs and performance metrics in collaboration with business teams.
Analyze business processes to recommend improvements based on data insights.
Provide ad-hoc analysis and scenario modeling to support strategic decisions.
Stay updated with industry trends, tools, and emerging technologies in analytics.
Participate in training and share knowledge to promote continuous learning.
Collaborate cross-functionally to embed data-driven decision-making into the organization.
About Company: IHH Healthcare is one of the world's largest private healthcare services providers. With an integrated network spanning over 80 hospitals in 10 countries, we deliver on our aspiration to Care. For Good. For millions of patients every year.
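For illustration, a small sketch of KPI definition and trend tracking in Python with Pandas, one of the tasks listed above; the dataset and the monthly-revenue KPI are invented for the example.

import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-15", "2024-01-20", "2024-02-10", "2024-03-05"]),
    "revenue": [1200.0, 800.0, 1500.0, 900.0],
})

# KPI: total revenue per month and its month-over-month change.
monthly = sales.set_index("date").resample("MS")["revenue"].sum().to_frame("revenue")
monthly["mom_change_pct"] = monthly["revenue"].pct_change() * 100
print(monthly)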

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - Data Science Analyst, S&C GN
Management Level: Analyst
Location: Bangalore / Gurugram / Mumbai / Hyderabad / Chennai
Must have skills: Gen AI / ML, SQL, Python, Azure / AWS, ML Ops
Good to have skills: Experience in data science projects focused on Utilities, Oil & Gas, Mining, Manufacturing, Chemicals and Forest Products
Job Summary: We are seeking a highly skilled and motivated Data Science Analyst to work on innovative projects and drive impactful solutions in domains such as Utilities, Oil & Gas, Mining, Manufacturing, Chemicals and Forest Products. This role requires hands-on technical expertise and client delivery management skills to execute cutting-edge projects in Generative AI, data science, and cloud-based analytics.
Key Responsibilities
Data Science and Engineering
Perform advanced analytics using Python, SQL, and PySpark with machine learning frameworks.
Develop predictive models, recommendation systems, and optimization solutions tailored to business needs.
Manage and preprocess large, complex datasets, ensuring efficient pipelines and advanced feature engineering across structured and unstructured data.
Build MLOps pipelines for model training/retraining, monitoring, and scalability.
Dashboard and Reporting
Develop dashboards, reports, and insights in Power BI/Tableau to track the impact of deployed models on business outcomes.
Present results and recommendations to stakeholders, leveraging data storytelling to drive decision-making.
Cloud Platform Expertise
Design and implement end-to-end data science workflows on cloud platforms (e.g., AWS, Azure, GCP) for business-critical projects.
Leverage cloud-native tools and services (e.g., Databricks, ADF, Lambda, Glue, Azure ML) for training, deploying, and monitoring machine learning models at scale.
Generative AI Expertise
Lead the development of Generative AI based applications and solutions leveraging frameworks like LangChain and LlamaIndex.
Drive model evaluation strategies using advanced metrics (e.g., BLEU, ROUGE, FID) and iteratively optimize performance for production-grade applications.
Architect deployment solutions, including API development and seamless integration with existing systems.
Required Qualifications
Experience: 1-5 years in data science
Education: Bachelor's/Master's degree in computer science, statistics, applied mathematics, or a related field
Industry Knowledge: Preferred experience in Utilities, Oil & Gas, Mining, Manufacturing, Chemicals and Forest Products
Technical Skills
Programming: Proficiency in Python, SQL, and PySpark.
GenAI Expertise: Hands-on experience building GenAI-based applications and solutions; experience deploying GenAI applications in production.
Cloud Platforms: Experience with Azure / AWS / GCP
Visualization Tools: Power BI / Tableau
Preferred Skills
Strong analytical and problem-solving skills with a results-oriented mindset.
Good communication and stakeholder management capabilities.
Strong ability to generate business insights and present them to stakeholders.
About Our Company | Accenture
Experience: 1-5 years in data science
Educational Qualification: Bachelor's/Master's degree in computer science, statistics, applied mathematics, or a related field
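To make the evaluation-metrics point concrete, here is a minimal, hedged sketch of scoring generated text with BLEU using NLTK; the sentences are made up, and a real project would typically add ROUGE and task-specific checks as the posting suggests.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the pipeline reduced downtime by ten percent".split()
candidate = "the pipeline cut downtime by ten percent".split()

# Smoothing avoids zero scores on short sentences with missing n-grams.
smoother = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smoother)
print(f"BLEU: {score:.3f}")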

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Location: Remote (Work from Home)
Duration: 3–6 Months
Stipend: Performance-based, up to 15,000
Company: Zeno Talent
Department: Artificial Intelligence & Data Science
PPO Opportunity: Yes – High-performing interns will be offered a Pre-Placement Offer (PPO) for a full-time role
About Zeno Talent:
Zeno Talent is a dynamic IT services and consulting company that delivers advanced technology solutions across domains like Data Science, Artificial Intelligence, ERP, and IT Consulting. Our mission is to connect talent with opportunity while solving real-world business problems using cutting-edge technologies. We value innovation, learning, and professional growth.
Job Description:
We are seeking a passionate and motivated AI Intern (Remote) to join our Artificial Intelligence & Data Science team. You will work on real-time AI/ML projects, gaining hands-on experience and professional mentorship. This internship is ideal for someone looking to launch their career in AI and grow within a supportive, fast-paced environment. Outstanding interns will receive a Pre-Placement Offer (PPO) for a full-time role at Zeno Talent.
Key Responsibilities:
Assist in building, training, and fine-tuning machine learning models
Clean, preprocess, and analyze datasets from real-world applications
Support development of AI solutions using Python and relevant libraries
Collaborate with mentors and team members to contribute to live projects
Document technical work and report progress regularly
Research and stay updated on new AI trends and tools
Eligibility & Skills:
Currently pursuing or recently completed a degree in Computer Science, Data Science, AI, or a related field
Solid foundation in Python and libraries like NumPy, Pandas, Scikit-learn
Basic understanding of machine learning algorithms
Familiarity with data visualization tools (e.g., Matplotlib, Seaborn)
Strong problem-solving and analytical skills
Willingness to learn, adapt, and take initiative in a remote team environment
Bonus (Good to Have):
Experience with Git and GitHub
Exposure to NLP, deep learning, or computer vision
Participation in AI projects, competitions, or hackathons
What You’ll Gain:
Real-world experience working on live AI projects
One-on-one mentorship from experienced professionals
Letter of Recommendation & Internship Certificate
PPO (Pre-Placement Offer) opportunity for top performers
Career guidance and resume/project review sessions
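As a rough picture of the model-building work interns assist with, here is a minimal scikit-learn sketch (a library named in the posting); the Iris dataset is a stand-in for real project data.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy dataset and hold out 20% of rows for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")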

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid)
Company: Unified Mentor
Location: Remote
Duration: 3 months
Application Deadline: 3rd July 2025
Opportunity: Full-time role based on performance + Internship Certificate
About Unified Mentor
Unified Mentor provides aspiring professionals with hands-on experience in data science through industry-relevant projects, helping them build successful careers.
Responsibilities
Collect, preprocess, and analyze large datasets
Develop predictive models and machine learning algorithms
Perform exploratory data analysis (EDA) to extract insights
Create data visualizations and dashboards for effective communication
Collaborate with cross-functional teams to deliver data-driven solutions
Requirements
Enrolled in or a graduate of Data Science, Computer Science, Statistics, or a related field
Proficiency in Python or R for data analysis and modeling
Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred)
Familiarity with data visualization tools like Tableau, Power BI, or Matplotlib
Strong analytical and problem-solving skills
Excellent communication and teamwork abilities
Stipend & Benefits
Stipend: ₹7,500 - ₹15,000 (performance-based, paid)
Hands-on experience in data science projects
Certificate of Internship & Letter of Recommendation
Opportunity to build a strong portfolio of data science models and applications
Potential for full-time employment based on performance
How to Apply
Submit your resume and a cover letter with the subject line "Data Science Intern Application."
Equal Opportunity: Unified Mentor welcomes applicants from all backgrounds.
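For illustration, a short exploratory-data-analysis (EDA) sketch in Python with Pandas and NumPy; the synthetic customer dataset and its columns (age, income, churned) are assumptions made for the example.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 65, size=200),
    "income": rng.normal(50_000, 12_000, size=200).round(2),
    "churned": rng.integers(0, 2, size=200),
})

print(df.describe())                            # per-column distribution summary
print(df.corr(numeric_only=True))               # pairwise correlations
print(df.groupby("churned")["income"].mean())   # simple segment comparison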

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At ABB, we help industries outrun - leaner and cleaner. Here, progress is an expectation - for you, your team, and the world. As a global market leader, we’ll give you what you need to make it happen. It won’t always be easy; growing takes grit. But at ABB, you’ll never run alone. Run what runs the world.
This position reports to: Technical Authority
In this role, you will have the opportunity to support engineering activities in projects of all sizes and complexities, under the supervision of senior engineers. Each day, you will complete tasks cost-effectively and in accordance with contract specifications, quality standards, safety requirements, and cybersecurity policies and standards. You will also showcase your expertise by collecting data and supporting production of the engineering design.
The work model for the role is: L1-Onsite
This role contributes to the Process Automation business area in Energy Industries in Bangalore, India.
You will be mainly accountable for:
Performing exploratory data analysis, cleaning and preprocessing data, and identifying trends and patterns.
Developing and implementing machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems.
Evaluating the performance of machine learning models and fine-tuning them for optimal results.
Creating informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.
Conducting statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.
Working with data engineers to integrate, transform, and store data from various sources.
Qualifications for the role
Engineering graduate with exposure to data science from a reputed institution.
2+ years of experience with AI/ML concepts and Python (preferred); knowledge of deep learning frameworks such as PyTorch and TensorFlow is preferred.
Domain knowledge of manufacturing/process industries, physics, and first-principles-based analysis.
Analytical thinking for translating data into meaningful insights that can be consumed by ML models for training and prediction.
Ability to deploy models using cloud services such as Azure Databricks or Azure ML Studio.
Familiarity with technologies like Docker, Kubernetes, and MLflow is good to have.
Agile development of customer-centric prototypes or proofs of concept for focused digital solutions.
Very good communication skills; must be able to discuss requirements effectively with client teams and internal teams.
More about us
The Energy Industries Division serves a wide range of industrial sectors, including hydrocarbons, chemicals, pharmaceuticals, power generation and water. With its integrated solutions that automate, digitalize and electrify operations, the Division is committed to supporting traditional industries in their efforts to decarbonize. The Division also supports the development, integration and scaling up of new and renewable energy models. The Division’s goal is to help customers adapt and succeed in the rapidly changing global energy transition. Harnessing data, machine learning and artificial intelligence (AI), the Division brings over 50 years of domain expertise delivering solutions designed to improve energy, process and production efficiency, as well as reduce risk, operational cost and capital cost, while minimizing waste for customers, from project start-up and throughout the entire plant lifecycle.
We value people from different backgrounds. Could this be your story?
Apply today or visit www.abb.com to read more about us and learn about the impact of our solutions across the globe. Fraud Warning: Any genuine offer from ABB will always be preceded by a formal application and interview process. We never ask for money from job applicants. For current open positions you can visit our career website https://global.abb/group/en/careers and apply. Please refer to detailed recruitment fraud caution notice using the link https://global.abb/group/en/careers/how-to-apply/fraud-warning.
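For illustration of the hypothesis-testing and A/B-testing work mentioned in the accountabilities above, a minimal SciPy sketch comparing two synthetic process measurements; the data and the choice of Welch's t-test are assumptions for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
baseline = rng.normal(loc=100.0, scale=5.0, size=500)  # e.g. current process yield
variant = rng.normal(loc=101.0, scale=5.0, size=500)   # e.g. tuned process yield

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(variant, baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")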

Posted 1 month ago

Apply

Featured Companies