6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role: Lead Python/AI Developer
Experience: 6+ Years
Location: Ahmedabad (Gujarat)

Roles and Responsibilities:
• Help the Python/AI team build Python/AI solution architectures leveraging source technologies
• Drive technical discussions with clients alongside Project Managers
• Create effort-estimation matrices for solutions/deliverables for the delivery team
• Implement AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling
• Collaborate with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance
• Mentor and coach junior Python/AI/ML engineers
• Share knowledge through technical presentations
• Implement new Python/AI features to high-quality coding standards

Must-Have:
• B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field
• Strong proficiency in the Python programming language
• Strong verbal and written communication skills, with analytical and problem-solving ability
• Proficiency in debugging and exception handling
• Professional experience developing and operating AI systems in production
• Hands-on, strong programming skills in Python, in particular with modern ML and NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.)
• Hands-on experience with AWS services such as EC2, S3, Lambda, and SageMaker
• Experience with collaborative development workflows: version control (we use GitHub), code reviews, DevOps (incl. automated testing), and CI/CD
• Comfort with essential tools and libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, and Pandas
• Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB)
• Experience working in Agile methodology

Good-To-Have:
• A Master's degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field
• Python frameworks (Django/Flask/FastAPI) and API integration
• AI/ML/DL/MLOps certification from AWS
• Experience with the OpenAI API
• Proficiency in Japanese
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Cyber Towers, Hyderabad & Vijayawada
🏢 Company: Datavalley India Private Limited
📅 Type: Full-Time (Immediate Joiner)
🎓 Education: B.Tech in Computer Science, Data Science, Artificial Intelligence, or related branches
🧪 Experience: 1 - 5 Years

About Datavalley.ai:
Datavalley.ai is a forward-thinking AI company revolutionizing industries through advanced Machine Learning and Deep Learning solutions. Based in Hyderabad's tech core, Cyber Towers, we engineer scalable AI systems that transform raw data into powerful business intelligence. At Datavalley.ai, we train fresh minds into AI professionals who build with impact.

Roles & Responsibilities:
- Design and implement Machine Learning and Deep Learning algorithms
- Work on data preprocessing, feature engineering, and model evaluation
- Build and deploy models using:
  - Machine Learning: Linear Regression, Logistic Regression, Decision Trees, Random Forest, XGBoost, SVM, K-Means, PCA
  - Deep Learning: ANN, CNN, RNN, LSTM, Transfer Learning (ResNet, VGG, BERT)
  - AI Applications: NLP, Computer Vision, Recommendation Systems
- Use Python and tools like Scikit-learn, TensorFlow, Keras, PyTorch
- Collaborate with teams to deploy models using Flask, FastAPI, Streamlit

Skills Required:
- Strong programming skills in Python
- Experience with NumPy, Pandas, Matplotlib, Seaborn
- Knowledge of Scikit-learn, TensorFlow, Keras, PyTorch
- Understanding of model evaluation metrics (Accuracy, Precision, Recall, F1-score)
- Familiarity with SQL and basic knowledge of NoSQL (MongoDB)
- Comfortable with Jupyter/Colab, Git, and version control
- Basic exposure to Flask/FastAPI for deployment

Qualifications:
- B.Tech in Computer Science, AI & ML, Data Science, or related disciplines
- Strong foundation in algorithms, mathematics, and statistics
- Ability to build, train, and evaluate ML/DL models
- Project or internship in AI/ML is preferred

📩 To Apply: Send your resume to artishukla@datavalley.ai
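The evaluation metrics named in this posting (Accuracy, Precision, Recall, F1-score) can be illustrated with a short, dependency-free sketch; the label and prediction arrays here are invented toy data, not anything from the posting:

```python
# Toy binary-classification results (hypothetical data for illustration).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)                    # fraction correct overall
precision = tp / (tp + fp)                            # correctness of positive calls
recall = tp / (tp + fn)                               # coverage of actual positives
f1 = 2 * precision * recall / (precision + recall)    # harmonic mean of the two
```

In day-to-day work, scikit-learn's `accuracy_score`, `precision_score`, `recall_score`, and `f1_score` compute the same quantities directly from the two arrays.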
Posted 1 week ago
0 years
0 Lacs
Sonipat, Haryana, India
On-site
About the Role

Overview: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society.

We are currently looking for a Data Engineer and Subject Matter Expert in Data Mining to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for classification, association rule mining, clustering, and anomaly detection.
● Design and facilitate practical lab sessions and assignments that give students hands-on experience with modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to keep the curriculum relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.

Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.

Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.

To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education!
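As a small taste of the association-rule-mining material this course covers, the support and confidence of a candidate rule can be computed directly from transactions; the tiny basket data below is invented purely for illustration:

```python
# Hypothetical market-basket transactions (illustrative only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Support of the combined itemset divided by support of the antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Rule {bread} -> {milk}: how often does milk appear in baskets containing bread?
s = support({"bread", "milk"}, transactions)       # appears in 2 of 4 baskets
c = confidence({"bread"}, {"milk"}, transactions)  # 0.5 / 0.75
```

Algorithms like Apriori build on exactly these two quantities, pruning itemsets whose support falls below a threshold before computing rule confidence.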
Posted 1 week ago
6.0 years
5 - 15 Lacs
Jodhpur Char Rasta, Ahmedabad, Gujarat
On-site
Role: Lead Python/AI Developer
Experience: 6+ Years
Location: Ahmedabad (Gujarat)

Roles and Responsibilities:
• Help the Python/AI team build Python/AI solution architectures leveraging source technologies
• Drive technical discussions with clients alongside Project Managers
• Create effort-estimation matrices for solutions/deliverables for the delivery team
• Implement AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling
• Collaborate with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance
• Mentor and coach junior Python/AI/ML engineers
• Share knowledge through technical presentations
• Implement new Python/AI features to high-quality coding standards

Must-Have:
• B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field
• Strong proficiency in the Python programming language
• Strong verbal and written communication skills, with analytical and problem-solving ability
• Proficiency in debugging and exception handling
• Professional experience developing and operating AI systems in production
• Hands-on, strong programming skills in Python, in particular with modern ML and NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.)
• Hands-on experience with AWS services such as EC2, S3, Lambda, and SageMaker
• Experience with collaborative development workflows: version control (we use GitHub), code reviews, DevOps (incl. automated testing), and CI/CD
• Comfort with essential tools and libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, and Pandas
• Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB)
• Experience working in Agile methodology

Good-To-Have:
• A Master's degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field
• Python frameworks (Django/Flask/FastAPI) and API integration
• AI/ML/DL/MLOps certification from AWS
• Experience with the OpenAI API
• Proficiency in Japanese

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Benefits: Provident Fund
Work Location: In person
Expected Start Date: 14/08/2025
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hello, hope you are doing well. We are hiring for the role of Sr. AI/ML/Data Scientist. Please reply with your updated resume at astha.jaiswal@techsciresearch.com if you are interested. Below are the details:

Job Location: Noida
Job Role: Sr. AI/ML/Data Scientist

Overview: We are seeking a highly skilled and experienced AI/ML expert to spearhead the design, development, and deployment of advanced artificial intelligence and machine learning models. This role requires a strategic thinker with a deep understanding of ML algorithms, model optimization, and production-level AI systems. You will guide cross-functional teams, mentor junior data scientists, and help shape the AI roadmap to drive innovation and business impact. The role involves statistical analysis, data modeling, and interpreting large datasets. We are looking for an AI/ML expert with experience creating AI models for data intelligence companies, specializing in developing and deploying artificial intelligence and machine learning solutions tailored for data-driven businesses. This expert should possess a strong background in data analysis, statistics, programming, and machine learning algorithms, enabling them to design innovative AI models that extract valuable insights and patterns from vast amounts of data.

About the Role: You will lead the design and development of a cutting-edge application powered by large language models (LLMs). This tool will provide market analysis and generate high-quality, data-driven periodic insights. You will play a critical role in building a scalable and intelligent system that integrates structured data, NLP capabilities, and domain-specific knowledge to produce analyst-grade content.

Key Responsibilities:
- Design and develop LLM-based systems for automated market analysis.
- Build data pipelines to ingest, clean, and structure data from multiple sources (e.g., market feeds, news articles, technical reports, internal datasets).
- Fine-tune or prompt-engineer LLMs (e.g., GPT-4.5, Llama, Mistral) to generate concise, insightful reports.
- Collaborate closely with domain experts to integrate industry-specific context and validation into model outputs.
- Implement robust evaluation metrics and monitoring systems to ensure the quality, relevance, and accuracy of generated insights.
- Develop and maintain APIs and/or user interfaces that let analysts or clients interact with the LLM system.
- Stay up to date with advancements in the GenAI ecosystem and recommend relevant improvements or integrations.
- Participate in code reviews, experimentation pipelines, and collaborative research discussions.

Qualifications

Required:
- Strong fundamentals in machine learning, deep learning, and natural language processing (NLP).
- Proficiency in Python, with hands-on experience using libraries such as NumPy, Pandas, and Matplotlib/Seaborn for data analysis and visualization.
- Experience developing applications using LLMs (both closed- and open-source models).
- Familiarity with frameworks like Hugging Face Transformers, LangChain, LlamaIndex, etc.
- Experience building ML models (e.g., Random Forest, XGBoost, LightGBM, SVMs), along with familiarity with training and validating models.
- Practical understanding of deep learning frameworks: TensorFlow or PyTorch.
- Knowledge of prompt engineering, Retrieval-Augmented Generation (RAG), and LLM evaluation strategies.
- Experience working with REST APIs, data ingestion pipelines, and automation workflows.
- Strong analytical thinking, problem-solving skills, and the ability to convert complex technical work into business-relevant insights.

Preferred:
- Familiarity with the chemical or energy industry, or prior experience in market research/analyst workflows.
- Exposure to frameworks such as OpenAI Agentic SDK, CrewAI, AutoGen, SmolAgent, etc.
- Experience deploying ML/LLM solutions to production environments (Docker, CI/CD).
- Hands-on experience with vector databases such as FAISS, Weaviate, Pinecone, or ChromaDB.
- Experience with dashboarding tools and visualization libraries (e.g., Streamlit, Plotly, Dash, or Tableau).
- Exposure to cloud platforms (AWS, GCP, or Azure), including GPU instances and model-hosting services.

About ChemAnalyst: ChemAnalyst is a digital platform that keeps a real-time eye on fluctuations in the chemicals and petrochemicals markets, enabling its customers to make wise business decisions. With over 450 chemical products traded globally, we bring detailed market information and pricing data to your fingertips. Our real-time pricing and commentary updates keep users acquainted with new commercial opportunities. Each day, we flash the major happenings around the globe in our news section. Our market analysis section takes it a step further, offering an in-depth evaluation of over 15 parameters, including capacity, production, supply, demand gap, and company share, among others. Our team of experts analyses the factors influencing the market and forecasts market data for up to the next 10 years. We are a trusted source of information for our international clients, ensuring user-friendly and customized deliveries on time.

Website: https://www.chemanalyst.com/
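Retrieval-Augmented Generation, mentioned in the requirements above, pairs a retrieval step with prompt construction before the model call. The sketch below uses a naive keyword-overlap retriever over an invented three-document corpus and stops short of the actual generation step, which in a real system would go through an LLM API and, at scale, a vector database rather than word overlap:

```python
# Minimal RAG-style retrieval + prompt assembly (toy corpus, no real LLM call).
documents = [
    "Polyethylene prices rose 4% in Q2 on tight supply.",
    "Crude oil demand softened amid seasonal refinery maintenance.",
    "Ammonia capacity additions in Asia pressured margins.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble the grounding context and question into a single prompt string."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {query}"

query = "Why did polyethylene prices rise?"
hits = retrieve(query, documents)
prompt = build_prompt(query, hits)  # this string would be sent to the LLM
```

The document names, query, and scoring function are all illustrative assumptions; the structure (retrieve, then ground the prompt) is the part that carries over to production systems.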
Posted 1 week ago
0.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Job Information
Date Opened: 07/23/2025
Industry: AEC
Job Type: Permanent
Work Experience: 3 - 5 Years
City: Mumbai
State/Province: Maharashtra
Country: India
Zip/Postal Code: 400093

About Us
Axium Global (formerly XS CAD), established in 2002, is a UK-based MEP (M&E) and architectural design and BIM Information Technology Enabled Services (ITES) provider with an ISO 9001:2015 and ISO 27001:2022 certified Global Delivery Centre in Mumbai, India. With additional presence in the USA, Australia, and the UAE, our global reach allows us to provide services to customers with the added benefit of local knowledge and expertise. Axium Global is established as one of the leading pre-construction planning services companies in the UK and India, serving the building services (MEP), retail, homebuilder, architectural, and construction sectors with high-quality MEP engineering design and BIM solutions.

Job Description
We are looking for a hands-on and visionary AI Lead to spearhead all AI initiatives within our organization. You will lead a focused team comprising one Data Scientist, one ML Engineer, and one Intern, while also being directly involved in designing and implementing AI solutions. The role involves identifying impactful AI use cases, conducting research, proposing tools, and deploying AI models into production to enhance products, processes, and user experiences. You will work across diverse domains such as NLP, computer vision, recommendation systems, predictive analytics, and generative AI. The position also covers conversational AI, intelligent automation, and AI-assisted workflows for the AEC industry. A strong understanding of ethical and responsible AI practices is expected.
Key Responsibilities:
- Lead AI research, tool evaluation, and strategy aligned with business needs
- Build and deploy models for NLP, computer vision, generative AI, recommendation systems, and time-series forecasting
- Guide the development of conversational AI, intelligent automation, and design-specific AI tools
- Mentor and manage a small team of AI/ML professionals
- Collaborate with cross-functional teams to integrate AI into products and workflows
- Ensure ethical use of AI and compliance with data governance standards
- Oversee the lifecycle of AI models from prototyping to deployment and monitoring

Qualifications and Experience Required:

Educational Qualification:
- BE/BTech or ME/MTech degree in Computer Science, Data Science, Artificial Intelligence, or a related field
- Certifications in AI/ML, cloud AI platforms, or responsible AI practices are a plus

Technical Skills:
- 4-5 years of experience in AI/ML projects
- Strong programming skills in Python (must-have); R is a plus
- Experience with TensorFlow, PyTorch, Scikit-learn, OpenCV
- Familiarity with NLP tools like spaCy, NLTK, and Hugging Face Transformers
- Backend integration using FastAPI or Flask
- Experience deploying models using Docker, Kubernetes, and cloud services like AWS, GCP, or Azure ML
- Use of MLflow and DVC for experiment tracking and model versioning
- Strong data handling with Pandas and NumPy, and visualization using Matplotlib and Seaborn
- Working knowledge of SQL, NoSQL, and BI tools like Power BI or Tableau

Preferred Exposure (Nice to Have):
- Familiarity with AEC, design workflows, or other data-rich industries
- Experience collaborating with domain experts to frame and solve AI problems

Leadership and Strategic Skills:
- Proven ability to lead small AI/ML teams
- Strong communication and stakeholder management
- Familiarity with ethical AI principles and data privacy frameworks
- Ability to translate business problems into AI solutions and deliver results

Compensation: The selected candidate will receive competitive compensation and remuneration in line with qualifications and experience. Compensation will not be a constraint for the right candidate.

What We Offer:
- A fulfilling working environment that is respectful and ethical
- A stable and progressive career opportunity
- State-of-the-art office infrastructure with the latest hardware and software for professional growth
- An in-house, internationally certified training division and innovation team focused on training in the latest tools and trends
- A culture of discussing and implementing a planned career-growth path with team leaders
- Transparent fixed and variable compensation policies based on team and individual performance, ensuring a productive association
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Job Summary: We are seeking a highly skilled and forward-thinking Senior Data Scientist to join our Automation Centre of Excellence within the Research & Analytics team. The role calls for expertise in Generative AI and Machine Learning, and the ability to lead end-to-end development of high-performance GenAI/ML solutions that streamline complex business workflows and elevate analytical precision. This role demands deep expertise in Data Science, Generative AI (GenAI), Python programming, and automation. The ideal candidate will lead the development of intelligent, scalable solutions that automate workflows, enhance decision-making, and unlock business value through advanced AI techniques.
Awareness of Microsoft Power Platform is good to have.

Roles and Responsibilities:
- Collaborate with cross-functional teams to identify automation opportunities and deliver AI-driven solutions.
- Design and implement end-to-end data science workflows using Python, integrating diverse data sources (on-premise and cloud).
- Lead the transformation of manual, Excel-based processes into robust, governed Python-based automation pipelines.
- Apply advanced data science techniques including data preprocessing, feature engineering, and model development.
- Leverage GenAI models (e.g., GPT, DALL·E, LLaMA) for content generation, data exploration, and intelligent automation.
- Build and deploy applications using Microsoft Power Platform (Power BI, Power Apps, and Power Automate).
- Integrate systems and automate workflows using WTW Unify.
- Ensure high standards of code quality, modularity, and scalability in Python development.
- Implement CI/CD pipelines using Azure DevOps for seamless deployment of data science solutions.
- Maintain data governance, quality, and compliance with organizational standards.
- Stay abreast of emerging trends in AI, GenAI, and data engineering to drive innovation.

Technical Skills & Tools:

Mandatory:
- Key Skills: Generative AI, Machine Learning, Deep Learning, NLP
- Python (data processing, engineering, automation)
- Libraries: Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, TensorFlow, Keras, OpenCV, NLTK, spaCy, Gensim, TextBlob, fastText, FastAPI
- GenAI frameworks (e.g., OpenAI, Hugging Face, Meta AI, LangChain, LangGraph)
- Version Control & DevOps Tools: GitHub (CI/CD Actions), Azure DevOps
- Version control systems (ADO/Bitbucket)

Preferred:
- R Programming, Posit Workbench, R Shiny
- Knowledge of Microsoft Power Platform, WTW Unify

Functional Expertise:
- 8+ years of experience in data science and GenAI projects
- Proven track record in deploying GenAI solutions in enterprise environments
- Experience in the Insurance domain (e.g., Finance, Actuarial) is a plus
- Strong communication skills to engage with technical and non-technical stakeholders

Qualifications

Educational Qualifications:
- Master's degree in Statistics, Mathematics, Economics, or Econometrics from Tier 1 institutions, OR
- BE/B-Tech, MCA, or MBA from Tier 1 institutions
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Technical Expertise
- Good understanding of the latest Python and Python libraries
- In-depth knowledge and experience using Python, Django, Flask, React, and Node.js
- Expertise in Python libraries like NumPy, Pandas, Matplotlib, Seaborn, Rasa, Rasa X, etc.
- Expertise in REST and SOAP web services
- Extensive knowledge in the design, development, and implementation of highly scalable and reliable microservices
- Expert in front-end technologies such as React, Angular, JavaScript, HTML, CSS, and Bootstrap
- Knowledge and experience in packaging and deploying containerized applications
- Familiarity with DevOps tools such as Docker, Kubernetes, Git, and Jenkins for CI/CD is required
- Familiarity with build and development tools is expected
- Familiarity with deploying applications on web servers/application servers
- Knowledge of web application servers, MQ (message queue) technologies, and Apache Kafka messaging
- Experience with SQL and NoSQL databases and the ability to write complex SQL queries
- Experience in unit and integration testing

Soft Skills
- Excellent written and verbal communication skills
- Ability to write professional technical documentation
- Able to actively follow progress and deliver information on a frequent basis

(ref:hirist.tech)
Posted 1 week ago
4.0 - 9.0 years
12 - 20 Lacs
Bengaluru
Work from Office
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Python Lead/Developer
Experience: 4-10 Years
Skill Set: Python, Flask, Pandas, NumPy, Matplotlib, Plotly, SQL
Location: Pune, Hyderabad, Gurgaon

Job Summary: Are you a seasoned software developer with 5 to 10 years of experience and a passion for building scalable backend systems? We're looking for someone just like you!

Location: Bangalore
Qualification: Bachelor's degree in Engineering
Availability: Immediate joiners preferred

Key Responsibilities:
- Design and develop scalable Python applications using FastAPI
- Collaborate with cross-functional teams including front-end, data science, and DevOps
- Work with libraries like Pandas, NumPy, and Scikit-learn for data-driven solutions
- Build and maintain robust backend APIs and database integrations
- Implement unit, integration, and end-to-end testing
- Contribute to architecture and design using best practices

Mandatory Skills:
- Strong Python expertise with data libraries (Pandas, NumPy, Matplotlib, Plotly)
- Experience with FastAPI/Flask, SQL/NoSQL (MongoDB, Postgres, CRDB)
- Middleware orchestration (MuleSoft, BizTalk)
- CI/CD pipelines, RESTful APIs, OOP, and design patterns

Desirable Skills:
- Familiarity with OpenAI tools (GitHub Copilot, ChatGPT API)
- Experience with Azure, Big Data, Kafka/RabbitMQ, Docker/Kubernetes
- Exposure to distributed and high-volume backend systems

If interested, please share your updated resume with the below details at pragati.jha@gspann.com:
- LinkedIn Profile Link:
- Position Applied For:
- Full Name:
- Contact Number:
- Email ID:
- Total Experience:
- Relevant Experience:
- Current Company:
- Current Salary:
- Expected Salary:
- Notice Period:
- Last Working Day (LWD):
- Any Offers in Hand?
- Current Location:
- Preferred Location:
- Comfortable with 5 days a week?
- Comfortable coming in for a face-to-face client round after the initial online rounds?
- Skills and rating (scale of 5): Data Engineer / Python / Flask / Cloud (which cloud) / SQL / Any other technologies:
- Interview availability (please confirm a time slot):

Once we have these details, we can move forward with the next steps.
Posted 1 week ago
1.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Location: Vadodara
Type: Full-time / Internship
Duration (for interns): Minimum 3 months
Stipend/CTC: Based on experience and role

About Gururo
Gururo is a global leader in practical, career-transforming education. With a mission to equip professionals and students with real-world skills, we specialize in project management, leadership, business analytics, and emerging technologies. Join our dynamic, impact-driven team to work on data-centric products that empower decision-making and transform education at scale.

Who Can Apply?
- Interns: Final-year students or recent graduates in Computer Science, Statistics, Mathematics, Data Science, or related fields with a passion for data storytelling.
- Freshers: 0-1 years of experience with academic projects or internship exposure to data analysis.
- Experienced Professionals: 1+ years of hands-on experience in analytics or data roles, with a portfolio or GitHub/Power BI/Tableau profile showcasing analytical thinking.

Key Responsibilities
- Collect, clean, and analyze data from multiple sources to uncover trends and actionable insights
- Build dashboards and reports using tools like Excel, Power BI, or Tableau
- Translate business problems into analytical solutions through exploratory data analysis (EDA)
- Support A/B testing, cohort analysis, and customer segmentation for business decision-making
- Use SQL to query and manipulate structured data from relational databases
- Assist in building data pipelines and automation for recurring analytics tasks
- Communicate findings effectively through visualizations and presentations
- Collaborate closely with product, marketing, and engineering teams to support data-driven strategies

Must-Have Skills
- Strong proficiency in Excel and SQL
- Basic to intermediate knowledge of Python for data analysis (Pandas, NumPy)
- Familiarity with data visualization tools like Power BI, Tableau, or Matplotlib/Seaborn
- Good understanding of descriptive statistics, data cleaning, and EDA
- Ability to communicate insights clearly to technical and non-technical stakeholders
- Experience with Google Analytics, TruConversion, or similar analytics platforms (optional for interns)

Good to Have (Optional)
- Experience with data storytelling, dashboarding, and automation scripts
- Exposure to R, Looker, or Metabase
- Familiarity with web/app analytics, conversion funnels, and retention metrics
- Understanding of data warehousing concepts and basic ETL pipelines
- GitHub portfolio, published reports, or participation in data hackathons

What You'll Gain
- Real-world experience in data analytics and business intelligence
- Mentorship from senior analysts and cross-functional collaboration exposure
- Certificate of Internship/Experience & Letter of Recommendation (for interns)
- Opportunity to work on data for educational product growth and user-behavior insights
- Flexible working hours and performance-linked growth
- Hands-on end-to-end data projects, from raw data to executive insights
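The A/B-testing work this role supports often reduces to comparing two conversion rates. A two-proportion z-test can be sketched with the standard library alone; the visitor and conversion counts below are made-up numbers for illustration:

```python
import math

# Hypothetical experiment results: (conversions, visitors) per variant.
conv_a, n_a = 120, 2400  # control: 5.0% conversion
conv_b, n_b = 156, 2400  # treatment: 6.5% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
z = (p_b - p_a) / se  # test statistic

# Two-sided p-value via the normal CDF; math.erf avoids a SciPy dependency.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Libraries such as statsmodels provide the same test ready-made; the value of the hand calculation is seeing that the decision hinges only on the two rates, the pooled variance, and the sample sizes.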
Posted 1 week ago
4.0 years
25 - 42 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 25,00,000 – Rs 42,00,000 (INR 25–42 LPA)
Min Experience: 4 years
Location: Bengaluru
Job Type: Full-time

We are looking for a highly skilled and motivated Lead Data Scientist to join our growing data team. The ideal candidate will bring strong expertise in machine learning, data science using Python, and end-to-end model development to lead high-impact projects that drive business insights and innovation. You will work closely with cross-functional teams including engineering, product, and business stakeholders to identify opportunities, formulate data-driven solutions, and deploy models at scale.

Key Responsibilities:
• Lead and own the entire lifecycle of data science projects, from problem definition and data exploration to model development, validation, and deployment.
• Develop, implement, and optimize machine learning models using Python-based data science libraries and frameworks (e.g., Pandas, Scikit-learn, NumPy, TensorFlow, PyTorch).
• Collaborate with engineering teams to deploy models into production systems and monitor their performance post-deployment.
• Conduct exploratory data analysis (EDA) to derive insights, identify patterns, and uncover business opportunities.
• Work with large and complex datasets using efficient data processing techniques, ensuring data quality and integrity throughout the pipeline.
• Lead data-driven initiatives and mentor junior data scientists on technical and strategic project execution.
• Present results and insights to business stakeholders in a clear and impactful manner, driving actionable recommendations.
• Continuously research and evaluate new tools, technologies, and methodologies to improve the team's capabilities and outcomes.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field.
• 4–9 years of hands-on experience in data science, machine learning, and Python for data science applications.
• Strong proficiency in Python and its data science stack: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn, etc.
• Solid understanding of machine learning algorithms, including supervised, unsupervised, and ensemble methods.
• Experience building and deploying ML models in production environments.
• Good knowledge of data wrangling, feature engineering, model tuning, and evaluation techniques.
• Familiarity with version control systems (e.g., Git), cloud platforms (e.g., AWS, GCP, or Azure), and MLOps best practices is a plus.
• Excellent problem-solving skills and the ability to break down complex problems into actionable solutions.
• Strong communication and collaboration skills, with the ability to lead technical discussions and influence stakeholders.

Preferred Skills:
• Experience with deep learning frameworks such as TensorFlow or PyTorch.
• Familiarity with distributed data processing (e.g., Spark or Dask).
• Exposure to business intelligence tools and data visualization platforms like Power BI or Tableau.
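The train/validate/evaluate loop this role describes can be sketched with the stack named above; the dataset, feature names, and churn rule below are purely hypothetical stand-ins:

```python
# Minimal sketch of the model lifecycle described above (illustrative only):
# synthetic data -> ensemble model -> held-out evaluation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, 500),
    "monthly_spend": rng.normal(100, 25, 500),
})
# Hypothetical target: churn is flagged for short-tenure, low-spend users.
df["churned"] = ((df["tenure_months"] < 12) & (df["monthly_spend"] < 100)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["tenure_months", "monthly_spend"]], df["churned"],
    test_size=0.2, random_state=42, stratify=df["churned"])

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

Because the target here is a deterministic function of the two features, the ensemble should recover it almost exactly; real churn data is far noisier.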
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth.
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary – Associate – GenAI – Mumbai

Role: Associate
Experience: 3–5 Years
Location: Mumbai

Job Description: Candidate with 3–5 years of experience and a strong background in machine learning, technical expertise, and domain knowledge in Banking, Financial Services, and Insurance (BFSI). Experience with Generative AI (GenAI) is a must-have.

Key Responsibilities:
· Collaborate with clients to understand their business needs and provide data-driven solutions.
· Develop and implement machine learning models to solve complex business problems.
· Analyze large datasets to extract actionable insights and drive decision-making.
· Present findings and recommendations to stakeholders in a clear and concise manner.
· Stay updated with the latest trends and advancements in data science and machine learning.

GenAI Experience:
· Generative AI (GenAI) experience, including working with models like GPT, BERT, and other transformer-based architectures.
· Ability to leverage GenAI for tasks such as text generation, summarization, and conversational AI.
· Experience in developing and deploying GenAI solutions to enhance business processes and customer experiences.

Technical Skills:
· Programming Languages: Proficiency in Python, R, and SQL for data manipulation, analysis, and model development.
· Machine Learning Frameworks: Extensive experience with TensorFlow, PyTorch, and Scikit-learn for building and deploying models.
· Data Visualization Tools: Strong knowledge of Tableau, Power BI, and Matplotlib to create insightful visualizations.
· Cloud Platforms: Expertise in AWS, Azure, and Google Cloud for scalable and efficient data solutions.
· Database Management: Proficiency in SQL and NoSQL databases for data storage and retrieval.
· Version Control: Experience with Git for collaborative development and code management.
· APIs and Web Services: Ability to integrate and utilize APIs for data access and model deployment.

Machine Learning Algorithms:
· Supervised and Unsupervised Learning
· Regression Analysis
· Classification Techniques
· Clustering Algorithms
· Natural Language Processing (NLP)
· Time Series Analysis
· Deep Learning
· Reinforcement Learning

Mandatory skill sets: GenAI
Preferred skill sets: GenAI
Years of experience required: 3–5 Years
Education qualification: B.E. (B.Tech)/M.E./M.Tech

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Extract Transform Load (ETL), Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
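GenAI work of the kind described above often pairs a generator with a retrieval step that grounds answers in domain documents. A minimal sketch of that retrieval step, using TF-IDF as a stand-in for a production embedding model; the BFSI snippets and query are hypothetical:

```python
# Hypothetical retrieval step for a RAG-style GenAI pipeline (sketch only):
# rank a small corpus of BFSI snippets against a user query.
# TF-IDF stands in for a real embedding model here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Home loan interest rates are revised quarterly.",
    "Credit card fraud alerts are sent within minutes.",
    "Term insurance premiums depend on age and health.",
]
query = "How often are home loan rates updated?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(corpus)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
best = corpus[scores.argmax()]  # snippet used to ground the generated answer
```

The highest-scoring snippet would then be passed to the LLM as context rather than asking it to answer unaided.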
1.0 - 3.0 years
3 - 10 Lacs
Calcutta
Remote
Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL)
Location: Kolkata (must)
Employment Type: Full-Time
Experience Level: 1–3 Years

About Us: We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration.

Key Responsibilities:
• Build and maintain scalable machine learning pipelines using Python.
• Deploy and monitor models using MLflow and MLOps stacks.
• Design and implement data workflows using standard Python libraries such as PySpark.
• Leverage standard data science libraries (scikit-learn, pandas, numpy, matplotlib, etc.) for model development and evaluation.
• Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications.
• Collaborate closely with cross-functional teams to meet business objectives.
• Handle multiple ML projects simultaneously with robust Git branching expertise.

Must-Have Qualifications:
• Expertise in Python for data science and backend development.
• Solid experience with PostgreSQL and MSSQL databases.
• Hands-on experience with standard data science packages such as Scikit-Learn, Pandas, NumPy, and Matplotlib.
• Experience working with Databricks, MLflow, and Azure.
• Strong understanding of MLOps frameworks and deployment automation.
• Prior exposure to FastAPI and GenAI tools like LangChain or Azure OpenAI is a big plus.

Preferred Qualifications:
• Experience in the Finance, Legal, or Regulatory domain.
• Working knowledge of clustering algorithms and forecasting techniques.
• Previous experience in developing reusable AI frameworks or productized ML solutions.

Education: B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field.

Why Join Us?
• Work on cutting-edge ML and GenAI projects.
• Be part of a collaborative and forward-thinking team.
• Opportunity for rapid growth and technical leadership.

Job Type: Full-time
Pay: ₹344,590.33 – ₹1,050,111.38 per year
Benefits: Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home
Education: Bachelor's (Required)
Experience: Python: 3 years (Required); ML: 2 years (Required)
Location: Kolkata, West Bengal (Required)
Work Location: In person
Application Deadline: 02/08/2025
Expected Start Date: 04/08/2025
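The "deploy and monitor with MLflow" responsibility boils down to recording each run's parameters, metrics, and model artifact. A stdlib-only sketch of that pattern (the trainer and its metrics are toy stand-ins; in the real stack this is `mlflow.log_param` / `mlflow.log_metric` / `mlflow.sklearn.log_model`):

```python
# Stdlib-only sketch of MLflow-style experiment tracking: persist params,
# metrics, and the model artifact for each training run.
import json
import pickle
import tempfile
import time
from pathlib import Path

def run_experiment(run_dir: Path, params: dict, train_fn):
    """Train a model, then persist params, metrics, and the artifact."""
    run_dir.mkdir(parents=True, exist_ok=True)
    model, metrics = train_fn(params)
    (run_dir / "meta.json").write_text(
        json.dumps({"params": params, "metrics": metrics, "ts": time.time()}))
    (run_dir / "model.pkl").write_bytes(pickle.dumps(model))
    return metrics

def train_threshold(params):
    # Toy "model": a single threshold, standing in for a real estimator.
    model = {"threshold": params["threshold"]}
    data = [1, 5, 9, 12]
    preds = [x > model["threshold"] for x in data]
    metrics = {"positive_rate": sum(preds) / len(preds)}
    return model, metrics

run = Path(tempfile.mkdtemp()) / "run_001"
metrics = run_experiment(run, {"threshold": 6}, train_threshold)
reloaded = pickle.loads((run / "model.pkl").read_bytes())
```

A later serving process can reload `model.pkl` and cross-check `meta.json` to know exactly which parameters produced it.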
5.0 - 7.0 years
10 - 15 Lacs
Chennai
Work from Office
Role & Responsibilities: We are seeking an experienced Django Python API Developer who combines hands-on technical expertise with strong leadership and mentoring capabilities. You will lead and develop a team of engineers, take ownership of end-to-end delivery, and ensure high-quality, scalable solutions in compliance with industry and government standards.

Key Responsibilities:
- Lead API development with Python, Django, and Django REST Framework.
- Architect scalable backend services; design and enforce API standards (OpenAPI/Swagger).
- Implement and oversee deployment pipelines (Jenkins, GitHub Actions) and container orchestration (Docker, Kubernetes).
- Provision and manage infrastructure using Terraform, CloudFormation, or ARM templates.
- Mentor the backend team: code reviews, pair programming, and technical workshops.
- Collaborate with frontend leads, QA, and operations to ensure end-to-end delivery.

Required Technical Skills:
- Python 3.7+, Django, Django REST Framework.
- PostgreSQL, MySQL, or MongoDB performance tuning.
- CI/CD (Jenkins, GitHub Actions, Azure DevOps); Docker, Kubernetes; Terraform or CloudFormation.
- Security and compliance (OWASP, GIGW guidelines).

Preferred Experience:
- Experience leading e-Governance or public-sector initiatives.
- Working knowledge of message brokers (RabbitMQ, Kafka) and serverless architectures.
- Cloud deployments and observability solutions.

Soft Skills & Attributes:
- Leadership & Mentorship: Demonstrated ability to lead and develop backend teams.
- Hands-On Approach: Deep involvement in coding, architecture, and deployment.
- Ownership & Accountability: Ensures reliable, high-quality API services.
- Communication: Articulates technical vision and aligns stakeholders.
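"Design and enforce API standards" means every request is validated against a declared contract before any business logic runs. A stdlib-only sketch of that idea (the field contract is hypothetical; in the actual stack a Django REST Framework serializer or an OpenAPI validator plays this role):

```python
# Sketch of enforcing a request schema at the API boundary (stdlib only);
# a DRF serializer or OpenAPI validator would do this in production.
REQUIRED = {"name": str, "email": str, "age": int}  # hypothetical contract

def validate_payload(payload: dict) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in payload:
            errors.append(f"{field}: missing")
        elif not isinstance(payload[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
    return errors

ok = validate_payload({"name": "Asha", "email": "a@example.com", "age": 31})
bad = validate_payload({"name": "Asha", "age": "31"})
```

Centralizing the contract like this is what lets the OpenAPI/Swagger document and the runtime behaviour stay in sync.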
3.0 - 5.0 years
3 - 5 Lacs
Ahmedabad
Work from Office
We are seeking a skilled Python Developer to join our team. The ideal candidate will be responsible for working with existing APIs or developing new APIs based on our requirements. You should have a strong foundation in Python and experience with RESTful services and cloud infrastructure.

Requirements:
• Strong understanding of Python
• Experience with RESTful services and cloud infrastructure
• Ability to develop microservices/functions
• Familiarity with libraries such as Pandas, NumPy, Matplotlib & Seaborn, Scikit-learn, Flask, Django, Requests, FastAPI, and TensorFlow & PyTorch
• Basic understanding of SQL and databases
• Ability to write clean, maintainable code
• Experience deploying applications at scale in production environments
• Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium
• Knowledge of equities, futures, or options microstructures is a plus
• Experience with data visualization and dashboard building is a plus

Why Join Us?
• Opportunity to work on high-impact real-world projects
• Exposure to cutting-edge technologies and financial datasets
• A collaborative, supportive, and learning-focused team culture
• 5-day work week (Monday to Friday)
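The web-scraping requirement above reduces to parsing HTML and extracting targeted elements. A stdlib-only sketch (the markup and the `price` class are hypothetical; BeautifulSoup or Scrapy, named in the posting, make this far more concise):

```python
# Minimal scraping sketch using only the stdlib HTML parser;
# BeautifulSoup/Scrapy would replace this class in practice.
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collect the text of every <span class="price"> element."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_price = False

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())

html = ('<ul><li><span class="price">101.5</span></li>'
        '<li><span class="price">99.8</span></li></ul>')
parser = PriceExtractor()
parser.feed(html)
```

For live sites, the same extraction logic would sit behind a polite HTTP client (rate limits, robots.txt) rather than a hard-coded string.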
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – Core Algorithm and Model Expert

1. Role Objective:
The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.

2. Key Responsibilities:
2.1. Design and train models using Naval-specific data and deliver them in the form of end products.
2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential.
2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected, to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field, with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch Professional Certification.

4. Core Skills & Tools:
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.

5. Core AI/ML Competencies:
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.

6. Programming & Frameworks:
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.

7. Model Development & Optimization:
7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.

8. Generative AI & NLP Applications:
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.

9. Experience Requirements:
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contribution to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.

10. Product Development Skills:
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.

11. Cross-Compatibility Requirements:
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI- and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
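The quantization work named in the responsibilities (items 2.4 and 7.2 above) can be illustrated with a symmetric per-tensor int8 scheme; this NumPy-only sketch is purely conceptual, since production deployments would use the quantization toolkits in PyTorch, TensorRT, or ONNX Runtime:

```python
# Sketch of post-training int8 weight quantization: symmetric,
# per-tensor scale, NumPy only (conceptual, not a production path).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 plus a scale factor for dequantization."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by half the scale step
```

The int8 tensor is 4x smaller than float32, at the cost of a reconstruction error no larger than half of one quantization step.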
1.0 years
0 Lacs
India
Remote
Location: Remote (India)
Type: Full-Time
Experience: 0–1 year
Industry: Artificial Intelligence / Data Science / Tech

About Us: We're a fast-moving, AI-native startup building scalable intelligent systems that harness the power of data to solve real-world problems. From LLMs to predictive modeling, we thrive on transforming raw information into actionable intelligence using the latest in machine learning and data automation.

Role Overview: We're hiring a Junior Data Analyst to join our remote team. You'll collaborate with data scientists, ML engineers, and product teams to clean, analyze, and structure datasets that power next-gen AI products. If you love data, patterns, and productivity hacks using AI tools, this is your chance to break into the AI industry.

Key Responsibilities:
• Clean, preprocess, and organize large volumes of structured and unstructured data
• Conduct exploratory data analysis (EDA) to uncover trends, patterns, and insights
• Support feature engineering and contribute to AI/ML model preparation
• Develop dashboards, reports, and visualizations using Power BI, Tableau, Seaborn, etc.
• Use tools like Python, SQL, Excel, and AI assistants to streamline repetitive tasks
• Collaborate cross-functionally to support data-driven decision-making
• Maintain documentation for data workflows and ensure data integrity

Tech Stack & Tools You'll Work With:
• Languages: Python (Pandas, NumPy), SQL, R (optional)
• Data Tools: Jupyter, Excel/Google Sheets, BigQuery, Snowflake (optional)
• Visualization: Power BI, Tableau, Matplotlib, Seaborn, Plotly
• Productivity + AI Tools: Gemini CLI, Claude Code, Cursor, ChatGPT Code Interpreter, etc.
• Project Tools: GitHub, Notion, Slack, Jira

You're a Great Fit If You Have:
• A strong analytical mindset and curiosity for patterns in data
• A solid foundation in data cleaning, transformation, and EDA
• A basic understanding of databases and querying using SQL
• Familiarity with at least one scripting language (preferably Python)
• Interest in AI and how data powers intelligent systems
• Bonus: Experience with AI programming assistants or interest in using them

Requirements:
• Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field
• 0–1 year of professional experience in a data-focused role
• Strong communication skills and ability to work independently in a remote setup
• Based in India with reliable internet access and a good work-from-home environment

What You'll Get:
• Work at the intersection of data analytics and AI
• Remote-first flexibility and asynchronous work culture
• Mentorship from experienced data scientists and AI engineers
• Exposure to real-world AI projects and best practices
• Access to premium AI productivity tools and training resources
• Opportunity to grow into senior analytics or data science roles
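The clean-then-explore loop this role centres on can be sketched in a few lines of pandas; the regions, revenue figures, and median-imputation choice below are all hypothetical:

```python
# Minimal sketch of the cleaning + EDA loop: handle missing values,
# derive a feature, then summarize by group (pandas only).
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "region": ["N", "S", "N", "S", "N"],
    "revenue": [120.0, np.nan, 95.0, 110.0, np.nan],
    "orders": [10, 8, 9, 11, 7],
})

# Impute missing revenue with the column median (one simple, common choice).
clean = raw.assign(revenue=raw["revenue"].fillna(raw["revenue"].median()))
clean["revenue_per_order"] = clean["revenue"] / clean["orders"]

# EDA-style summary: average revenue per region.
summary = clean.groupby("region")["revenue"].mean()
```

In a real dataset the imputation strategy itself is an analytical decision worth documenting, per the data-integrity responsibility above.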
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Science Trainer
Company: LMES Academy Private Limited
Location: Pallavaram, Chennai (online)
Experience: 6+ years in Data Science with Python
Employment Type: Part-Time (2 days/week + doubt-clearing sessions)

About Us: LMES Academy is a leading educational platform dedicated to empowering students with industry-relevant skills through innovative and practical learning experiences. We're on a mission to bridge the gap between academic knowledge and real-world applications.

Job Description: We are seeking an experienced Data Science Trainer with deep expertise in Python and applied data science techniques. The ideal candidate will have a passion for teaching and mentoring, with the ability to simplify complex concepts for learners.

Roles & Responsibilities:
• Deliver interactive training sessions on Data Science and Python on any 2 weekdays (Monday to Friday)
• Conduct doubt-clarification sessions twice a week, ensuring students grasp concepts effectively
• Develop training content, real-world case studies, and project-based learning materials
• Guide students in understanding core concepts such as: data wrangling and preprocessing, exploratory data analysis, statistical modeling and machine learning, and Python libraries (Pandas, NumPy, Scikit-learn, Matplotlib, etc.)
• Evaluate student progress and provide constructive feedback
• Stay updated with the latest trends in Data Science & AI

Requirements:
• Minimum 6 years of hands-on experience in Data Science and Python
• Strong knowledge of machine learning algorithms, data visualization, and model evaluation techniques
• Prior experience in teaching/training (preferred but not mandatory)
• Excellent communication and presentation skills
• Passion for mentoring and student success
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary: We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates.

Experience: 1 year

Responsibilities:

1. AI-Based Scheduling Algorithm Development
• Develop and refine scheduling models using: constraint programming, Mixed Integer Programming (MIP), metaheuristic algorithms (e.g., Genetic Algorithm, Ant Colony, Simulated Annealing), and Reinforcement Learning or Deep Q-Learning.
• Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models.
• Create simulation environments to test scheduling models under different scenarios.

2. Data Exploration & Feature Engineering
• Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources.
• Build pipelines for data preprocessing, normalization, and handling missing values.
• Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.

3. Model Validation & Deployment
• Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes.
• Deploy solutions using APIs, dashboards (Streamlit, Dash), or via integration with existing production systems.
• Support ongoing maintenance, updates, and performance tuning of deployed models.

4. Collaboration & Stakeholder Engagement
• Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results.
• Document solution approaches and model assumptions, and provide technical training to stakeholders.

Qualifications:
• Bachelor's or Master's degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent.
• Minimum 1 year of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, and optimization techniques or industrial scheduling.
• Proficiency in Python, especially with pandas, numpy, scikit-learn; ortools, pulp, cvxpy, or other optimization libraries; and matplotlib/plotly for visualization.
• Solid understanding of production planning & control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering).
• Familiarity with version control (Git), Jupyter/VS Code environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills:
• Experience with time-series analysis, sensor data, or anomaly detection.
• Experience with manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data.
• Experience with simulation tools (SimPy, Arena, FlexSim) or digital-twin technologies.
• Exposure to containerization (Docker) and model deployment (FastAPI, Flask).
• Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma.

Soft Skills:
• Strong problem-solving mindset with the ability to balance technical depth and business context.
• Excellent communication and storytelling skills to convey insights to both technical and non-technical stakeholders.
• Eagerness to learn new tools, technologies, and domain knowledge.
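One of the metaheuristics named above, simulated annealing, can be shown end-to-end on a toy single-machine sequencing problem: minimize total tardiness given per-job processing times and due dates. The job data here is hypothetical, and a real scheduler would handle multiple machines, changeovers, and shift constraints:

```python
# Pure-Python simulated-annealing sketch for single-machine sequencing:
# minimize total tardiness over permutations of jobs (toy instance).
import math
import random

jobs = {"A": (4, 5), "B": (2, 3), "C": (6, 18), "D": (3, 6)}  # (proc_time, due)

def total_tardiness(order):
    t, tardiness = 0, 0
    for j in order:
        proc, due = jobs[j]
        t += proc
        tardiness += max(0, t - due)
    return tardiness

def anneal(order, temp=10.0, cooling=0.995, steps=2000, seed=0):
    rng = random.Random(seed)
    best = cur = list(order)
    for _ in range(steps):
        i, k = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[k] = cand[k], cand[i]  # neighbour: swap two jobs
        delta = total_tardiness(cand) - total_tardiness(cur)
        # Always accept improvements; accept worse moves with decaying odds.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur = cand
            if total_tardiness(cur) < total_tardiness(best):
                best = list(cur)
        temp *= cooling
    return best

best_order = anneal(["A", "B", "C", "D"])
```

On this instance the earliest-due-date order is optimal with a total tardiness of 4, and the annealer converges to an equivalent sequence; for realistic instances, constraint programming or MIP (also listed above) gives provable bounds that metaheuristics do not.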
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The Data Visualization Engineer position at Zoetis India Capability Center (ZICC) in Hyderabad offers a unique opportunity to be part of a team that drives transformative advancements in animal healthcare. As a key member of the pharmaceutical R&D team, you will play a crucial role in creating insightful and interactive visualizations to support decision-making in drug discovery, development, and clinical research. Your responsibilities will include designing and developing a variety of visualizations, from interactive dashboards to static visual representations, to summarize key insights from high-throughput screening and clinical trial data. Collaborating closely with cross-functional teams, you will translate complex scientific data into clear visual narratives tailored to technical and non-technical audiences. In this role, you will also be responsible for maintaining and optimizing visualization tools, ensuring alignment with pharmaceutical R&D standards and compliance requirements. Staying updated on the latest trends in visualization technology, you will apply advanced techniques like 3D molecular visualization and predictive modeling visuals to enhance data representation. Working with various stakeholders such as data scientists, bioinformaticians, and clinical researchers, you will integrate, clean, and structure datasets for visualization purposes. Your role will also involve collaborating with Zoetis Tech & Digital teams to ensure seamless integration of IT solutions and alignment with organizational objectives. To excel in this position, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field. Experience in the pharmaceutical or biotech sectors will be a strong advantage. Proficiency in visualization tools such as Tableau, Power BI, and programming languages like Python, R, or JavaScript is essential. 
Additionally, familiarity with data handling tools, omics and network visualization platforms, and dashboarding tools will be beneficial. Soft skills such as strong storytelling ability, effective communication, collaboration with interdisciplinary teams, and analytical thinking are crucial for success in this role. Travel requirements for this full-time position are minimal, ranging from 0–10%. Join us at Zoetis and be part of our journey to pioneer innovation and drive the future of animal healthcare through impactful data visualization.
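A static summary visualization of the kind this role produces, for example flagging screening hits against a threshold, can be sketched in a few lines of Matplotlib; the compound names, inhibition values, and 75% threshold are all hypothetical:

```python
# Sketch of a static R&D summary plot: % inhibition per compound,
# with a hit threshold line (all data hypothetical).
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

compounds = ["CMP-1", "CMP-2", "CMP-3", "CMP-4"]
inhibition = [82.5, 64.1, 91.3, 47.8]  # % inhibition, hypothetical

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(compounds, inhibition, color="steelblue")
ax.axhline(75, linestyle="--", color="gray", label="hit threshold (75%)")
ax.set_ylabel("% inhibition")
ax.set_title("High-throughput screen: top compounds")
ax.legend()
fig.tight_layout()
fig.savefig(os.path.join(tempfile.gettempdir(), "screen_summary.png"))

hits = [c for c, v in zip(compounds, inhibition) if v >= 75]
```

The same data-to-figure pattern scales up to interactive dashboards in Tableau or Power BI, where the threshold becomes a user-adjustable control.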
0.0 years
0 Lacs
Pune, Maharashtra
On-site
Who are we: Fulcrum Digital is an agile, next-generation digital accelerating company providing digital transformation and technology services, from ideation to implementation. These services have applicability across a variety of industries, including banking & financial services, insurance, retail, higher education, food, healthcare, and manufacturing.

The Role:

Splunk Engineering:
• Design and implement Splunk dashboards and alerts for real-time monitoring.
• Write and optimize complex SPL queries to extract actionable insights from logs and metrics.

Data Science & Forecasting:
• Analyze historical operational data to identify trends and anomalies.
• Develop predictive models for time-based forecasting using Python.
• Visualize trends and forecasts using libraries like Matplotlib, Seaborn, or Plotly.

Automation & DevOps:
• Integrate data science workflows with Jenkins for automated execution of scripts.
• Write and maintain shell scripts for automation and system orchestration.

Cloud & Infrastructure:
• Work with AWS or PCF to deploy and manage data pipelines and monitoring tools.
• Leverage cloud-native services for data storage, processing, and visualization.

Requirements

Must-Have Skills:
• Linux shell scripting, Python
• ITIL/ITSM application troubleshooting
• Monitoring tools: Splunk, Dynatrace
• Jenkins CI/CD
• Experience working with AWS Cloud or Pivotal Cloud Foundry (PCF)

Good to Have:
• Payments flows, switching, settlements, authorisation flows
• Event framework architecture
• Ansible/Chef (basic)

Job Opening ID: RRF_5567
Job Type: Full time
Industry: IT Services
Date Opened: 22/07/2025
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411057
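The time-based forecasting task above can be sketched by fitting a linear trend to a short operational series and projecting it forward; the daily error counts are hypothetical, and a production model would likely use something richer (e.g., Holt-Winters or an ARIMA variant):

```python
# Sketch of time-based forecasting: fit a linear trend to a daily
# error-count series and project the next three days (data hypothetical).
import numpy as np

daily_errors = np.array([12, 14, 13, 17, 16, 19, 21, 20])  # last 8 days
days = np.arange(len(daily_errors))

slope, intercept = np.polyfit(days, daily_errors, deg=1)  # least-squares line
future_days = np.arange(len(daily_errors), len(daily_errors) + 3)
forecast = slope * future_days + intercept
```

In the stack described above, a script like this could run on a Jenkins schedule, with the forecast pushed back into Splunk to drive a proactive alert.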
3.0 - 7.0 years
0 Lacs
jaipur, rajasthan
On-site
Amplework Software is a full-stack development agency based in Jaipur (Rajasthan), IND, specializing in end-to-end software development solutions for clients worldwide. We are dedicated to delivering high-quality products that align with business requirements and leverage cutting-edge technologies. Our expertise encompasses custom software development, mobile applications, AI-driven solutions, and enterprise applications. Join our innovative team that drives digital transformation through technology. We are looking for a Mid-Level Python and AI Engineer to join our team. In this role, you will assist in building and training machine learning models using frameworks such as TensorFlow, PyTorch, and Scikit-Learn. You will experiment with pre-trained AI models for NLP, Computer Vision, and Predictive Analytics. Additionally, you will work with structured and unstructured data, collaborate with data scientists and software engineers, and continuously learn, experiment, and optimize models to enhance performance and efficiency. Ideal candidates should possess a Bachelor's degree in Computer Science, Engineering, AI, or a related field and proficiency in Python with experience in writing optimized and clean code. Strong problem-solving skills, understanding of machine learning concepts, and experience with data processing libraries are required. Familiarity with AI models and neural networks using frameworks like Scikit-Learn, TensorFlow, or PyTorch is essential. Preferred qualifications include experience with NLP using transformers, BERT, GPT, or OpenAI APIs, AI model deployment, database querying, and participation in AI-related competitions or projects. Soft skills such as analytical thinking, teamwork, eagerness to learn, and excellent English communication skills are highly valued. Candidates who excel in problem-solving, possess a willingness to adapt and experiment, and prefer a dynamic environment for AI exploration are encouraged to apply. 
A face-to-face interview will be conducted, and applicants should be able to attend the interview at our office location. Join the Amplework Software team to collaborate with passionate individuals, work on cutting-edge projects, make a real impact, enjoy competitive benefits, and thrive in a great working environment.
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Data Scientist at UST, you will independently develop data-driven solutions to complex business challenges, applying analytical, statistical, and programming skills to collect, analyze, and interpret large datasets under supervision. You will work closely with stakeholders across the organization to identify opportunities for leveraging customer data to build models that generate valuable business insights.

Responsibilities include:
• Creating new experimental frameworks, building automated tools for data collection, correlating similar datasets, and deriving actionable results.
• Developing predictive models and machine learning algorithms to analyze large volumes of information and uncover trends and patterns.
• Mining and analyzing data from company databases to drive optimization and improvement in product development, marketing techniques, and business strategies.
• Creating processes and tools to monitor and analyze model performance and data accuracy, and developing data visualizations that address business problems effectively.
• Using predictive modeling to enhance and optimize customer experiences and other business outcomes.
• Collaborating with cross-functional teams to implement models and monitor outcomes, and setting and providing feedback on FAST goals for reportees.
• Applying statistical techniques (regression, properties of distributions, statistical tests) and machine learning techniques (clustering, decision tree learning, artificial neural networks) to analyze data.
• Creating advanced algorithms and statistics through regression, simulation, scenario analysis, and modeling.
For data visualization, you will present data to stakeholders using tools like Periscope, Business Objects, D3, and ggplot. You will oversee the activities of analyst personnel, ensuring efficient execution of their duties, while mining the business's databases for critical insights and communicating findings to relevant departments. You will also write efficient, reusable code for data improvement, manipulation, and analysis; manage the project codebase through version-control tools like Git and Bitbucket; and create reports depicting trends and behaviors from analyzed data. Additional duties include training end users on new reports and dashboards, documenting your work, conducting peer reviews, managing knowledge, and reporting task status.

The ideal candidate will have excellent pattern-recognition and predictive-modeling skills, a strong background in data mining and statistical analysis, expertise in machine learning techniques, and experience creating advanced algorithms. Analytical, communication, critical-thinking, mathematical, and interpersonal skills, along with attention to detail, are crucial for success in this role. Proficiency is required in programming languages such as Java, Python, and R; AWS services like Redshift and S3; statistical and data mining techniques; data visualization software; spreadsheet tools; DBMSs; operating systems; and project management tools. A strong understanding of statistical concepts, SQL, machine learning (regression and classification), deep learning (ANN, RNN, CNN), advanced NLP, computer vision, Gen AI/LLMs, and AWS SageMaker/Azure ML/Google Vertex AI, plus basic implementation experience with Docker, Kubernetes, Kubeflow, MLOps, and Python (NumPy, pandas, scikit-learn, Streamlit, Matplotlib, Seaborn), is essential.
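As a minimal illustration of the techniques this role names (regression, clustering, and decision tree learning), the sketch below uses scikit-learn on synthetic data; the dataset and hyperparameters are illustrative only, not taken from the posting.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Regression: recover a known linear trend (slope 3, intercept 1) from noisy data.
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=200)
reg = LinearRegression().fit(X, y)

# Clustering: two well-separated Gaussian blobs should yield two clusters.
blobs = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(blobs)

# Decision tree: learn a simple threshold rule (x > 5).
labels = (X[:, 0] > 5).astype(int)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)

print(round(float(reg.coef_[0]), 1))  # slope close to 3.0
print(len(set(km.labels_)))           # 2 clusters found
```

In practice each of these would be wrapped in the evaluation and monitoring processes the posting describes rather than run as a one-off script.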
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a Python Developer, you will build supervised (GLM, ensemble techniques) and unsupervised (clustering) models using standard industry libraries such as pandas, scikit-learn, and Keras. Expertise in big data technologies like Spark and Dask, and in SQL and NoSQL databases, is essential. You should have significant Python experience, with a focus on writing unit tests, creating packages, and developing reusable, maintainable code. The ability to understand and articulate modeling techniques, and to visualize analytical results with tools like Matplotlib, Seaborn, Plotly, D3, and Tableau, is crucial. Experience with continuous integration and delivery tools like Jenkins, as well as Spark ML pipelines, is advantageous. We are looking for a self-motivated individual who collaborates effectively with colleagues and contributes innovative ideas to our projects.

Preferred qualifications include an advanced degree with a strong foundation in the mathematics behind machine learning, including linear algebra and multivariate calculus. Experience in specialist areas such as reinforcement learning, NLP, Bayesian techniques, or generative models is a plus. You should excel at presenting ideas and analytical findings in a compelling way that influences decision-making. Demonstrated experience implementing analytical solutions in an industry context, and a genuine enthusiasm for ethically leveraging data science to enhance customer-centricity in financial services, are highly desirable.
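A hedged sketch of the supervised (ensemble) and unsupervised (clustering) workflow the role describes, using pandas and scikit-learn; the data, features, and model settings here are synthetic stand-ins, not the team's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({"x1": rng.normal(size=500), "x2": rng.normal(size=500)})
df["target"] = 2 * df["x1"] - df["x2"] + rng.normal(0, 0.1, 500)

# Supervised: ensemble regressor with a held-out test split.
X_train, X_test, y_train, y_test = train_test_split(
    df[["x1", "x2"]], df["target"], random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # R^2 on unseen data

# Unsupervised: segment the same feature space into 3 clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    df[["x1", "x2"]])
```

The held-out score and fixed random seeds are exactly the kind of checks a unit-test suite for such models would assert on.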
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
You must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL; working knowledge of Azure DevOps and Git flow is an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with timeseries data is essential, and experience delivering data engineering/data science projects in Industry 4.0 is an added advantage. Knowledge of Palantir is required.

• Strong problem-solving skills with a focus on sustainable and reusable development.
• Proficiency in statistical computing languages and libraries: Python/PySpark, pandas, NumPy, and Seaborn/Matplotlib; knowledge of Streamlit is a plus.
• Familiarity with Scala, Go, and Java, and with big data tools such as Hadoop, Spark, and Kafka, is beneficial.
• Experience with relational databases (Microsoft SQL Server, MySQL, PostgreSQL, Oracle) and NoSQL databases (Hadoop, Cassandra, MongoDB).
• Proficiency with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow.
• Experience building and optimizing big data pipelines, architectures, and data sets, with strong analytical skills for working with unstructured datasets.
• Ability to provide innovative solutions to data engineering problems, document technology choices and integration patterns, apply best practices for clean-code project delivery, and show innovation and proactiveness in meeting project requirements.

Reporting to: Director, Intelligent Insights and Data Strategy
Travel: Must be willing to be deployed at client locations worldwide for long and short terms, and flexible for shorter durations within India and abroad.
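Since the posting singles out timeseries expertise, here is a minimal sketch of a common timeseries transformation (downsampling plus a rolling mean) with pandas; the sensor data and column names are invented for illustration.

```python
import numpy as np
import pandas as pd

# Two days of hourly readings with a steadily increasing value.
idx = pd.date_range("2024-01-01", periods=48, freq="h")
sensor = pd.DataFrame({"value": np.arange(48, dtype=float)}, index=idx)

# Downsample hourly readings to daily means.
daily = sensor["value"].resample("D").mean()

# Smooth with a 2-day rolling window (min_periods=1 keeps the first day).
smoothed = daily.rolling(window=2, min_periods=1).mean()

print(daily.iloc[0])  # mean of hours 0..23 -> 11.5
print(daily.iloc[1])  # mean of hours 24..47 -> 35.5
```

In a production pipeline the same resample/rolling pattern would typically run inside PySpark or Databricks jobs rather than on an in-memory DataFrame.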
Posted 1 week ago