Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are seeking a highly skilled and innovative Data Scientist to join our team and drive data-centric initiatives by leveraging AI/ML models, Big Data technologies, and cloud platforms like AWS. The ideal candidate will be proficient in Python, experienced in designing end-to-end machine learning pipelines, and comfortable working with large-scale data systems. Key Responsibilities: Design, develop, and deploy machine learning models and AI-based solutions for business problems. Build robust ETL pipelines to process structured and unstructured data using tools like PySpark, Airflow, or Glue. Work with AWS cloud services (e.g., S3, Lambda, SageMaker, Redshift, EMR) to build scalable data science solutions. Perform exploratory data analysis (EDA) and statistical modeling to uncover actionable insights. Collaborate with data engineers, product managers, and stakeholders to identify use cases and deliver impactful data-driven solutions. Optimize model performance and ensure model explainability, fairness, and reproducibility. Maintain and improve existing data science solutions through MLOps practices (e.g., model monitoring, retraining, CI/CD for ML). Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Statistics, Data Science, or related field. 3+ years of experience in data science or machine learning roles. Strong programming skills in Python and experience with libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
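For illustration only, a minimal sketch of the kind of end-to-end scikit-learn training pipeline such a role typically builds; the dataset, column names, and model choice are hypothetical and not part of this posting:

```python
# Minimal sketch: an end-to-end scikit-learn training pipeline.
# All data and column names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical dataset with one numeric and one categorical feature
df = pd.DataFrame({
    "amount": np.random.rand(500) * 100,
    "region": np.random.choice(["north", "south", "east"], size=500),
    "churned": np.random.randint(0, 2, size=500),
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["amount", "region"]], df["churned"], test_size=0.2, random_state=42
)

# Preprocess numeric and categorical columns, then fit a classifier
pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), ["amount"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ])),
    ("model", RandomForestClassifier(n_estimators=100, random_state=42)),
])
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
```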
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position: AI Intern Location: On-site – Noida Duration: 6 months Stipend: INR 15000 per month Open positions: 2 About the Role: We’re looking for an enthusiastic and curious AI Intern to join our team and support real-world applications of artificial intelligence and machine learning. You’ll work on practical tasks like model training, data preprocessing, and automation — all under the guidance of experienced engineers. This is a hands-on learning opportunity designed for someone who wants to explore AI beyond textbooks and build something meaningful. Key Responsibilities: Assist in data collection, cleaning, labeling, and preparation for model training Support basic model development or improvement (e.g., regression, classification, recommendation) Help integrate ML models with applications through simple APIs or scripts Run tests and evaluations to track model performance and suggest improvements Document workflows, results, and observations clearly Stay updated with relevant AI/ML trends and tools Ideal Candidate Should Have: Currently pursuing/completed a degree in Computer Science, Engineering, Mathematics, or a related field Basic understanding of Python and libraries like Pandas, NumPy, Scikit-learn, or TensorFlow/PyTorch Familiarity with concepts like data preprocessing, supervised learning, or NLP Logical thinking, attention to detail, and a willingness to learn Good communication skills and ability to work in a team What You’ll Gain: Real-world experience with production-level AI workflows Mentorship from experienced developers and data scientists Exposure to tools like Git, Jupyter, cloud platforms (if applicable) Opportunity to contribute to live projects or POCs Internship certificate and possible full-time opportunity based on performance To Apply: Apply on the job post listed on LinkedIn.
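As a hedged illustration of the "train a basic model, then reuse it from a simple script" workflow mentioned above (the dataset and file name are made up):

```python
# Minimal sketch: train a simple classifier, evaluate it, and persist it so
# another script or a small API can load and reuse it. Data is synthetic.
import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the model so a separate script can load it later
joblib.dump(model, "intern_demo_model.joblib")
reloaded = joblib.load("intern_demo_model.joblib")
print("sample prediction:", reloaded.predict(X_test[:1]))
```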
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Customer Success at Innovaccer Our mission is to turn our customers into tech-savvy superheroes, ensuring they achieve success using our platform to meet their organization's business goals. If you're passionate about helping customers realize the value they seek with technology, then our customer success team is the right place for you. A Day in the Life Implement, develop, automate, and unit-test business processes between various data repositories and systems. Work with the Delivery team to help them integrate the Python code into their workflows and automate the entire data journey. Implement Python libraries to automate the data ingestion lifecycle and improve code reusability Troubleshoot data issues and perform root cause analysis to resolve operational issues proactively Establish best practices around software development in the development team. Developing programs to consume externally hosted open APIs Analyze and improve the performance, scalability, stability, and security of the code. Improve engineering standards, tooling, and processes Participate in the full SDLC process using Agile methodology, including discovery, inception, story and task creation, breakdown and estimation, iterative planning, development and unit testing, and release/deployment. Support production environments with any bugs or execution failures Work with business leaders and customers to understand their pain points and build large-scale solutions for them Managing and coordinating with multiple teams across Innovaccer to deliver solutions What You Need Hands-on experience in Python (OOPs development), SQL (JOINs, Aggregation, Filtration, Subquery, Grouping, Window Functions, Common Table Expressions) and Linux. Working knowledge of Python libraries like Pandas, NumPy, Selenium, ElasticSearch, psycopg2, Snowflake, Boto3, requests, urllib3, sqlalchemy and pymongo Experience in scaling Python codes for multiple integration touch points and consuming open APIs using Python Experience in RDBMS & NoSQL databases Experience with Git Good experience in current transformation technologies such as XML, JSON, CSV and SQL Strong knowledge of agile methodologies Basic Analytics Tools (Power BI or Tableau) knowledge An ambitious person who can work in a flexible startup environment with only one thing in mind - getting things done Excellent written and verbal communication skills Here's What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment.
If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
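A minimal sketch, for illustration only, of consuming an externally hosted open API with Python and flattening the JSON response for ingestion; the endpoint URL below is a placeholder, not an Innovaccer API:

```python
# Minimal sketch: call a JSON API and flatten the response into a DataFrame
# for downstream ingestion. The URL is a placeholder; swap in a real endpoint.
import pandas as pd
import requests

def fetch_records(url: str, timeout: int = 30) -> pd.DataFrame:
    """Call a JSON API and return the payload as a flat DataFrame."""
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()          # fail fast on HTTP errors
    payload = response.json()
    return pd.json_normalize(payload)    # flatten nested JSON into columns

if __name__ == "__main__":
    df = fetch_records("https://example.com/api/v1/records")  # placeholder URL
    print(df.head())
```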
Posted 5 days ago
0 years
0 Lacs
India
On-site
Job Title "Applied Data Scientist at Poiro, based out of Bengaluru - On-site Role" Company Details The best way to predict the future is to invent it. And the best way to invent the future is to get the best minds to work on an idea whose time has come. Poiro (poiro.ai) builds AI systems and agents to supercharge marketing workflows and bring brands closer to consumers. Poiro’s AI systems can be trained on both structured and unstructured marketing data for a brand - from social media content and e-commerce marketplace data to 1st-party customer data - to build a comprehensive knowledge representation of a brand and its category. On top of that, Poiro’s AI agents seamlessly perform data analytics & data science workflows to generate actionable insights and guide marketing execution. Leading brands are using Poiro to - ● Identify content whitespaces, through analysis of social content across their category, and generate highly engaging organic & ad content ● Get tailored creator recommendations that maximize ROI for a particular product and content brief ● Comprehensively audit a creator's on & off-platform behavior to safeguard themselves from commercial and reputation risks And for many more use cases! Poiro is a subsidiary of Evam Labs (www.evamlabs.ai) - a Singapore headquartered holding company with offices in Bangalore and San Jose, building the next generation of high impact AI powered Enterprise Solutions. From Asia, for the World. Evam Labs was founded by ex-founders, academics and investors with over a decade of experience in building data & AI products and scaling companies from 0 to IPO. The founders are IIT/IIM/CMU alumni and have cumulatively raised over $500M+, invested in 30+ startups and hold 20+ patents. The rest of the Evam team comprises alumni of IITs, IIMs, CMU, IISc, NUS with rich experience across multiple industries. Job Roles & Responsibilities - Develop and implement data-driven models using Python, TensorFlow, Large Language Models (LLM), and Scikit-learn to enhance content monetization for creators. - Collaborate with cross-functional teams to analyze and interpret large datasets using NumPy, Pandas, and PySpark. - Design and optimize machine learning algorithms and solutions to improve user engagement and revenue potential. - Explore and integrate AI technologies to support and automate creator monetization pathways. - Monitor model performance and iteratively refine based on business and technical feedback. - Stay updated with the latest advancements in AI and data science to apply innovative solutions at Poiro. Cultural Expectations - Collaborate openly with team members to enhance AI-driven creator content tools - Embrace innovation, continuously exploring and adapting to cutting-edge AI technologies - Respect diverse ideas, fostering a creative and inclusive workplace - Display a proactive mindset in problem-solving and process improvements - Communicate clearly and effectively, contributing to a positive team dynamic - Uphold accountability, meeting project deadlines with precision and reliability Hiring Process Profile Shortlisting Theory/problem-solving and coding/hands-on Resume/experience-based discussion Cultural fit with founders
Posted 5 days ago
0 years
0 Lacs
India
Remote
Job Title: Data Analyst Trainee Location: Remote Job Type: Internship (Full-Time) Duration: 1–3 Months Stipend: ₹25,000/month Department: Data & Analytics Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization. Key Responsibilities: Collect, clean, and analyze large datasets from various sources Perform exploratory data analysis (EDA) and generate actionable insights Build interactive dashboards and reports using Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and manipulation Collaborate with cross-functional teams to understand data needs Document analytical methodologies, insights, and recommendations Qualifications: Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field Proficiency in Excel and SQL Working knowledge of Python (Pandas, NumPy, Matplotlib) or R Understanding of basic statistics and analytical methods Strong attention to detail and problem-solving ability Ability to work independently and communicate effectively in a remote setting Preferred Skills (Nice to Have): Experience with BI tools like Power BI, Tableau, or Google Data Studio Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift) Knowledge of data storytelling and KPI measurement Previous academic or personal projects in analytics What We Offer: Monthly stipend of ₹25,000 Fully remote internship Mentorship from experienced data analysts and domain experts Hands-on experience with real business data and live projects Certificate of Completion Opportunity for a full-time role based on performance
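For illustration, a minimal exploratory-data-analysis sketch of the kind described above, assuming a hypothetical sales_data.csv file with an order_value column:

```python
# Minimal EDA sketch: load a dataset, summarise it, and plot one distribution.
# The CSV file name and the order_value column are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("sales_data.csv")            # hypothetical input file

print(df.info())                              # column types and null counts
print(df.describe(include="all"))             # summary statistics
print(df.isna().mean().sort_values(ascending=False))  # share of missing values

# Distribution of a numeric column (assumed to exist in the file)
df["order_value"].hist(bins=30)
plt.xlabel("order_value")
plt.ylabel("count")
plt.title("Order value distribution")
plt.tight_layout()
plt.savefig("order_value_hist.png")
```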
Posted 5 days ago
0.0 - 1.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Call: 7972240453 (Mon to Sat / 11 am - 6 pm) Company Name: Greamio Technologies Private Limited Job Title: Data Science Trainer Location: Indore, Madhya Pradesh Salary: ₹28,000 - ₹32,000 Employment Type: Full-Time Job Description: We are looking for a highly motivated and skilled Data Science Trainer to join our team. The ideal candidate will have a passion for teaching and a deep understanding of data science concepts, tools, and methodologies. The trainer will be responsible for delivering high-quality training to students, helping them build a solid foundation in data science, and guiding them through practical projects. Key Responsibilities: Deliver engaging and interactive training sessions on various data science topics, including statistics, machine learning, data visualization, and programming (Python). Design and develop course materials, assignments, quizzes, and hands-on projects. Mentor and guide students through practical exercises and real-world applications. Provide timely feedback and support to students on their progress and performance. Stay up-to-date with the latest trends and advancements in data science and incorporate them into the curriculum. Conduct assessments and evaluations to measure the effectiveness of the training program. Adapt teaching methods and materials to meet the diverse needs of learners in both online and classroom settings. Requirements: Bachelor's or Master's degree in Computer Science, Statistics, or a related field. Proven experience as a Data Science Trainer or similar role. Proficiency in programming languages such as Python, C, and SQL. Strong knowledge of data science tools and platforms (e.g., Jupyter, TensorFlow, Pandas, NumPy, Scikit-learn). Excellent communication and presentation skills. Ability to explain complex concepts in a simple and clear manner. Experience in mentoring or teaching is preferred. Certifications in data science or related fields are a plus. Preferred Skills: Hands-on experience with data analytics, machine learning models, and big data tools. Familiarity with cloud platforms (AWS, Google Cloud, Azure) for data science projects. Ability to design project-based learning experiences for students. How to Apply: Interested candidates can share their CVs and portfolios at hr@greamio.com with the subject line - "Application for Data Science Trainer (Indore) - Your Name". Job Types: Full-time, Permanent, Fresher Pay: ₹28,000.00 - ₹32,000.00 per month Schedule: Day shift Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required) Experience: Data science: 1 year (Required) Teaching: 1 year (Required) Language: Hindi (Required) English (Required) Work Location: In person
Posted 5 days ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
About The Role We are setting up our AI department at go4WorldBusiness to build next-generation, AI-powered products that transform how global businesses connect. We’re looking for exceptionally smart, fast learners with strong programming skills and excellent communication to join our team as AI Engineers. You don’t need to be an AI expert (yet!) — we’ll teach you that. What we need is someone with curiosity, clarity of thought, and an eagerness to work on the cutting edge of technology. If you enjoy problem-solving, writing clean code, and can communicate your ideas clearly, you’ll fit right in. Position : AI Engineer Experience : 1–3 years of hands-on experience in software development (Python preferred) What You’ll Do Most Of The Time Work closely with our AI team to build and ship internal tools and user-facing features powered by large language models and other AI technologies. Experiment with prompt engineering and refine interactions with AI models to drive business outcomes. Collaborate with cross-functional teams to define AI product features and deliver solutions that solve real-world problems. Learn and apply the latest AI and machine learning techniques to build scalable and impactful systems. Continuously prototype, test, and improve solutions based on business feedback. Communicate progress, blockers, and insights proactively to peers and stakeholders. Stay curious and up-to-date with developments in AI, programming, and product development. What You’ll Need To Qualify Strong communication and critical thinking skills — you're comfortable articulating technical and non-technical ideas clearly. Proficiency in one modern programming language (preferably Python). Ability to write clean, maintainable code and quickly understand new codebases. A proven track record of learning fast and solving hard problems — whether through academic projects, side gigs, or work experience. Bachelor's degree in Computer Science, Engineering, or a related field. Preferred Skills You don’t need to know all of these — but be excited to learn what you don’t: Experience using Python libraries for data processing (e.g., Pandas, NumPy) Familiarity with OpenAI, LangChain, or similar LLM-based tools Exposure to web development frameworks (Flask, FastAPI, etc.) Understanding of REST APIs and how to integrate them Interest in data science, ML pipelines, or vector databases Prior work or projects involving chatbots, recommendation systems, or automation Other Must-Have Skills Excellent communication and time management Attention to detail and ability to troubleshoot creatively Clear thinking and structured problem-solving Drive to meet deadlines and deliver high-quality work What You’ll Get Opportunity to be part of an AI team working on game-changing tools and products Hands-on mentorship and growth into advanced AI concepts and tools 5-day work week Annual loyalty bonus Six-monthly profit sharing bonus PF and Gratuity About The Company go4WorldBusiness is revolutionizing the international B2B import-export industry. If you’re excited about working at the intersection of AI and global trade, and want to build products used by millions, this is your chance. We offer industry-best salaries and perks, including annual loyalty and six-monthly profit sharing bonuses. If you’re smart, driven, and looking to work on AI that matters — apply now!
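For illustration only, a minimal prompt-engineering sketch using the OpenAI Python client (one of the LLM tools named above). The model name, prompt, and use case are hypothetical; it assumes the openai package (v1+) is installed and an OPENAI_API_KEY is set in the environment:

```python
# Minimal sketch: send a structured prompt to an LLM and print the reply.
# The model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You classify B2B trade inquiries by product category."},
        {"role": "user", "content": "Looking for 500 tons of basmati rice, FOB Mumbai."},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```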
Posted 5 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Work Experience: 3+ years Salary: 21 LPA Location: Bengaluru Title: MLOps Engineer Team Charter: The team in India comes with a multi-disciplinary skillset, including but not limited to the following areas: Develop models and algorithms using Deep Learning and Computer Vision on the captured data to provide meaningful analysis to our customers. Some of the projects include – object detection, OCR, barcode scanning, stereovision, SLAM, 3D-reconstruction, action recognition etc. Develop integrated embedded systems for our drones – including embedded system platform development, camera and sensor integration, flight controller and motor control system development, etc. Architect and develop full stack software to interface between our solution and customer database and access – including database development, API development, UI/UX, storage, security and processing for data acquired by the drone. Integration and testing of various off the shelf sensors and other modules with drone and related software. Design algorithms related to autonomy and flight controls. Responsibilities: As a Machine Learning Ops (MLOps) engineer, you will be responsible for building and maintaining the next generation of Vimaan’s ML Platform and Infrastructure. MLOps will have a major contribution in making CV & ML offerings scalable across the company products. We are building all these data & model pipelines to scale Vimaan operations and the MLOps Engineer will play a key role in enabling that. You will lead initiatives geared towards making the Computer Vision Engineers at Vimaan more productive. You will set up the infrastructure that powers the ML teams, thus simplifying the development and deployment cycles of ML models. You will help establish best practices for the ML pipeline and partner with other infrastructure ops teams to help champion them across the company. Build and maintain data pipelines - data ingestion, filtering, generating pre-populated annotations, etc. Build and maintain model pipelines - model monitoring, automated triggering of model (re)training, auto-deployment of models to production servers and edge devices. Own the cloud stack which comprises all ML resources. Establish standards and practices around MLOps, including governance, compliance, and data security. Collaborate on managing ML infrastructure costs. Qualifications: Deep quantitative/programming background with degree (Bachelors, Masters or Ph.D.) in a highly analytical discipline, like Statistics, Electrical, Electronics, Computer Science, Mathematics, Operations Research, etc. A minimum of 3 years of experience in managing machine learning projects end-to-end focused on MLOps. Experience with building RESTful APIs for monitoring build & production systems using automated monitoring of models and corresponding alarm tools. Experience with data versioning tools such as Data Version Control (DVC). Build and maintain data pipelines by using tools like Dagster, Airflow etc. Experience with containerizing and deploying ML models. Hands-on experience with autoML tools, experiment tracking, model management, version tracking & model training (MLflow, W&B, Neptune etc.), model hyperparameter optimization, model evaluation, and visualization (Tensorboard). Sound knowledge and experience with at least one DL framework such as PyTorch, TensorFlow, Keras. Experience with container technologies (Docker, Kubernetes etc). Experience with cloud services. Working knowledge of SQL-based databases.
Hands-on experience with the Python scientific computing stack such as numpy, scipy, scikit-learn. Familiarity with Linux and git. Detail-oriented design, code debugging and problem-solving skills. Effective communication skills: discussing with peers and driving logic-driven conclusions. Ability to clearly communicate complex technical/architectural problems and propose solutions for the same. How to stand out Prior experience in deploying ML & DL solutions as services Experience with multiple cloud services. Ability to collaborate effectively across functions in a fast-paced environment. Experience with technical documentation and presentation for effective dissemination of work. Engineering experience in distributed systems and data infrastructure.
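A minimal, illustrative sketch of experiment tracking with MLflow, one of the tools named in this posting; the model, parameters, and synthetic data are hypothetical:

```python
# Minimal sketch: log parameters, a metric, and a model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

params = {"n_estimators": 150, "learning_rate": 0.05}

with mlflow.start_run(run_name="demo-gbm"):
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    f1 = f1_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # hyperparameters for the run
    mlflow.log_metric("f1", f1)               # evaluation metric
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```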
Posted 5 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Roles & Responsibilities About the Role The Data Science department plays a pivotal role in our company, generating value to clients by developing algorithms and analytical production-grade solutions. We leverage advanced techniques and algorithms to provide maximum value from data in all shapes and sizes (such as classification models, NLP, anomaly detection, graph theory, deep learning, and more). As a Data Scientist, you will assume the classic data-science role of an end-to-end project development and implementation practitioner. Being part of the team requires a mix of hard quantitative and analytical skills, a solid background in statistical modeling and machine learning, a technical data-savvy nature, along with a passion for problem-solving and a desire to drive data-driven decision-making. What You'll Be Doing Data Exploration and Preprocessing: Collect, clean, and transform large, complex data sets from various sources to ensure data quality and integrity for analysis Statistical Analysis and Modeling: Apply statistical methods and mathematical models to identify patterns, trends, and relationships in data sets, and develop predictive models Machine Learning: Develop and implement machine learning algorithms, such as classification, regression, clustering, and deep learning, to solve business problems and improve processes Feature Engineering: Extract relevant features from structured and unstructured data sources, and design and engineer new features to enhance model performance Model Development and Evaluation: Build, train, and optimize machine learning models using state-of-the-art techniques, and evaluate model performance using appropriate metrics Data Visualization: Present complex analysis results in a clear and concise manner using data visualization techniques, and communicate insights to stakeholders effectively Collaborative Problem-Solving: Collaborate with cross-functional teams, including product managers, data engineers, software developers, and business stakeholders to identify data-driven solutions and implement them in production environments Research and Innovation: Stay up to date with the latest advancements in data science, machine learning, and related fields, and proactively explore new approaches to enhance the company's analytical capabilities Qualifications B.Sc (M.Sc is a plus) in Computer Science, Mathematics, Statistics, or a related field 3+ years of proven experience designing and implementing machine learning algorithms and successfully deploying them to production. Strong understanding and practical experience with various machine learning algorithms.
Proficiency in Python and experience with SQL and data manipulation tools (e.g., Pandas, NumPy) to extract, clean, and transform data for analysis Solid foundation in statistical concepts and techniques, including hypothesis testing, regression analysis, time series analysis, and experimental design Strong analytical and critical thinking skills to approach business problems, formulate hypotheses, and translate them into actionable solutions Proficiency in data visualization libraries, to create meaningful visual representations of complex data Excellent written and verbal communication skills to present complex findings and technical concepts to both technical and non-technical stakeholders Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment. Advantages: Experience in the fraud domain Experience with Airflow, CircleCI, PySpark, Docker and K8S Senior Data Scientist Qualifications B.Sc (M.Sc is a plus) in Computer Science, Mathematics, Statistics, or a related field 5+ years of proven experience designing and implementing machine learning algorithms and successfully deploying them to production. Strong understanding and practical experience with various machine learning algorithms. Proficiency in Python and experience with SQL and data manipulation tools (e.g., Pandas, NumPy) to extract, clean, and transform data for analysis Solid foundation in statistical concepts and techniques, including hypothesis testing, regression analysis, time series analysis, and experimental design Strong analytical and critical thinking skills to approach business problems, formulate hypotheses, and translate them into actionable solutions Proficiency in data visualization libraries, to create meaningful visual representations of complex data Excellent written and verbal communication skills to present complex findings and technical concepts to both technical and non-technical stakeholders Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment. Advantages: Experience in the fraud domain Experience with Airflow, CircleCI, PySpark, Docker and K8S Data Scientist- Autonomous About Us Empowers businesses to unleash ecommerce growth by taking risk off the table. Many of the world’s biggest brands and publicly traded companies selling online rely on us for guaranteed protection against chargebacks, to fight fraud and policy abuse at scale, and to improve customer retention. Developed and managed by the largest team of ecommerce risk analysts, data scientists and researchers, AI-powered fraud and risk intelligence platform analyzes the individual behind each interaction to provide real-time decisions and robust identity-based insights. We are proud to work with incredible companies in virtually all industries including Wayfair, Acer, Gucci, Lorna Jane, GoPro, and many more. We thrive in a collaborative work setting, alongside great people, to build and enhance products that matter. Abundant opportunities to create and contribute provide us with a sense of purpose that extends beyond ourselves, leaving a lasting impact. These sentiments capture why we choose every day. About The Role The Research and Data Science department plays a pivotal role in our company, generating value to clients by developing algorithms and analytical production-grade solutions.
We leverage advanced techniques and algorithms to provide maximum value from data in all shapes and sizes (such as classification models, NLP, anomaly detection, graph theory, deep learning, and more). As a Data Scientist, you will assume the classic data-science role of an end-to-end project development and implementation practitioner. Being part of the team requires a mix of hard quantitative and analytical skills, a solid background in statistical modeling and machine learning, a technical data-savvy nature, along with a passion for problem-solving and a desire to drive data-driven decision-making. What You'll Be Doing Data Exploration and Preprocessing: Collect, clean, and transform large, complex data sets from various sources to ensure data quality and integrity for analysis Statistical Analysis and Modeling: Apply statistical methods and mathematical models to identify patterns, trends, and relationships in data sets, and develop predictive models Machine Learning: Develop and implement machine learning algorithms, such as classification, regression, clustering, and deep learning, to solve business problems and improve processes Feature Engineering: Extract relevant features from structured and unstructured data sources, and design and engineer new features to enhance model performance Model Development and Evaluation: Build, train, and optimize machine learning models using state-of-the-art techniques, and evaluate model performance using appropriate metrics Data Visualization: Present complex analysis results in a clear and concise manner using data visualization techniques, and communicate insights to stakeholders effectively Collaborative Problem-Solving: Collaborate with cross-functional teams, including product managers, data engineers, software developers, and business stakeholders to identify data-driven solutions and implement them in production environments Research and Innovation: Stay up to date with the latest advancements in data science, machine learning, and related fields, and proactively explore new approaches to enhance the company's analytical capabilities Qualifications B.Sc (M.Sc is a plus) in Statistics, Computer Science, Mathematics, or a related field 3+ years of proven experience designing and implementing machine learning algorithms and techniques in production-grade environments Strong understanding and practical experience with various machine learning algorithms, such as linear regression, logistic regression, decision trees, similarity search, neural networks, and deep learning Proficiency in programming languages such as Python or R for data manipulation, statistical analysis, and machine learning model development Experience with SQL and data manipulation tools (e.g., Pandas, NumPy) to extract, clean, and transform data for analysis Solid foundation in statistical concepts and techniques, including hypothesis testing, regression analysis, time series analysis, and experimental design Strong analytical and critical thinking skills to approach business problems, formulate hypotheses, and translate them into actionable solutions Proficient in data visualization libraries (e.g., Matplotlib, Seaborn, ggplot) to create meaningful visual representations of complex data Excellent written and verbal communication skills to present complex findings and technical concepts to both technical and non-technical stakeholders Demonstrated ability to work effectively in cross-functional teams, collaborate with colleagues, and contribute to a positive work environment. Advantages:
Experience in the fraud domain Experience with Airflow, CircleCI, PySpark, Docker and K8S Experience 6-8 Years Skills Primary Skill: Data Science Sub Skill(s): Data Science Additional Skill(s): AI/ML Development, Data Science About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
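To make the statistical-foundations requirement above concrete, a minimal two-sample hypothesis-testing sketch (purely illustrative, with synthetic data):

```python
# Minimal sketch: Welch's two-sample t-test on synthetic control/treatment data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=400)    # e.g. baseline metric
treatment = rng.normal(loc=103.0, scale=15.0, size=400)  # e.g. after a change

# Does the treatment group differ from control on average?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis at the 5% level.")
else:
    print("No significant difference detected at the 5% level.")
```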
Posted 5 days ago
0 years
0 Lacs
India
Remote
🧠 Data Science Intern (Remote) 🌐 🔍 Do you love exploring data, building models, and uncovering insights with the power of machine learning and statistics? Ready to begin your data science journey — all from anywhere in the world? This role is for you! 📍 Location: Remote / Virtual 💼 Job Type: Internship (Unpaid) 🕒 Schedule: Flexible working hours 🌟 About the Role: We’re looking for a curious and driven Data Science Intern to join our remote team! This internship is perfect for students or recent graduates who want hands-on experience with real-world datasets, algorithms, and predictive analytics. You’ll work on data cleaning, exploration, modeling, and visualization to help solve real problems and deliver valuable insights — all while collaborating with a friendly, remote-first team. 🚀 What You’ll Gain: ✅ 100% Remote – Work from anywhere 🌍 ✅ Flexible Schedule – Learn and contribute on your time ⏰ ✅ Real-World Experience – Apply data science to real projects 📊 ✅ Skill Building – Sharpen your Python, ML, and data analysis toolkit 🛠️ ✅ Mentorship – Learn from experienced data professionals 👥 👀 Ideal Candidate: 🎓 Currently studying or recently graduated in Data Science, Computer Science, Statistics, or a related field 🧠 Solid understanding of statistics, data analysis, and machine learning concepts 🛠️ Comfortable with Python, Pandas, NumPy, and tools like Jupyter Notebook; knowledge of scikit-learn or TensorFlow is a plus 📈 Passionate about solving problems with data and eager to learn 💬 Self-motivated and able to work independently in a remote setting 📅 Apply By: June 15th Excited to take your first step into the data science world? We’d love to hear from you! Let’s build models and make discoveries through data — together! 📊🧠💡
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Total Experience: 4 to 8 years Location: Bangalore/Mumbai Notice Period: Up to 30 days About the Role: We are on the lookout for a sharp, self-driven Data Analyst with a strong command of SQL, Python, and relational databases. If solving complex data problems, building efficient data pipelines, and collaborating across teams excites you - you’ll thrive in this role. What You Will Do: • Craft complex SQL queries and stored procedures for analytics and reporting • Build robust Python scripts for automation, file handling, data cleanup, and transformation using Pandas/NumPy • Design and manage scalable relational database schemas • Apply basic data modeling (star/snowflake) to support reporting needs • Optimize performance of queries and scripts on large datasets • Collaborate with cross-functional teams to translate business needs into data solutions • Document processes for transparency and reuse Must-Have Skills: • Advanced SQL: joins, window functions, stored procedures, tuning • Python scripting: file I/O, string manipulation, exceptions, data wrangling • Solid grasp of relational DB design and performance optimization • Basic understanding of data modeling concepts • Sharp problem-solving and analytical thinking • Strong communicator with ability to engage stakeholders Good to have: • Exposure to cloud platforms (Azure, GCP, AWS, Snowflake) • Exposure to Big data platforms • Familiarity with Git and CI/CD pipelines • Knowledge of ETL/ELT workflows
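As an illustration of the advanced-SQL plus Python-scripting combination described above, a minimal sketch using an in-memory SQLite database; the table and values are made up, and it assumes a SQLite build with window-function support (standard with Python 3.7+):

```python
# Minimal sketch: run a window-function query from Python against SQLite.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, order_id INTEGER, amount REAL);
    INSERT INTO orders VALUES
        ('north', 1, 120.0), ('north', 2, 80.0), ('north', 3, 200.0),
        ('south', 4, 60.0),  ('south', 5, 90.0);
""")

# Rank orders by amount within each region using a window function
query = """
    SELECT region,
           order_id,
           amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
    FROM orders
"""
print(pd.read_sql_query(query, conn))
conn.close()
```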
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Python Developer Experience: 1 – 3 Years Location: Gachibowli, Hyderabad (Work From Office) Shift Timings: Regular Shift: 10:00 AM – 6:00 PM We are hiring a Python Developer to develop and maintain risk analytics tools and automate reporting processes to support commodity risk management. Key Responsibilities: Develop, test, and maintain Python scripts for data analysis and reporting Write scalable, clean code using Pandas, NumPy, Matplotlib, and OOP principles Collaborate with risk analysts to implement process improvements Document workflows and maintain SOPs in Confluence Optimize code performance and adapt to evolving business needs Requirements: Strong hands-on experience with Python, Pandas, NumPy, Matplotlib, and OOP Good understanding of data structures and algorithms Experience with Excel and VBA is an added advantage Exposure to financial/market risk environments is preferred Excellent problem-solving, communication, and documentation skills
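A minimal, purely illustrative sketch of an object-oriented reporting helper built on Pandas, NumPy, and Matplotlib, in the spirit of the responsibilities above; the desks, dates, and P&L figures are synthetic:

```python
# Minimal sketch: a small OOP reporting class over synthetic P&L data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

class PositionReport:
    """Summarise daily P&L for a book of positions and plot the result."""

    def __init__(self, pnl: pd.DataFrame):
        self.pnl = pnl

    def summary(self) -> pd.DataFrame:
        return self.pnl.groupby("desk")["pnl"].agg(["mean", "std", "sum"])

    def plot_cumulative(self, path: str) -> None:
        self.pnl.groupby("date")["pnl"].sum().cumsum().plot(title="Cumulative P&L")
        plt.tight_layout()
        plt.savefig(path)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "date": pd.date_range("2024-01-01", periods=60).repeat(2),
        "desk": ["metals", "energy"] * 60,
        "pnl": rng.normal(0, 10_000, size=120),
    })
    report = PositionReport(data)
    print(report.summary())
    report.plot_cumulative("cumulative_pnl.png")
```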
Posted 6 days ago
0.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
IT Full-Time Job ID: DGC00714 Chennai, Tamil Nadu 3-5 Yrs ₹3.5 - ₹6 Yearly Job description We are hiring for a full-time position - a Python developer for designing, developing, and maintaining software applications using Python. We seek immediate joiners who can work in person from our Chennai office. You will work with the Global engineering team and with the latest Data Science tools and build next-generation products Key Responsibilities: 1. Design, develop, and maintain software applications using Python. 2. Collaborate with cross-functional teams to define, design, and ship new features. 3. Participate in code and design reviews to maintain code quality standards. 4. Troubleshoot and debug issues in existing applications. Qualifications: 1. Bachelor's degree in Computer Science or Proven experience as a Python Developer. 2. Strong knowledge of Python and any of the libraries such as NumPy, Pandas, and Django. 3. Experience in PySpark. 4. Familiarity with web development frameworks such as Flask or Django. 5. Experience with cloud-based platforms such as AWS or Azure
Posted 6 days ago
5.0 - 6.0 years
0 Lacs
Puducherry, Puducherry
On-site
Title: Python Developer Experience: 5-6 Years Location: Puducherry (On-site) Job Summary: We’re hiring a skilled Python Developer to design, build, and optimize scalable applications. You’ll collaborate with cross-functional teams to deliver high-performance solutions while adhering to best practices in coding, testing, and deployment. Key Responsibilities: ✔ Backend Development: Design and implement robust APIs using Django/Flask/FastAPI. Integrate with databases (PostgreSQL, MySQL, MongoDB). Optimize applications for speed and scalability. ✔ Cloud & DevOps: Deploy apps on AWS/Azure/GCP (Lambda, EC2, S3). Use Docker/Kubernetes for containerization. Implement CI/CD pipelines (GitHub Actions/Jenkins). ✔ Data & Automation: Develop ETL pipelines with Pandas, NumPy, Apache Airflow. Automate tasks using Python scripts. ✔ Collaboration: Work with frontend teams (React/JS) for full-stack integration. Mentor junior developers and conduct code reviews. Skills Required Must-Have: 5+ years of Python development (OOP, async programming). Frameworks: Django/Flask/FastAPI. Databases: SQL/NoSQL (e.g., PostgreSQL, MongoDB). APIs: RESTful/gRPC, authentication (OAuth, JWT). DevOps: Docker, Git, CI/CD, AWS/Azure basics. Good-to-Have: Frontend basics: HTML/CSS, JavaScript. Perks & Benefits: Competitive salary How to Apply: Send your resume and GitHub/portfolio links to hr@cloudbeestech.com with the subject: "Python Developer Application". Job Type: Full-time Pay: ₹40,000.00 - ₹50,000.00 per month Location Type: In-person Schedule: Day shift Work Location: In person Application Deadline: 15/06/2025
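For illustration only, a minimal FastAPI sketch of the kind of API work described above; the service name, routes, and data model are hypothetical (run locally with uvicorn):

```python
# Minimal sketch: a tiny FastAPI service with a health check and one endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-service")

class Item(BaseModel):
    name: str
    quantity: int

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/items")
def create_item(item: Item) -> dict:
    # A real service would persist this to PostgreSQL/MongoDB instead
    return {"received": item.model_dump()}  # use item.dict() on pydantic v1

# Run locally with:  uvicorn main:app --reload
```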
Posted 6 days ago
0 years
0 Lacs
India
Remote
CryptoChakra is a leading cryptocurrency analytics and education platform committed to demystifying digital asset markets for traders, investors, and enthusiasts worldwide. By integrating cutting-edge AI-driven predictions, blockchain analytics, and immersive learning modules, we empower users to navigate market volatility with confidence. Our platform combines advanced tools like Python, TensorFlow, and AWS to deliver actionable insights, risk assessments, and educational content that bridge the gap between complex data and strategic decision-making. As a remote-first innovator, we champion accessibility in decentralized finance, fostering a future where crypto literacy is universal. Position: Fresher Data Scientist Intern Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability Role Summary Join CryptoChakra’s data science team to gain hands-on experience in transforming raw blockchain data into impactful insights. This role is tailored for recent graduates or students eager to apply foundational skills in machine learning, statistical analysis, and data storytelling to real-world crypto challenges. Key Responsibilities Data Processing: Clean and preprocess blockchain datasets from sources like Etherscan or CoinGecko using Python/R. Predictive Modeling: Assist in building and testing ML models for price forecasting or DeFi trend analysis. Insight Generation: Create visualizations (Tableau, Matplotlib) to simplify complex trends for educational content. Collaboration: Work with engineers and educators to refine analytics tools and tutorials. Documentation: Maintain clear records of methodologies and findings for team reviews. Who We’re Looking For Technical Skills Foundational knowledge of Python/R for data manipulation (Pandas, NumPy). Basic understanding of statistics (regression, hypothesis testing). Familiarity with data visualization tools (Tableau, Power BI) or libraries (Seaborn). Curiosity about blockchain technology, DeFi, or crypto markets. Soft Skills Eagerness to learn and adapt in a fast-paced remote environment. Strong problem-solving mindset and attention to detail. Ability to communicate technical concepts clearly. Preferred (Not Required) Academic projects involving data analysis or machine learning. Exposure to SQL, AWS, or big data tools. Pursuing a degree in Data Science, Computer Science, Statistics, or related fields. What We Offer Mentorship: Guidance from experienced data scientists and blockchain experts. Skill Development: Training in real-world tools like TensorFlow and Tableau. Portfolio Projects: Contribute to live projects featured on CryptoChakra’s platform. Flexibility: Remote work with adaptable hours for students.
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
Profile: Machine Learning Engineer Experience: 2 to 6 Years Requirements: Python, pandas, NumPy, MySQL, Data Visualization, Matplotlib, Seaborn, Data Cleaning Deep Learning: ANN, CNN, DNN, Back Propagation, TensorFlow 2.x, Keras Web scraping: various libraries Natural Language Processing: Understanding, representation, classification & clustering; NLTK, BOW, TFIDF, word2vec Machine Learning: Supervised, Unsupervised (all algorithms), etc. Location: Noida Sector 63 (Work From Office) Working Days: 5 Job Description The Machine Learning Lead will oversee the full lifecycle of machine learning projects, from concept to production deployment. The role requires strong technical expertise and leadership to guide teams in delivering impactful AI-driven solutions. Key Responsibilities Design and implement scalable ML models for various business applications. Manage end-to-end ML pipelines, including data preprocessing, model training, and deployment. Fine-tune foundation models and create small language models for deployment on AWS Inferentia and Trainium. Deploy ML models into production environments using platforms such as AWS. Lead the implementation of custom model development, ML pipelines, fine-tuning, and performance monitoring. Collaborate with cross-functional teams to identify and prioritize AI/ML use cases. Required Skills Expertise in ML frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong experience deploying ML models on cloud platforms, especially AWS. In-depth knowledge of SageMaker and SageMaker Pipelines. Familiarity with RAG-based architectures, agentic AI solutions, Inferentia, and Trainium. Advanced programming skills in Python with experience in APIs and microservices. Exceptional problem-solving abilities and a passion for innovation. (ref:hirist.tech)
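As a small illustration of the TF-IDF based text-classification workflow implied by the NLP requirements above (the toy corpus and labels are made up):

```python
# Minimal sketch: TF-IDF features feeding a logistic-regression text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "delivery was fast and the product works great",
    "terrible quality, arrived broken and late",
    "excellent support, very happy with the purchase",
    "waste of money, would not recommend",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the product is great, fast delivery"]))   # likely [1]
print(clf.predict(["broken on arrival, terrible"]))           # likely [0]
```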
Posted 6 days ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a passionate and curious AI/ML Engineer (Fresher) to join our growing engineering team. This is a unique opportunity to work on real-world machine learning applications and contribute to building cutting-edge AI solutions. Your Responsibilities Assist in designing, developing, and training machine learning models using structured and unstructured data Collect, clean, and preprocess large datasets for model building Perform exploratory data analysis and statistical modeling Collaborate with senior data scientists and engineers to build scalable AI systems Run experiments, tune hyperparameters, and evaluate model performance using industry-standard metrics Document models, processes, and experiment results clearly and consistently Support in integrating AI/ML models into production environments Stay updated with the latest trends and techniques in machine learning, deep learning, and AI Participate in code reviews, sprint planning, and product discussions Follow best practices in software development, version control, and model reproducibility Skill Sets / Experience We Require Strong understanding of machine learning fundamentals (regression, classification, clustering, etc.) Hands-on experience with Python and ML libraries such as scikit-learn, pandas, NumPy Basic familiarity with deep learning frameworks like TensorFlow, PyTorch, or Keras Knowledge of data preprocessing, feature engineering, and model validation techniques Understanding of probability, statistics, and linear algebra Familiarity with tools like Jupyter, Git, and cloud-based notebooks Problem-solving mindset and eagerness to learn Good communication skills and the ability to work in a team Internship/project experience in AI/ML is a plus Education B.Tech / M.Tech / M.Sc in Computer Science, Data Science, Artificial Intelligence, or related field Relevant certifications in AI/ML (Coursera, edX, etc.) are a plus About Us TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design thinking and bespoke development solutions. With over 15 years of proven expertise, we have partnered with Fortune 500 companies and leading global brands to drive digital innovation and deliver excellence. At TechAhead, we are committed to continuous learning, growth and crafting tailored solutions that meet the unique needs of our clients. Join us to shape the future of digital innovation worldwide and drive impactful results with cutting-edge AI tools and strategies!
Posted 6 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Scientist In this role, you’ll drive and embed the design and implementation of data science tools and methods, which harness our data to drive market-leading purpose customer solutions Day-to-day, you’ll act as a subject matter expert and articulate advanced data and analytics opportunities, bringing them to life through data visualisation If you’re ready for a new challenge, and are interested in identifying opportunities to support external customers by using your data science expertise, this could be the role for you We're offering this role at vice president level What you’ll do We’re looking for someone to understand the requirements and needs of our business stakeholders. You’ll develop good relationships with them, form hypotheses, and identify suitable data and analytics solutions to meet their needs and to achieve our business strategy. You’ll be maintaining and developing external curiosity around new and emerging trends within data science, keeping up to date with emerging trends and tooling and sharing updates within and outside of the team. You’ll Also Be Responsible For Proactively bringing together statistical, mathematical, machine-learning and software engineering skills to consider multiple solutions, techniques, and algorithms Implementing ethically sound models end-to-end and applying software engineering and a product development lens to complex business problems Working with and leading both direct reports and wider teams in an Agile way within multi-disciplinary data to achieve agreed project and Scrum outcomes Using your data translation skills to work closely with business stakeholders to define business questions, problems or opportunities that can be supported through advanced analytics Selecting, building, training, and testing complex machine models, considering model valuation, model risk, governance, and ethics throughout to implement and scale models The skills you’ll need To be successful in this role, you’ll need evidence of project implementation and work experience gained in a data-analysis-related field as part of a multi-disciplinary team. We’ll also expect you to hold an undergraduate or a master’s degree in Data science, Statistics, Computer science, or related field. You’ll also need 10 years of experience with statistical software, database languages, big data technologies, cloud environments and machine learning on large data sets. And we’ll look to you to bring the ability to demonstrate leadership, self-direction and a willingness to both teach others and learn new techniques. Additionally, You’ll Need Experience of deploying machine learning models into a production environment Proficiency in Python and relevant libraries such as Pandas, NumPy, Scikit-learn, coupled with experience in data visualisation tools. Extensive work experience with AWS SageMaker, including expertise in statistical data analysis, machine learning models, LLMs, and data management principles Effective verbal and written communication skills, the ability to adapt communication style to a specific audience and mentoring junior team members
Posted 6 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join our digital revolution in NatWest Digital X. In everything we do, we work to one aim. To make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Data Scientist In this role, you’ll drive and embed the design and implementation of data science tools and methods, which harness our data to drive market-leading purpose customer solutions Day-to-day, you’ll act as a subject matter expert and articulate advanced data and analytics opportunities, bringing them to life through data visualisation If you’re ready for a new challenge, and are interested in identifying opportunities to support external customers by using your data science expertise, this could be the role for you We're offering this role at vice president level What you’ll do We’re looking for someone to understand the requirements and needs of our business stakeholders. You’ll develop good relationships with them, form hypotheses, and identify suitable data and analytics solutions to meet their needs and to achieve our business strategy. You’ll be maintaining and developing external curiosity around new and emerging trends within data science, keeping up to date with emerging trends and tooling and sharing updates within and outside of the team. You’ll also be responsible for: Proactively bringing together statistical, mathematical, machine-learning and software engineering skills to consider multiple solutions, techniques, and algorithms Implementing ethically sound models end-to-end and applying software engineering and a product development lens to complex business problems Working with and leading both direct reports and wider teams in an Agile way within multi-disciplinary data to achieve agreed project and Scrum outcomes Using your data translation skills to work closely with business stakeholders to define business questions, problems or opportunities that can be supported through advanced analytics Selecting, building, training, and testing complex machine models, considering model valuation, model risk, governance, and ethics throughout to implement and scale models The skills you’ll need To be successful in this role, you’ll need evidence of project implementation and work experience gained in a data-analysis-related field as part of a multi-disciplinary team. We’ll also expect you to hold an undergraduate or a master’s degree in Data science, Statistics, Computer science, or related field. You’ll also need 10 years of experience with statistical software, database languages, big data technologies, cloud environments and machine learning on large data sets. And we’ll look to you to bring the ability to demonstrate leadership, self-direction and a willingness to both teach others and learn new techniques. Additionally, you’ll need: Experience of deploying machine learning models into a production environment Proficiency in Python and relevant libraries such as Pandas, NumPy, Scikit-learn, coupled with experience in data visualisation tools.
Extensive work experience with AWS SageMaker, including expertise in statistical data analysis, machine learning models, LLMs, and data management principles Effective verbal and written communication skills, the ability to adapt communication style to a specific audience and mentoring junior team members
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars. Your future role Take on a new challenge and apply your data science expertise in a cutting-edge field. You’ll work alongside innovative and collaborative teammates. You'll play a key role in shaping Alstom’s next-generation data-driven solutions for the mobility industry. Day-to-day, you’ll work closely with teams across the business (engineering experts, data engineers, DevOps/MLOps engineers, and HMI designers), develop machine learning models, and contribute to the development of proof-of-concept applications in maintenance, operations, energy, and city flow domains. You’ll specifically take care of extracting key insights from mobility data using advanced tools and techniques, but also contribute to building AI/ML software modules to enhance customer performance and experience. We’ll look to you for: Designing and developing data-driven solutions, integrating engineering aspects and evaluating financial impacts Implementing feature selection/extraction methods to improve ML models and derive interpretable insights Building and optimizing machine learning models, including supervised and unsupervised approaches Assessing and enhancing the quality of incoming data and model outputs using appropriate evaluation metrics Collaborating with domain experts to define and meet business requirements Creating reports and explanatory materials to communicate insights effectively Evaluating opportunities from emerging mathematical approaches and data science technologies Applying strong testing and quality assurance practices All About You We value passion and attitude over experience. That’s why we don’t expect you to have every single skill. Instead, we’ve listed some that we think will help you succeed and grow in this role: Degree in computer science or engineering, supplemented by training in data science/ML or related disciplines Excellent knowledge of Python programming Understanding of modern statistics, including time-series and signal processing, text mining, or image processing Experience with Python data science stack (e.g., pandas, scikit-learn, keras, numpy, tensorflow) Strong knowledge of artificial intelligence and machine learning algorithms (e.g., classification, regression, dimensionality reduction, neural networks, deep learning) Experience in building and optimizing supervised and unsupervised ML models Experience with database management (SQL, ElasticSearch, etc.) Proficiency in a LINUX environment (shell scripting) [Optional: Experience in PHM, predictive maintenance, traffic modelling, NLP, video analytics, or sequence mining] [Optional: Familiarity with Git and release management] Things you’ll enjoy Join us on a life-long transformative journey – the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career. 
You'll also:
Enjoy stability, challenges and a long-term career free from boring daily routines
Work with cutting-edge technologies to revolutionize digital mobility
Collaborate with cross-functional teams and supportive colleagues
Contribute to innovative projects in the mobility domain
Utilise our flexible and inclusive working environment
Steer your career in whatever direction you choose across functions and countries
Benefit from our investment in your development, through award-winning learning programs
Progress towards leadership or technical expert roles
Benefit from a fair and dynamic reward package that recognizes your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension)

You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!

Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
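The posting above asks for experience with the Python data science stack, feature selection, and supervised model building. As a minimal, hedged sketch of that combination (a built-in scikit-learn dataset stands in for real mobility data, and the particular selection method and model are illustrative choices), a pipeline might look like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load a small built-in dataset as a stand-in for real operational data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features, keep the 10 most informative ones, then fit a supervised model.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif, k=10)),
    ("model", RandomForestClassifier(n_estimators=200, random_state=42)),
])
pipeline.fit(X_train, y_train)
print(f"Held-out accuracy: {pipeline.score(X_test, y_test):.3f}")

# Which features survived selection (an interpretable view of the model inputs)?
selected = X.columns[pipeline.named_steps["select"].get_support()]
print("Selected features:", list(selected))
```

Wrapping scaling, selection, and the estimator in one Pipeline keeps the evaluation honest: every step is fit only on the training split.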
Posted 6 days ago
2.0 - 3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Python Developer

Role Overview:
We are seeking a motivated Python Developer to join our dynamic team. The ideal candidate will have 2-3 years of experience in developing scalable applications and be proficient in Python. You will work on creating, enhancing, and maintaining innovative software solutions while collaborating with cross-functional teams to deliver exceptional results.

Requirements

Key Responsibilities:
• Design and develop scalable backend services and RESTful APIs using Flask, FastAPI, or Django (DRF) (a minimal sketch follows this posting).
• Collaborate with product managers and other developers to define software requirements and mentor junior developers.
• Develop robust integrations with SQL (PostgreSQL preferred) and NoSQL databases like MongoDB.
• Optimize performance and scalability of applications handling large volumes of data.
• Write clean, modular, and well-documented code following best practices.
• Contribute to architectural decisions and peer code reviews.
• Apply best practices in containerization (Docker) and CI/CD pipelines (preferred but not mandatory).
• Contribute to projects in Generative AI (GenAI) or Data Engineering depending on team needs and your expertise.

Required Skills & Qualifications:
• 2–3 years of professional experience in Python backend development.
• Strong understanding of Flask; experience with FastAPI, Django, or DRF is a plus.
• Solid experience working with PostgreSQL and MongoDB.
• Familiarity with REST API development and JSON-based communication.
• Experience in building and optimizing scalable web services or microservices.
• Experience with Redis and Celery.
• Hands-on experience with version control systems such as Git.
• Exposure to GenAI (e.g., Llama, OpenAI, HuggingFace, LangChain, agentic AI) or Data Engineering (e.g., ETL pipelines, pandas, numpy) is a strong advantage.

Good to Have:
• Experience working with Docker and cloud platforms (AWS/GCP/Azure).
• Familiarity with tools like PySpark.
• Familiarity with ML model serving.

Benefits

Why Join Us?
Work on cutting-edge technologies and impactful projects.
Opportunities for career growth and development.
Collaborative and inclusive work environment.
Competitive salary and benefits package.
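As a hedged illustration of the RESTful backend work this role describes (not a prescribed implementation; the resource name and in-memory store are invented for the example), a minimal FastAPI service might look like:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders API")

class Order(BaseModel):
    item: str
    quantity: int

# In-memory store standing in for PostgreSQL or MongoDB in this sketch.
ORDERS: dict = {}

@app.post("/orders/{order_id}", status_code=201)
def create_order(order_id: int, order: Order) -> Order:
    """Create an order, rejecting duplicates with a 409."""
    if order_id in ORDERS:
        raise HTTPException(status_code=409, detail="Order already exists")
    ORDERS[order_id] = order
    return order

@app.get("/orders/{order_id}")
def read_order(order_id: int) -> Order:
    """Return an order or a 404 if it does not exist."""
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="Order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn main:app --reload
```

The same route structure carries over to Flask or Django REST Framework; the pydantic model is what gives FastAPI its automatic validation and JSON serialization.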
Posted 6 days ago
3.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns.

At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration.

Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you.

About The Role
We at Coursera are seeking a highly skilled and motivated AI Specialist with expertise in developing and deploying advanced AI solutions. The ideal candidate will have 3+ years of experience, with a strong focus on leveraging AI technologies to derive insights, build predictive models, and enhance platform capabilities. This role offers a unique opportunity to contribute to cutting-edge projects that transform the online learning experience.

Key Responsibilities
Deploy and customize AI/ML solutions using tools and platforms from Google AI, AWS, or other providers.
Develop and optimize customer journey analytics to identify actionable insights and improve user experience.
Design, implement, and optimize models for predictive analytics, information extraction, semantic parsing, and topic modelling.
Perform comprehensive data cleaning and preprocessing to ensure high-quality inputs for model training and deployment.
Build, maintain, and refine AI pipelines for data gathering, curation, model training, evaluation, and monitoring.
Analyze large-scale datasets, including customer reviews, to derive insights for improving recommendation systems and platform features.
Train and support team members in adopting and managing AI-driven tools and processes.
Document solutions, workflows, and troubleshooting processes to ensure knowledge continuity.
Stay informed on emerging AI/ML technologies to recommend suitable solutions for new use cases.
Evaluate and enhance the quality of video and audio content using AI-driven techniques.

Qualifications
Education: Bachelor's degree in Computer Science, Machine Learning, or a related field (required).
Experience: 3+ years of experience in AI/ML development, with a focus on predictive modelling and data-driven insights.
Proven experience in deploying AI solutions using platforms like Google AI, AWS, Microsoft Azure, or similar.
Proficiency in programming languages such as Python, Java, or similar for AI tool customization and deployment.
Strong understanding of APIs, cloud services, and integration of AI tools with existing systems.
Proficiency in building and scaling AI pipelines for data engineering, model training, and monitoring.
Experience with frameworks and libraries for building AI agents, such as LangChain and AutoGen.
Familiarity with designing autonomous workflows using LLMs and external APIs.

Technical Skills:
Programming: Advanced proficiency in Python, PyTorch, TensorFlow, and scikit-learn.
Data Engineering: Expertise in data cleaning, preprocessing, and handling large-scale datasets; preferred experience with tools like AWS Glue, PySpark, and AWS S3 (a minimal preprocessing sketch follows this posting).
Cloud Technologies: Experience with AWS SageMaker, Google AI, Google Vertex AI, and Databricks.
Strong SQL skills and advanced proficiency in statistical programming languages such as Python, along with experience using data manipulation libraries (e.g., Pandas, NumPy).

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class.

If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org.

For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
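To illustrate the data cleaning and preprocessing step this posting emphasizes, here is a minimal, hedged pandas sketch; the "customer review" records are invented sample data, not an actual Coursera dataset or pipeline.

```python
import numpy as np
import pandas as pd

# Invented sample of review records standing in for a real dataset.
raw = pd.DataFrame({
    "review_id": [1, 2, 2, 3, 4],
    "rating": [5, np.nan, np.nan, 3, 120],          # missing and out-of-range values
    "text": ["Great course!", " ok ", " ok ", None, "Too fast"],
})

def clean_reviews(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate, normalize text, and repair ratings before model training."""
    out = df.drop_duplicates(subset="review_id").copy()
    out["text"] = out["text"].fillna("").str.strip().str.lower()
    # Clip ratings to the valid 1-5 range and fill gaps with the median rating.
    out["rating"] = out["rating"].clip(lower=1, upper=5)
    out["rating"] = out["rating"].fillna(out["rating"].median())
    return out

print(clean_reviews(raw))
```

The same pattern (deduplicate, normalize, impute) scales up when the DataFrame is produced by PySpark or AWS Glue upstream.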
Posted 6 days ago
1.0 - 2.0 years
9 - 10 Lacs
Bengaluru
Work from Office
signdesk is looking for a Consultant - AI Developer to join our dynamic team and embark on a rewarding career journey.
Develops AI-powered applications and tools for business use cases
Builds machine learning models and optimizes their deployment
Works closely with clients to define AI strategy and solutions
Contributes to research, documentation, and training modules
Posted 6 days ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
As a Quantitative R&D Engineer, you'll join the team responsible for designing, validating, and scaling trading strategies that interact directly with crypto markets. You'll work at the intersection of statistics, code, and markets — exploring data, prototyping logic, and building tools that convert ideas into capital allocation engines.

🔍 What You'll Work On
Analyze on-chain and market data to identify inefficiencies and behavioural patterns.
Develop and prototype systematic trading strategies using statistical and ML-based techniques (a minimal backtesting sketch follows this posting).
Contribute to signal research, backtesting infrastructure, and strategy evaluation frameworks.
Monitor and interpret DeFi protocol mechanics (AMMs, perps, lending markets) for alpha generation.
Collaborate with engineers to turn research into production-grade, automated trading systems.
Explore MEV, cross-chain arbitrage, and predictive modelling for on-chain behaviour.

✅ Who You Are
You're curious, analytical, and eager to work where code meets markets. You enjoy designing systems that act autonomously and uncover edge in complex, noisy data. You're comfortable working in an experimental, high-ownership environment.

Ideal Traits
Proficiency in Python and experience working with data (Pandas, NumPy, etc.)
Understanding of probability, statistics, or ML concepts.
Strong interest in markets, trading, or algorithmic systems.
Self-driven and comfortable with ambiguity, iteration, and fast learning cycles.

📚 Bonus Points For
Experience with backtesting, feature engineering, or strategy evaluation.
Familiarity with crypto primitives (AMMs, perps, mempools, MEV, etc.)
Projects or research involving trading bots, alpha signals, or data modelling.
Use of on-chain analytics tools (Dune, The Graph, Nansen, etc.)
Participation in hackathons, quant competitions, or open-source work.

🎁 What You'll Gain
Work on live systems that deploy real capital in 24/7 crypto markets.
Access to proprietary infrastructure, backtesting tools, and strategy libraries.
Mentorship from experienced quants, traders, and DeFi engineers.
Freedom to explore, test, and scale your ideas into real-world performance.
A high-trust, fast-moving team focused on pushing boundaries.

Skills: Python, pandas, Blockchain, GenAI and Generative AI
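As a hedged, minimal illustration of the signal research and backtesting work described above (synthetic price data and a deliberately simple moving-average signal, not a strategy the team actually runs):

```python
import numpy as np
import pandas as pd

# Synthetic hourly price series standing in for real market data.
rng = np.random.default_rng(seed=7)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000))))

# Moving-average crossover signal: long when the fast mean is above the slow mean.
fast = prices.rolling(24).mean()
slow = prices.rolling(168).mean()
position = (fast > slow).astype(int).shift(1)  # act on the next bar to avoid look-ahead bias

# Evaluate the signal: strategy returns, cumulative equity, and a rough Sharpe-style ratio.
returns = prices.pct_change()
strategy_returns = (position * returns).fillna(0)
cumulative = (1 + strategy_returns).cumprod()
sharpe = strategy_returns.mean() / strategy_returns.std() * np.sqrt(24 * 365)

print(f"Final equity multiple: {cumulative.iloc[-1]:.2f}")
print(f"Annualized Sharpe (approx.): {sharpe:.2f}")
```

Even in a toy like this, the shift(1) step matters: evaluating a signal on the same bar that generated it is the most common source of inflated backtest results.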
Posted 6 days ago
Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.
The average salary range for numpy professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
Typically, a career in numpy progresses as follows:
- Junior Developer
- Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead
In addition to numpy, professionals in this field are often expected to have knowledge of:
- Pandas
- Scikit-learn
- Matplotlib
- Data visualization
Interview questions in this area typically include, for example, the np.where() function in numpy (medium) and the difference between np.array and np.matrix in numpy (advanced); both are illustrated in the sketch below.

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!
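As a quick, hedged illustration of the two interview topics mentioned above (np.where() and the np.array vs np.matrix distinction), the following snippet shows both in action with small example arrays:

```python
import numpy as np

# np.where(): vectorized conditional selection.
scores = np.array([35, 72, 58, 91, 44])
labels = np.where(scores >= 60, "pass", "fail")
print(labels)  # ['fail' 'pass' 'fail' 'pass' 'fail']

# np.array vs np.matrix: ndarray is N-dimensional and uses elementwise *,
# while np.matrix is strictly 2-D and overloads * as matrix multiplication.
a = np.array([[1, 2], [3, 4]])
m = np.matrix([[1, 2], [3, 4]])
print(a * a)   # elementwise product
print(m * m)   # matrix product
print(a @ a)   # matrix product on ndarray via @ (preferred; np.matrix is deprecated)
```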
Companies currently hiring on the platform include:
- Accenture: 36723 Jobs | Dublin
- Wipro: 11788 Jobs | Bengaluru
- EY: 8277 Jobs | London
- IBM: 6362 Jobs | Armonk
- Amazon: 6322 Jobs | Seattle, WA
- Oracle: 5543 Jobs | Redwood City
- Capgemini: 5131 Jobs | Paris, France
- Uplers: 4724 Jobs | Ahmedabad
- Infosys: 4329 Jobs | Bangalore, Karnataka
- Accenture in India: 4290 Jobs | Dublin 2