0.0 - 4.0 years
0 Lacs
karnataka
On-site
As a Deep Learning Engineer Intern, you will work on our core ML stack, focusing on areas such as Computer Vision, Recommendation Systems, and (optionally) LLM-based features. Working closely with our senior engineers, you will gain hands-on experience with real data, tackle edge-case challenges, and rapidly iterate on prototypes that power our food-robotics product. You will be an integral part of a focused two-person full-time ML team, serving as our first intern and reporting directly to the Lead. This small team size ensures significant mentorship and lets you make a direct impact on product development.

Your responsibilities will include:
- Working on tasks such as ingredient segmentation, recipe similarity, and vision-language models
- Packaging and deploying models for on-device inference and AWS services
- Supporting data collection and annotation, refining datasets based on model performance
- Iterating on experiments, tuning hyperparameters, testing augmentations, and documenting results

We are looking for candidates with:
- Strong fundamentals in deep learning (Transformers, CNNs) and classical ML (logistic/linear regression)
- Hands-on experience with PyTorch in academic projects, internships, or personal work
- Proficiency in Python and the broader ML ecosystem (NumPy, pandas)
- A solid understanding of training pipelines, evaluation metrics, and experimental rigor
- Eagerness to learn new domains such as NLP and recommendations

Nice-to-have skills include prior exposure to LLMs or multimodal models, experience with computer vision in challenging real-world environments, and projects involving noisy, real-world datasets or edge-case-heavy scenarios.
Working with us, you will enjoy:
- Early ownership of ML features, contributing end-to-end to projects
- Hands-on experience with real robotic systems in homes, seeing your models in action
- Direct access to senior ML engineers for mentorship in a small, focused team
- Rapid feedback loops that let you prototype, deploy, and learn quickly
- A clear growth path with strong potential for full-time conversion into our rapidly expanding ML team

We are looking for candidates who are currently enrolled in a B.Tech or M.Tech program (3-4 years into their degree), possess strong math and ML fundamentals beyond using off-the-shelf libraries, thrive in an ambiguous, fast-paced startup environment, and are always curious, enjoying building, experimenting, and sharing insights.

Location: Bangalore (4 days WFO, 1 day WFH)
Duration: 3-6 month internship
Compensation: 1,00,000/month

To apply, ensure your resume includes a project portfolio with the following:
- GitHub profile showcasing ML/deep learning projects
- Public codebase from coursework, internships, or personal work
- Kaggle notebooks or competition solutions with leaderboard standings
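The classical-ML fundamentals the posting asks for (logistic/linear regression) can be sketched from scratch in a few lines. The toy data and hyperparameters below are invented for illustration, not taken from the role:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, n_iters=2000):
    """Fit logistic regression by batch gradient descent on the log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)        # predicted probabilities
        grad_w = X.T @ (p - y) / n    # gradient of mean log-loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy, linearly separable data (hypothetical)
X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w, b = train_logistic_regression(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = float(np.mean(preds == y))
```

Being able to derive and implement the gradient step, rather than only calling a library, is exactly the "fundamentals beyond off-the-shelf libraries" the posting describes.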
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Python Backend Engineer specializing in AWS with a focus on GenAI and ML, you will design, develop, and maintain intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or Scikit-learn, and use LangChain for GenAI pipelines. Experience with LangGraph will be advantageous in this role.

Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will work with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, and managing CI/CD pipelines for backend and model deployments will be a key part of your responsibilities. Ensuring the performance, scalability, and security of applications in cloud environments will also fall under your purview.

To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or Scikit-learn is required, as is experience with LangChain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial, and an understanding of microservices architecture, CI/CD workflows, and agile development practices is desirable. Nice-to-have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms.
Additionally, familiarity with MLOps tools and practices for model monitoring, versioning, and retraining will be advantageous. This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, and the schedule involves morning day shifts from Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.
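The "GenAI pipeline" idea the posting mentions, composing a prompt step, a model call, and an output parser, can be sketched framework-free in plain Python. This mimics the chaining pattern that libraries like LangChain provide; the `Pipeline` class, stub model, and step names below are invented for illustration and are not LangChain's actual API:

```python
from typing import Callable, List

class Pipeline:
    """Compose steps left-to-right: each step's output feeds the next."""
    def __init__(self, steps: List[Callable]):
        self.steps = steps

    def run(self, value):
        for step in self.steps:
            value = step(value)
        return value

def build_prompt(question: str) -> str:
    return f"Answer concisely: {question}"

def stub_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g., an HTTP request to a hosted LLM)
    return f"LLM_RESPONSE[{prompt}]"

def parse_output(raw: str) -> str:
    return raw.removeprefix("LLM_RESPONSE[").removesuffix("]")

qa_chain = Pipeline([build_prompt, stub_llm, parse_output])
result = qa_chain.run("What is EC2?")
```

In production, the stub step would be replaced by a real model client, and the chain would typically sit behind a Django or FastAPI endpoint.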
Posted 1 day ago
9.0 - 13.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have 9+ years of experience and be located in Chennai. You must possess in-depth knowledge of Python and solid experience creating APIs using FastAPI. Exposure to data libraries such as pandas (DataFrames) and NumPy is essential, as is knowledge of Apache open-source components. Experience with Apache Spark, Lakehouse architecture, and open table formats is required. You should also have knowledge of automated unit testing, preferably using PyTest, and exposure to distributed computing. Experience working in a Linux environment is necessary; working knowledge of Kubernetes would be an added advantage, and basic exposure to ML and MLOps would also be advantageous.
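The automated unit testing the posting asks for (preferably PyTest) typically looks like plain assert-based test functions over a data transformation. A minimal sketch, with a hypothetical pandas helper invented for illustration:

```python
import pandas as pd

def add_total_column(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with a 'total' column = price * quantity."""
    out = df.copy()
    out["total"] = out["price"] * out["quantity"]
    return out

def test_add_total_column():
    df = pd.DataFrame({"price": [10.0, 2.5], "quantity": [3, 4]})
    result = add_total_column(df)
    assert result["total"].tolist() == [30.0, 10.0]
    # the input frame must be left untouched
    assert "total" not in df.columns

test_add_total_column()  # pytest would discover and run this automatically
```

PyTest picks up any `test_*` function and reports bare `assert` failures with detailed introspection, so no assertion helper classes are needed.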
Posted 1 day ago
7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Manager – Senior ML Engineer (Full Stack)

About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Summary
The Manager – Senior ML Engineer (Full Stack) will lead the development and integration of Generative AI (GenAI) technologies, write code modules, and manage full-stack development projects. The ideal candidate will have a strong background in Python and a proven track record in machine learning and full-stack development.

Required Skills
- Strong proficiency in Python programming.
- Experience with data analysis and visualization libraries like Pandas, NumPy, Matplotlib, and Seaborn.
- Proven experience in machine learning and AI development.
- Experience with Generative AI (GenAI) development and integration.
- Full-stack development experience, including front-end and back-end technologies.
- Proficiency in web development frameworks such as Django or Flask.
- Knowledge of machine learning frameworks such as TensorFlow, Keras, PyTorch, or Scikit-learn.
- Experience with RESTful APIs and web services integration.
- Familiarity with SQL and NoSQL databases, such as PostgreSQL, MySQL, MongoDB, or Redis.
- Experience with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of DevOps practices and tools like Docker, Kubernetes, Jenkins, and Git.
- Proficiency in writing unit tests and using debugging tools.
- Effective communication and interpersonal skills.
- Ability to work in a fast-paced, dynamic environment.
- Knowledge of software development best practices and methodologies.

Key Responsibilities
- Lead the development and integration of Generative AI (GenAI) technologies to enhance our product offerings.
- Write, review, and maintain code modules, ensuring high-quality and efficient code.
- Oversee full-stack development projects, ensuring seamless integration and optimal performance.
- Collaborate with cross-functional teams to define project requirements, scope, and deliverables.
- Manage and mentor a team of developers and engineers, providing guidance and support to achieve project goals.
- Stay updated with the latest industry trends and technologies to drive innovation within the team.
- Ensure compliance with best practices in software development, security, and data privacy.
- Troubleshoot and resolve technical issues in a timely manner.

Qualifications
- Bachelor's degree in Computer Science or an engineering discipline.
- Minimum of 7 years of experience in machine learning engineering or a similar role.
- Demonstrated experience in managing technology projects from inception to completion.
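The full-stack pattern this role describes, a web endpoint that validates a request and returns a model prediction, can be sketched framework-agnostically. The handler, field names, and stub scorer below are invented for illustration; in Django or Flask the same logic would sit inside a view function:

```python
def predict_handler(payload: dict) -> dict:
    """Validate a JSON-like request body and return a prediction response.
    The 'model' here is a stub linear scorer standing in for model.predict()."""
    features = payload.get("features")
    if not isinstance(features, list) or not all(
        isinstance(x, (int, float)) for x in features
    ):
        return {"status": 400, "error": "features must be a list of numbers"}
    score = sum(0.5 * x for x in features)  # stand-in for a trained model
    return {"status": 200, "prediction": score}

ok = predict_handler({"features": [1.0, 2.0, 3.0]})
bad = predict_handler({"features": "oops"})
```

Keeping validation and scoring in a plain function like this also makes the unit testing the posting asks for straightforward, since no web server is needed in the tests.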
Posted 1 day ago
0 years
0 Lacs
India
Remote
Data Analyst Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Internship Certificate | Letter of Recommendation | Full-Time Offer (Performance-Based) About INLIGHN TECH INLIGHN TECH is a leading edtech company that provides immersive, project-based virtual internships designed to help students and graduates gain hands-on experience in high-demand tech fields. The Data Analyst Internship is crafted to help you build the analytical and technical skills needed to work with real-world datasets and derive meaningful business insights. 🚀 Internship Overview As a Data Analyst Intern , you’ll work with diverse datasets to analyze trends, visualize results, and support strategic decisions. You’ll use modern tools and languages like Excel, SQL, and Python to clean, process, and present data. 🔧 Key Responsibilities Collect, clean, and organize data for analysis Perform exploratory data analysis (EDA) to uncover patterns and insights Develop interactive dashboards and reports using Power BI , Tableau , or Google Data Studio Use SQL for data querying and manipulation Apply basic statistical methods to support data-driven decisions Present findings clearly to technical and non-technical audiences Collaborate with peers and mentors during review sessions and feedback meetings ✅ Qualifications Pursuing or recently completed a degree in Data Analytics, Statistics, Computer Science, Business , or a related field Proficiency in MS Excel and working knowledge of SQL Familiarity with Python (Pandas, NumPy) or data visualization tools is a plus Strong attention to detail and problem-solving ability Passion for working with data and delivering actionable insights Good communication and collaboration skills 🎓 What You’ll Gain Hands-on experience with real-world data analysis projects A portfolio of dashboards and reports for your resume Internship Certificate upon successful completion Letter of Recommendation for high performers 
Opportunity for a Full-Time Role based on performance Practical exposure to industry-relevant tools and workflows
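The exploratory data analysis (EDA) step described above usually starts with a group-and-aggregate pass over the data. A minimal pandas sketch, using a hypothetical sales dataset invented for illustration:

```python
import pandas as pd

# Hypothetical sales records standing in for a real dataset
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [100, 80, 120, 60],
})

# A typical EDA step: total revenue per region, ranked highest first
summary = (
    df.groupby("region", as_index=False)["revenue"]
      .sum()
      .sort_values("revenue", ascending=False)
)
top_region = summary.iloc[0]["region"]
```

The same aggregation logic is what a Power BI or Tableau dashboard would visualize; doing it in code first makes the numbers reproducible.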
Posted 1 day ago
0 years
0 Lacs
India
Remote
Location : Remote (India-based preferred) Type : Internship (3–6 months) | Stipend: Yes Start Date : Immediate About HoGo Fresh HoGo Fresh is a mission-driven agri-tech company transforming the way food moves from farms to consumers. We leverage technology to create clean, transparent, and climate-resilient supply chains—empowering farmers while reducing food waste. Role Overview We are looking for a curious and analytical Data Analyst Intern who’s not only comfortable working with data—but also understands how data feeds into AI/ML models . This internship offers hands-on experience with real-world datasets from agriculture, supply chain, and customer behavior—all in a fast-paced, impact-driven environment. Responsibilities Collect, clean, and analyze structured and unstructured data from various sources (farm, logistics, consumer apps). Build visual dashboards and basic data pipelines using Excel, SQL, or Python tools (e.g., Pandas, NumPy). Assist the AI team in preparing datasets for training and evaluation of ML models (e.g., yield prediction, pricing, routing). Collaborate with tech, R&D, and operations teams to identify data-driven insights. Document findings and create presentations to support product or business decisions. You Should Have Pursuing or completed a degree in Data Science, Statistics, Computer Science, Agriculture Informatics, or a related field. Proficiency in Excel and at least one of: Python, R, or SQL. Understanding of data preparation for AI/ML models (e.g., feature selection, labeling, preprocessing). Strong analytical thinking, attention to detail, and ability to interpret trends and patterns. Bonus: Familiarity with visualization tools (Power BI, Tableau, Google Data Studio) or ML libraries (scikit-learn, TensorFlow). 
What You'll Gain
- Exposure to real agri-tech data challenges at scale
- Mentorship in AI data workflows, model prep, and business intelligence
- Opportunity to collaborate cross-functionally in a purpose-driven startup
- Potential for a Pre-Placement Offer (PPO) based on performance
- Comprehensive health and wellness benefits tailored to interns, such as free webinars and consultations with industry professionals
- Access to a vast network of agricultural and technological entrepreneurs for guidance on career planning
- Monthly learning workshops focusing on leadership, innovation, and applied technologies in agri-supply chains

Skills: data cleaning, data analysis, data visualization, python, datasets, analytical thinking, attention to detail, data, supply, agriculture, sql, excel, models
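The dataset preparation for AI/ML models mentioned above (preprocessing before training) commonly means scaling features using statistics computed on the training split only. A minimal NumPy sketch with synthetic data invented for illustration:

```python
import numpy as np

def standardize(train: np.ndarray, test: np.ndarray):
    """Scale features using mean/std from the training split only,
    so no information leaks from the test data into preprocessing."""
    mean = train.mean(axis=0)
    std = train.std(axis=0)
    return (train - mean) / std, (test - mean) / std

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # synthetic feature matrix
X_train, X_test = X[:80], X[80:]
X_train_s, X_test_s = standardize(X_train, X_test)
```

Fitting the scaler on the full dataset instead would leak test-set statistics into training, which is one of the subtle pitfalls this kind of internship teaches you to avoid.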
Posted 1 day ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architectures and data pipelines.

Position Summary
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
- Azure Data Lake
- Azure Synapse Analytics
- Azure Data Factory
- Azure Databricks
- Python (PySpark, NumPy, etc.)
- SQL
- ETL
- Data warehousing
- Azure DevOps
- Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming
- Experience in integration with business intelligence tools such as Power BI

Good to Have Skills
- Big Data technologies (e.g., Hadoop, Spark)
- Data security

General Skills
- Experience with Agile and DevOps methodologies and the software development lifecycle.
- Proactive and responsible for deliverables
- Escalates dependencies and risks
- Works with most DevOps tools with limited supervision
- Completes assigned tasks on time and reports status regularly
- Able to train new team members
- Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable
- Able to work with multicultural, global teams and collaborate virtually
- Able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
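The streaming pipelines this role mentions (Azure Event Hubs feeding Stream Analytics or Spark Streaming) center on windowed aggregation. A dependency-free sketch of a tumbling-window count in plain Python, with hypothetical click events invented for illustration, shows the core idea those services implement at scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp_seconds, key) events into fixed, non-overlapping
    windows and count occurrences per key -- the essence of a
    tumbling-window streaming aggregation."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical click events: (epoch seconds, user id)
events = [(0, "a"), (30, "a"), (59, "b"), (61, "a"), (130, "b")]
result = tumbling_window_counts(events)
```

In Spark Structured Streaming the same grouping would be expressed declaratively over an unbounded stream, with the engine handling late data and state management.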
Posted 1 day ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking highly skilled and motivated AI Engineers with strong Python experience who are familiar with prompt engineering and LLM integrations to join our Innovations Team. The team is responsible for exploring emerging technologies, building proof-of-concept (PoC) applications, and delivering cutting-edge AI/ML solutions that drive strategic transformation and operational efficiency.

About the Role
As a core member of the Innovations Team, you will work on AI-powered products, rapid prototyping, and intelligent automation initiatives across domains such as mortgage tech, document intelligence, and generative AI.

Responsibilities
- Design, develop, and deploy scalable AI/ML solutions and prototypes.
- Build data pipelines, clean datasets, and engineer features for training models.
- Apply deep learning, NLP, and classical ML techniques.
- Integrate AI models into backend services using Python (e.g., FastAPI, Flask).
- Collaborate with cross-functional teams (e.g., UI/UX, DevOps, product managers).
- Evaluate and experiment with emerging open-source models (e.g., LLaMA, Mistral, GPT).
- Stay current with advancements in AI/ML and suggest opportunities for innovation.

Qualifications
Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. Certifications in AI/ML or cloud platforms (Azure ML, TensorFlow Developer, etc.) are a plus.

Required Technical Skills
- Programming Languages: Python (strong proficiency); experience with NumPy, Pandas, Scikit-learn.
- AI/ML Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, OpenCV (nice to have).
- NLP & LLMs: Experience with language models, embeddings, fine-tuning, and vector search.
- Prompt Engineering: Experience designing and optimizing prompts for LLMs (e.g., GPT, Claude, LLaMA) for tasks such as summarization, Q&A, document extraction, and multi-agent orchestration.
- Backend Development: FastAPI or Flask for model deployment and REST APIs.
- Data Handling: Experience in data preprocessing, feature engineering, and handling large datasets.
- Version Control: Git and GitHub.
- Databases: SQL and NoSQL databases; vector DBs like FAISS, ChromaDB, or Qdrant preferred.

Nice to Have
- Experience with Docker, Kubernetes, or cloud environments (Azure, AWS).
- Familiarity with LangChain, LlamaIndex, or multi-agent frameworks (CrewAI, AutoGen).

Soft Skills
- Strong problem-solving and analytical thinking.
- Eagerness to experiment and explore new technologies.
- Excellent communication and teamwork skills.
- Ability to work independently in a fast-paced, dynamic environment.
- Innovation mindset with a focus on rapid prototyping and proof-of-concepts.

Experience Level: 3–7 years. Work from office only (Chennai location).
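The prompt engineering for document extraction that this role emphasizes is usually factored into reusable template builders rather than hard-coded strings. A minimal sketch; the template wording and field names are invented and not tied to any specific LLM provider:

```python
def build_extraction_prompt(document: str, fields: list[str]) -> str:
    """Assemble a structured document-extraction prompt: a role
    instruction, an explicit output schema, then the document itself."""
    field_list = "\n".join(f"- {f}" for f in fields)
    return (
        "You are a careful information-extraction assistant.\n"
        "Extract the following fields from the document, "
        "returning JSON with exactly these keys:\n"
        f"{field_list}\n\n"
        f"Document:\n{document}"
    )

prompt = build_extraction_prompt(
    "Loan #123 for $250,000 approved on 2024-01-05.",
    ["loan_id", "amount"],
)
```

Separating the template from the document makes it easy to A/B different instruction phrasings, which is the day-to-day iteration loop of prompt engineering.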
Posted 1 day ago
0 years
0 Lacs
India
Remote
Artificial Intelligence & Machine Learning Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Performance-Based) About INLIGHN TECH INLIGHN TECH is a future-ready edtech company providing hands-on, project-based virtual internships to help students and graduates build practical skills. Our AI & ML Internship is tailored for aspiring professionals looking to gain real-world experience in artificial intelligence and machine learning through coding, experimentation, and project development. 🚀 Internship Overview As an AI & Machine Learning Intern , you'll work on real datasets and implement ML algorithms to solve classification, regression, and clustering problems. You'll get the opportunity to apply deep learning models, understand the ML pipeline, and contribute to AI-powered projects under expert guidance. 🔧 Key Responsibilities Clean and preprocess datasets for machine learning Build and evaluate models using libraries like Scikit-learn , TensorFlow , or PyTorch Implement algorithms such as Linear Regression, Decision Trees, SVM, K-Means, and Neural Networks Work on real-world AI applications like image recognition, sentiment analysis, or recommendation systems Visualize model performance using Matplotlib , Seaborn , or Power BI Collaborate in a team environment using Git/GitHub and participate in code reviews Document model assumptions, evaluation results, and future improvements ✅ Qualifications Pursuing or recently completed a degree in Computer Science, AI, Data Science , or a related field Strong foundation in Python , statistics , and machine learning concepts Familiarity with Numpy, Pandas, Scikit-learn , and deep learning frameworks Understanding of the ML lifecycle , from data collection to deployment Good problem-solving skills and an eagerness to learn Bonus: experience with NLP, computer vision , or ML model deployment (Flask/Streamlit) 🎓 
What You’ll Gain Hands-on experience with real-time AI and ML projects A portfolio of ML models, visualizations, and predictive tools Internship Certificate upon successful completion Letter of Recommendation for top performers Opportunity for a Full-Time Offer based on your work A strong foundation to pursue roles in ML engineering, data science, and AI research
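Among the algorithms the internship lists, K-Means is small enough to implement end-to-end, which is a good test of understanding the ML lifecycle beyond library calls. A minimal Lloyd's-algorithm sketch in NumPy; the toy blobs and the deterministic initialization are invented for illustration:

```python
import numpy as np

def kmeans(X, k, n_iters=20):
    """Minimal Lloyd's-algorithm K-Means on a 2-D feature array.
    Deterministic, evenly spaced initial centers keep this toy reproducible;
    real implementations use random restarts or k-means++."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float).copy()
    for _ in range(n_iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated toy blobs
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
labels, centers = kmeans(X, k=2)
```

The two alternating steps (assign, then re-center) are the whole algorithm; everything else in production libraries is initialization strategy and efficiency.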
Posted 1 day ago
0 years
0 Lacs
India
Remote
Data Science Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Performance-Based) About INLIGHN TECH INLIGHN TECH is a future-driven edtech platform that provides real-world, project-based virtual internships to students and aspiring professionals. The Data Science Internship is designed to help you gain hands-on experience with real datasets, machine learning models, and data-driven solutions for real-world problems. 🚀 Internship Overview As a Data Science Intern , you'll work with structured and unstructured data, explore patterns and trends, and implement predictive models. This internship emphasizes practical exposure to data preprocessing, algorithm development, and result interpretation using tools widely used in the industry. 🔧 Key Responsibilities Collect, clean, and preprocess large datasets for analysis Apply machine learning algorithms to build predictive models Perform exploratory data analysis (EDA) using Python (Pandas, Matplotlib, Seaborn) Use tools like Jupyter Notebook , Scikit-learn , and TensorFlow for modeling Visualize data insights with Power BI, Tableau , or Matplotlib Interpret and communicate results in a meaningful, business-focused way Document workflows, challenges, and outcomes during the project ✅ Qualifications Pursuing or recently completed a degree in Data Science, Computer Science, Statistics , or a related field Proficient in Python , with experience in libraries like Pandas, NumPy, and Scikit-learn Understanding of statistics , data structures , and machine learning concepts Familiarity with SQL and data visualization tools Strong analytical mindset and attention to detail Ability to explain complex ideas in a simple, clear way 🎓 What You’ll Gain Hands-on experience in real-world data science projects A portfolio of machine learning and analytics projects Internship Certificate upon successful completion Letter of 
Recommendation for high performers Opportunity for a Full-Time Role based on performance Foundation to pursue further roles in data science, AI, and analytics
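The predictive-modeling work described above often starts with ordinary least squares, which NumPy solves directly. A short sketch on synthetic data invented for illustration (true relationship y = 2x + 1 plus small noise):

```python
import numpy as np

# Synthetic data: y = 2x + 1 with small Gaussian noise (hypothetical)
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Fit a linear model with ordinary least squares: design matrix [x, 1]
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
y_pred = slope * x + intercept
```

Recovering coefficients close to the known generating values is a sanity check worth building into any modeling notebook before moving to real, noisier data.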
Posted 1 day ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architectures and data pipelines.

Position Summary
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
- Azure Data Lake
- Azure Synapse Analytics
- Azure Data Factory
- Azure Databricks
- Python (PySpark, NumPy, etc.)
- SQL
- ETL
- Data warehousing
- Azure DevOps
- Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming
- Experience in integration with business intelligence tools such as Power BI

Good to Have Skills
- Big Data technologies (e.g., Hadoop, Spark)
- Data security

General Skills
- Experience with Agile and DevOps methodologies and the software development lifecycle.
- Proactive and responsible for deliverables
- Escalates dependencies and risks
- Works with most DevOps tools with limited supervision
- Completes assigned tasks on time and reports status regularly
- Able to train new team members
- Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable
- Able to work with multicultural, global teams and collaborate virtually
- Able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architectures and data pipelines.

Position Summary
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities

Must Have Skills
- Azure Data Lake
- Azure Synapse Analytics
- Azure Data Factory
- Azure Databricks
- Python (PySpark, NumPy, etc.)
- SQL
- ETL
- Data warehousing
- Azure DevOps
- Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming
- Experience in integration with business intelligence tools such as Power BI

Good to Have Skills
- Big Data technologies (e.g., Hadoop, Spark)
- Data security

General Skills
- Experience with Agile and DevOps methodologies and the software development lifecycle.
- Proactive and responsible for deliverables
- Escalates dependencies and risks
- Works with most DevOps tools with limited supervision
- Completes assigned tasks on time and reports status regularly
- Able to train new team members
- Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable
- Able to work with multicultural, global teams and collaborate virtually
- Able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who You'll Work With Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family. 
Your Impact
You will develop PlanetView, a Software-as-a-Service platform that helps the financial sector understand and manage climate-change risks and quantify carbon emissions. You will work alongside our physical- and transition-risk modelling teams, following agile processes, to bring analytical approaches and features to production. We work with financial data, terabytes of global climate data, and a wide range of environmental, social, and corporate governance (ESG) data, integrating them into our class-leading advanced economic models. In this role, you will be responsible for maintaining and scaling the capabilities of the core application, built on Django and Django REST Framework, and of several backend microservices built on FastAPI and Pydantic. You'll have the opportunity to significantly influence the design of our backend processes. You will also manage your day-to-day priorities, time, and commitments within your team while ensuring that technical standards and best practices are followed. Lastly, you will apply new knowledge and innovation to the existing codebase. At Planetrics, you will be at the forefront of new technologies, applying best practices to the development of the PlanetView solution. You will deliver real impact by identifying potential risks and capturing strategic opportunities of different climate-change policies and climate-related technologies worldwide. You will work in an environment that puts sustainability, diversity, and digital transformation at the heart of what we do.
Your Qualifications and Skills
Degree in Computer Science, Engineering, Mathematics, Quantitative Methods, or a related field. Proven track record of developing and maintaining production-level code. Strong proficiency in Python, with a focus on writing clean, efficient, and production-ready code. Deep expertise in building enterprise applications using Django, Django REST Framework, FastAPI, and Pydantic.
Extensive experience with relational databases, SQL, and the Django ORM. Working knowledge of Python data analytics and visualisation libraries such as Pandas, NumPy, Polars, and Plotly. Hands-on experience designing and implementing microservice architectures and distributed systems. Practical knowledge of AWS services such as EKS, RDS, Lambda, S3, DynamoDB, ElastiCache, SQS, and AWS Batch. Passion for automation, with experience in containerisation (e.g. Docker), shell scripting, and CI/CD pipelines, including GitHub Actions. Solid understanding of software engineering best practices throughout the development lifecycle, including Agile methodologies, coding standards, peer code reviews, version control, build processes, testing, and deployment.
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear all, We are looking for highly motivated PhD freshers in Data Science, AI/ML, or related disciplines to join our team as AI/ML Researchers. This role is designed for individuals with a strong academic foundation and hands-on project experience, eager to apply their knowledge to real-world AI and Large Language Model (LLM) applications.
Key Responsibilities: Research, design, and implement models in AI/ML/NLP, with a focus on Large Language Models (LLMs). Collaborate with cross-functional teams to integrate models into production or research pipelines. Contribute to codebases and perform experiments using Python and relevant ML libraries. Develop proof-of-concept (PoC) applications or tools based on self-designed research or side projects.
Required Qualifications: PhD (completed) in Data Science, Computer Science, Artificial Intelligence, Machine Learning, or a related field. Strong proficiency in Python and libraries such as NumPy, pandas, scikit-learn, PyTorch, or TensorFlow. Practical understanding of foundational ML and deep learning techniques (e.g., regression, classification, transformers). Experience with LLMs (e.g., GPT, BERT, LLaMA), through either academic research or side projects. Demonstrated ability to build end-to-end AI/ML/NLP projects (e.g., GitHub projects, open-source contributions, thesis work).
If you are interested, kindly share your updated CV by emailing it to recruitment.india@maxisit.com.
Posted 1 day ago
6.0 years
20 - 30 Lacs
Hyderabad, Telangana, India
On-site
This role is for one of Weekday's clients. Salary range: Rs 20,00,000 - Rs 30,00,000 (i.e., INR 20-30 LPA). Min Experience: 6 years. Location: Telangana, Chennai. Job Type: full-time.
We are seeking a skilled and motivated Machine Learning Developer to design and implement AI/ML models, contribute to proof-of-concept projects, and collaborate with cross-functional teams to deliver impactful solutions. This role requires a strong foundation in machine learning, Python programming, and cloud technologies, particularly Google Cloud Platform (GCP).
Requirements
Key Responsibilities: Design, develop, and deploy machine learning and AI models and algorithms tailored to business needs. Build and deliver Proof of Concept (POC) applications to showcase the feasibility and value of AI solutions. Write clean, maintainable, and well-documented Python code. Collaborate with data engineers to ensure high-quality data pipelines for training and evaluation. Partner with senior developers to understand project goals and contribute to architectural and technical decisions. Debug and optimize machine learning models and related applications. Stay current with emerging trends and technologies in AI/ML. Use machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn for model development. Develop and deploy ML models on Google Cloud Platform (GCP). Apply effective data preprocessing and feature engineering techniques using libraries such as Pandas and NumPy. Utilize Vertex AI for end-to-end model training, deployment, and monitoring. Integrate Google Gemini for advanced AI features when applicable.
Qualifications: Bachelor's degree in Computer Science, Artificial Intelligence, or a related discipline. Minimum of 3 years of experience in designing and implementing AI/ML solutions. Proficiency in Python with strong coding practices. Hands-on experience with ML frameworks like TensorFlow, PyTorch, or Scikit-learn. Solid understanding of machine learning algorithms and concepts. Ability to work both independently and collaboratively within a team. Strong analytical and problem-solving skills. Effective verbal and written communication skills. Experience with Google Cloud Platform (GCP) is preferred. Familiarity with Vertex AI and Google Gemini is an added advantage.
Key Skills: Machine Learning, Artificial Intelligence, Python, TensorFlow, PyTorch, Scikit-learn, GCP, Vertex AI, Pandas, NumPy, Google Gemini
Posted 1 day ago
6.0 years
20 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
This role is for one of Weekday's clients. Salary range: Rs 20,00,000 - Rs 30,00,000 (i.e., INR 20-30 LPA). Min Experience: 6 years. Location: Telangana, Chennai. Job Type: full-time.
We are seeking a skilled and motivated Machine Learning Developer to design and implement AI/ML models, contribute to proof-of-concept projects, and collaborate with cross-functional teams to deliver impactful solutions. This role requires a strong foundation in machine learning, Python programming, and cloud technologies, particularly Google Cloud Platform (GCP).
Requirements
Key Responsibilities: Design, develop, and deploy machine learning and AI models and algorithms tailored to business needs. Build and deliver Proof of Concept (POC) applications to showcase the feasibility and value of AI solutions. Write clean, maintainable, and well-documented Python code. Collaborate with data engineers to ensure high-quality data pipelines for training and evaluation. Partner with senior developers to understand project goals and contribute to architectural and technical decisions. Debug and optimize machine learning models and related applications. Stay current with emerging trends and technologies in AI/ML. Use machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn for model development. Develop and deploy ML models on Google Cloud Platform (GCP). Apply effective data preprocessing and feature engineering techniques using libraries such as Pandas and NumPy. Utilize Vertex AI for end-to-end model training, deployment, and monitoring. Integrate Google Gemini for advanced AI features when applicable.
Qualifications: Bachelor's degree in Computer Science, Artificial Intelligence, or a related discipline. Minimum of 3 years of experience in designing and implementing AI/ML solutions. Proficiency in Python with strong coding practices. Hands-on experience with ML frameworks like TensorFlow, PyTorch, or Scikit-learn. Solid understanding of machine learning algorithms and concepts. Ability to work both independently and collaboratively within a team. Strong analytical and problem-solving skills. Effective verbal and written communication skills. Experience with Google Cloud Platform (GCP) is preferred. Familiarity with Vertex AI and Google Gemini is an added advantage.
Key Skills: Machine Learning, Artificial Intelligence, Python, TensorFlow, PyTorch, Scikit-learn, GCP, Vertex AI, Pandas, NumPy, Google Gemini
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Proven experience with GenAI technologies, with 6+ years of overall experience. Design, fine-tune, and deploy state-of-the-art Generative AI models for various business use cases. Train and customize large language models (LLMs) for domain-specific requirements. Work with Natural Language Processing (NLP), text-to-image, and other generative AI technologies. Leverage Azure OpenAI Service to design and deploy Generative AI models. Strong proficiency in Python and experience with AI/ML libraries such as Pandas, NumPy, and PyTorch.
Posted 1 day ago
0 years
0 Lacs
India
Remote
Position: Data Science Intern. Company: IGNITERN. Location: Remote. Duration: 3-6 Months. Stipend: Unpaid. Type: Internship (Certificate Provided).
About IGNITERN: IGNITERN is a forward-thinking digital solutions company dedicated to nurturing emerging talent in the tech space. We focus on real-world problem-solving, innovation, and offering early-career professionals an opportunity to explore and grow in their field.
Internship Overview: We are seeking Data Science Interns who are passionate about data and eager to build models, analyze trends, and create insights that support business strategies. If you're enthusiastic about applying machine learning and statistical techniques in real-world contexts, this internship is for you.
Roles & Responsibilities: Assist in collecting, cleaning, and preparing data for analysis. Apply machine learning algorithms, statistical models, and exploratory data analysis techniques. Collaborate with cross-functional teams to define problems and propose data-driven solutions. Develop visualizations, predictive models, and performance reports. Document processes and present findings in a clear, actionable format.
Who Can Apply: Students or recent graduates in Computer Science, Data Science, Engineering, Statistics, or a related field. Must be available for at least 3 months. This is a remote and unpaid internship focused on skill-building and learning.
Required Skills: Proficiency in Python (NumPy, pandas, etc.)
Knowledge of machine learning concepts and basic algorithms Familiarity with data visualization tools like Matplotlib or Seaborn Strong analytical thinking and attention to detail Good written and verbal communication skills What You'll Gain: Practical, hands-on experience working on live projects Mentorship and collaboration with experienced data professionals Internship Certificate upon successful completion Portfolio development with real-world data science tasks Opportunity to grow in a supportive, learning-focused environment How to Apply: Send your resume and any supporting work (GitHub, Kaggle, reports, etc.) to: 📩 hello@ignitern.in Shortlisted candidates will be contacted for a brief assignment or interview.
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
India
On-site
Description
Brief Job Overview: The Digital & Innovation group at USP is seeking a Data Scientist with skills in advanced analytics (predictive modeling, machine learning, natural language processing) and data visualization to work on projects that drive innovation and deliver digital solutions. We are seeking someone who understands the power of data, enjoys communicating insights, and helps create a unified experience across our ecosystem.
How will YOU create impact here at USP? In this role, you contribute to USP's public health mission of increasing equitable access to high-quality, safe medicine and improving global health through public standards and related programs. In addition, as part of our commitment to our employees, Global, People, and Culture, in partnership with the Equity Office, regularly invests in the professional development of all people managers. This includes training in inclusive management styles and other competencies necessary to ensure engaged and productive work environments.
Use exploratory data analysis to spot anomalies, understand patterns, test hypotheses, or check assumptions. Apply various ML techniques to perform classification or regression tasks that drive business impact and address identified needs in an agile manner. Use natural language processing techniques to extract information and improve business workflows. Interpret and communicate results clearly and concisely to audiences with varying backgrounds and degrees of technical understanding. Collaborate with other data scientists, data engineers, and the IT team to help ensure project delivery and success.
Who is USP Looking For? The successful candidate will have a demonstrated understanding of our mission, a commitment to excellence through inclusive and equitable behaviors and practices, and the ability to quickly build credibility with stakeholders, along with the following competencies and experience:
Education: Bachelor's degree in a relevant field (e.g. Engineering, Analytics or Data Science, Computer Science, Statistics) or equivalent experience.
Experience: Data Scientist: 3-6 years of hands-on experience in data science, advanced analytics, machine learning, statistics, and natural language processing. Senior Data Scientist: 6-10 years of hands-on experience in the same areas. Technical proficiency in Python packages (pandas, numpy, regex, scikit-learn, xgboost, and visualization packages such as matplotlib and seaborn) and SQL. Proficiency in CNN/RNN models. Proficiency in using GenAI concepts with graph data. Experience with data extraction and scraping. Experience with XML documents and the DOM model.
Additional Desired Preferences: Master's degree (Information Systems Management, Analytics, Data Engineering, Sciences, or another quantitative program). Experience with scientific chemistry nomenclature, prior work experience in life sciences, chemistry, or hard sciences, or a degree in the sciences. Experience with pharmaceutical / IQVIA datasets and nomenclature. Experience translating stakeholder needs into technical project outputs. Experience working with knowledge graphs in conjunction with RAG patterns and chunking methodologies. Ability to explain complex technical issues to a non-technical audience. Self-directed and able to handle multiple concurrent projects and prioritize tasks independently. Able to make tough decisions when trade-offs are required to deliver results. Strong communication skills required: verbal, written, and interpersonal.
Supervisory Responsibilities: This is a non-supervisory position.
Benefits: USP provides benefits to protect you and your family today and tomorrow. From company-paid time off and comprehensive healthcare options to retirement savings, you can have peace of mind that your personal and financial well-being is protected.
Note: USP does not accept unsolicited resumes from 3rd party recruitment agencies and is not responsible for fees from recruiters or other agencies except under specific written agreement with USP. Who is USP? The U.S. Pharmacopeial Convention (USP) is an independent scientific organization that collaborates with the world's top authorities in health and science to develop quality standards for medicines, dietary supplements, and food ingredients. USP's fundamental belief that Equity = Excellence manifests in our core value of Passion for Quality through our more than 1,300 hard-working professionals across twenty global locations to deliver the mission to strengthen the supply of safe, quality medicines and supplements worldwide. At USP, we value inclusivity for all. We recognize the importance of building an organizational culture with meaningful opportunities for mentorship and professional growth. From the standards we create, the partnerships we build, and the conversations we foster, we affirm the value of Diversity, Equity, Inclusion, and Belonging in building a world where everyone can be confident of quality in health and healthcare. USP is proud to be an equal employment opportunity employer (EEOE) and affirmative action employer. We are committed to creating an inclusive environment in all aspects of our work—an environment where every employee feels fully empowered and valued irrespective of, but not limited to, race, ethnicity, physical and mental abilities, education, religion, gender identity, and expression, life experience, sexual orientation, country of origin, regional differences, work experience, and family status. We are committed to working with and providing reasonable accommodation to individuals with disabilities.
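As a small illustration of two skills this listing names, regex extraction and the XML DOM, here is a minimal standard-library sketch; the tag names, attribute, and assay range are invented for the example and are not USP data:

```python
import re
from xml.dom.minidom import parseString

# Hypothetical monograph snippet; the element and attribute names are made up.
doc = parseString(
    "<substance><name>Acetaminophen</name>"
    "<assay unit='%'>98.0-102.0</assay></substance>"
)

# DOM traversal: pull out the text content of the elements of interest.
name = doc.getElementsByTagName("name")[0].firstChild.data
assay = doc.getElementsByTagName("assay")[0]
unit = assay.getAttribute("unit")

# Regex extraction: split an assay range like "98.0-102.0" into two floats.
low, high = map(float, re.match(r"([\d.]+)-([\d.]+)", assay.firstChild.data).groups())

print(name, unit, low, high)  # Acetaminophen % 98.0 102.0
```

In practice a script like this would run over full documents; `xml.etree.ElementTree` or lxml are common alternatives to the DOM API.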
Posted 1 day ago
3.0 - 6.0 years
5 - 15 Lacs
Hyderabad
Remote
Job Title: Data Scientist – Python Experience: 3 to 6 Years Location: Remote Job Type: Full-Time Education: B.E./B.Tech or M.Tech in Computer Science, Data Science, Statistics, or a related field Job Summary We are seeking a talented and results-driven Data Scientist with 3–6 years of experience in Python-based data science workflows. This is a remote, full-time opportunity for professionals who are passionate about solving real-world problems using data and statistical modeling. The ideal candidate should be highly proficient in Python and have hands-on experience with data exploration, machine learning, model deployment, and working with large datasets. Key Responsibilities Analyze large volumes of structured and unstructured data to generate actionable insights Design, develop, and deploy machine learning models using Python and related libraries Collaborate with cross-functional teams including product, engineering, and business to define data-driven solutions Develop data pipelines and ensure data quality, consistency, and reliability Create and maintain documentation for methodologies, code, and processes Communicate findings and model results clearly to technical and non-technical stakeholders Continuously research and implement new tools, techniques, and best practices in data science Required Skills & Qualifications 3–6 years of experience in a data science role using Python Proficiency in Python data science libraries (Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn) Strong statistical analysis and modeling skills Experience with machine learning algorithms (classification, regression, clustering, etc.) 
Familiarity with model evaluation, tuning, and deployment techniques Hands-on experience with SQL and working with large databases Exposure to cloud platforms (AWS, GCP, or Azure) is a plus Experience with version control (Git), Jupyter notebooks, and collaborative data tools Preferred Qualifications Advanced degree (Master’s preferred) in Computer Science, Data Science, Statistics, or a related discipline Experience with deep learning frameworks like TensorFlow or PyTorch is a plus Familiarity with MLOps tools such as MLflow, Airflow, or Docker Experience in remote team collaboration and agile project environments What We Offer 100% remote work with flexible hours Competitive compensation package Access to cutting-edge tools and real-world projects A collaborative and inclusive work culture Opportunities for continuous learning and professional development Job Type: Full-time Pay: ₹500,000.00 - ₹1,500,000.00 per year Schedule: Day shift Monday to Friday Morning shift Work Location: In person
Posted 1 day ago
0 years
3 - 4 Lacs
India
On-site
Role: Data Science Trainer / Faculty
Responsibilities: Conduct training sessions in Python, Machine Learning, Data Analysis, etc. Prepare study materials, assignments, and project guidelines. Guide students on live projects and case studies. Stay updated with the latest trends and tools in Data Science.
Requirements: Strong command of Python, NumPy, Pandas, Matplotlib, and Scikit-learn. Knowledge of ML algorithms, model evaluation, and visualization. Good communication and mentoring skills. Graduation/Post-Graduation in Computer Science/Data Science or a related field.
Why Join Us? ✅ Friendly and supportive academic environment ✅ Scope for growth and certifications ✅ Opportunity to inspire the next generation of data professionals
Job Type: Full-time. Pay: ₹30,000.00 - ₹35,000.00 per month. Benefits: Paid sick time. Work Location: In person
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Data Science Trainer
Job Title: Data Science Trainer. Organization: SkillCircle. Location: Gurugram. Employment Type: Full-time / Part-time.
Job Summary: SkillCircle is looking for a passionate and knowledgeable Data Science Trainer to mentor and guide students through their journey into the world of data science. The ideal candidate should have a strong foundation in Python, machine learning, and data analytics, and the ability to simplify complex concepts for learners at various levels.
Required Skills: Proficiency in Python, Pandas, NumPy, Scikit-learn, and Matplotlib/Seaborn. Knowledge of machine learning algorithms and their real-world applications. Understanding of statistics, data preprocessing, and EDA. Excellent communication and presentation skills. Ability to engage and inspire students from non-technical backgrounds.
Eligibility Criteria: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field. 2-5 years of teaching/training experience or industry experience in data science roles. Prior experience as a trainer or mentor is a plus. Certifications in Data Science or Machine Learning (preferred but not mandatory).
Perks and Benefits: Competitive salary up to ₹50,000/month. Flexible working hours. Exposure to a large network of learners and tech professionals. Opportunity to work on live projects and build a personal brand through content/webinars.
Posted 1 day ago
0.6 - 2.0 years
1 - 2 Lacs
Cannanore
On-site
Job Title: Python & Data Science Trainer. Location: Kannur, Kerala. Employment Type: Full-Time / Part-Time. Experience: 6 months to 2 years (freshers may also apply). Qualification: Any degree. Reporting To: Academic Head / RTH.
Job Summary: We are seeking a skilled and passionate Python & Data Science Trainer to join our academic team. The trainer will be responsible for delivering high-quality training sessions to students, professionals, and corporate clients in Python programming, data analysis, and machine learning techniques.
Key Responsibilities: Design and deliver comprehensive training modules in Python, Data Science, and Machine Learning. Conduct classes (online/offline) for students, job seekers, and working professionals. Develop curriculum content, assignments, projects, and real-world case studies. Prepare assessments, quizzes, and capstone projects to evaluate student understanding. Keep course material up to date with the latest industry trends and tools. Guide students on projects and support them during practical sessions. Assist in the development of internal tools, demos, and data-driven models. Take ownership of student performance, engagement, and satisfaction.
Key Skills & Tools Required: Strong programming knowledge in Python: core Python, OOP, file handling, and libraries (NumPy, Pandas, Matplotlib, etc.). Data analysis, exploratory data analysis (EDA), and data cleaning. Statistics and probability for Data Science. Machine learning algorithms: linear/logistic regression, decision trees, random forest, KNN, SVM, etc. Model evaluation techniques: cross-validation, confusion matrix, etc. Hands-on experience with Jupyter Notebook, Google Colab, and Anaconda. Familiarity with SQL, Power BI/Tableau, or deep learning is a plus. Excellent communication, presentation, and mentoring skills.
Preferred Certifications: Python certification (e.g., PCEP, PCAP). Data Science certification (Coursera, Udemy, IBM, Google, etc.)
Machine Learning or AI specialization certificates. Salary: As per industry standards. Perks: Performance bonuses, training certifications, and growth opportunities. Job Types: Full-time, Permanent, Fresher. Pay: ₹12,000.00 - ₹20,000.00 per month. Benefits: Leave encashment, paid time off. Schedule: Day shift, fixed shift, morning shift. Supplemental Pay: Yearly bonus. Application Deadline: 04/08/2025. Expected Start Date: 04/08/2025
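Evaluation topics from the skills list above, such as the confusion matrix, can be taught without any libraries; a minimal plain-Python sketch with made-up labels:

```python
# Build the four cells of a binary confusion matrix by counting
# (actual, predicted) pairs, then derive accuracy and precision.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
print(tp, tn, fp, fn, accuracy, precision)  # 3 3 1 1 0.75 0.75
```

The same counts feed recall and F1, which makes this a compact classroom exercise before introducing `sklearn.metrics`.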
Posted 1 day ago
0 years
4 - 7 Lacs
Gurgaon
On-site
Job Purpose
Hands-on data automation engineer with strong Python or Java coding skills and solid SQL expertise, who can work with large datasets, understand stored procedures, and independently write data-driven automation logic. Develop and execute test cases with a focus on Fixed Income trading workflows. The requirement goes beyond automation tools and aligns better with a junior developer or data automation role.
Desired Skills and Experience
Strong programming experience in Python (preferred) or Java. Strong experience working with Python and its libraries, such as Pandas and NumPy. Hands-on experience with SQL, including writing and debugging complex queries (joins, aggregations, filtering, etc.) and understanding stored procedures and using them in automation. Experience working with data structures, large tables, and datasets. Comfort with data manipulation, validation, and building comparison scripts.
Nice to have: Familiarity with PyCharm, VS Code, or IntelliJ for development, and an understanding of how automation integrates into CI/CD pipelines. Prior exposure to financial data or post-trade systems (a bonus). Excellent communication skills, both written and verbal. Experience working with test management tools (e.g., X-Ray/JIRA).
Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Able to work independently without close supervision and collaboratively as part of cross-team efforts. Experience delivering projects within an agile environment.
Key Responsibilities
Write custom data validation scripts based on provided regression test cases. Read, understand, and translate stored procedure logic into test automation. Compare datasets across environments and generate diffs. Collaborate with team members and follow structured automation practices. Contribute to building and maintaining a central automation script repository. Establish and implement comprehensive QA strategies and test plans from scratch. Develop and execute test cases with a focus on Fixed Income trading workflows. Drive the creation of regression test suites for critical back-office applications. Collaborate with developers, business analysts, and project managers to ensure quality throughout the SDLC. Provide clear and concise reporting on QA progress and metrics to management. Bring strong subject matter expertise in the Financial Services industry, particularly fixed income trading products and workflows. Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders. Independently troubleshoot difficult and complex issues in different environments. Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries. Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, and bring a natural aptitude for developing good internal working relationships and a flexible work ethic. Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).
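The "compare datasets across environments and generate diffs" responsibility can be sketched with the standard library's sqlite3 module; the table and column names below are hypothetical, and a real script would connect to the two environments' actual databases:

```python
import sqlite3

# Two in-memory tables standing in for the same dataset in two environments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE env_a (trade_id INTEGER, price REAL);
    CREATE TABLE env_b (trade_id INTEGER, price REAL);
    INSERT INTO env_a VALUES (1, 100.5), (2, 99.0), (3, 101.2);
    INSERT INTO env_b VALUES (1, 100.5), (2, 99.5);
""")

# EXCEPT yields rows present in one table but not the other; running it both
# ways produces a two-sided diff (mismatched and missing rows).
only_a = conn.execute("SELECT * FROM env_a EXCEPT SELECT * FROM env_b").fetchall()
only_b = conn.execute("SELECT * FROM env_b EXCEPT SELECT * FROM env_a").fetchall()

print("only in A:", only_a)  # trade 2 differs in price; trade 3 is missing from B
print("only in B:", only_b)
```

The EXCEPT approach works wherever both datasets can be loaded into the same engine; across physically separate databases, the equivalent is fetching both result sets and diffing them in Pandas.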
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
As a Senior Data and Applied Scientist, you will work with Pattern's Data Science team to curate and analyze data, applying machine learning models and statistical techniques to optimize advertising spend on ecommerce platforms.
What You'll Do
Design, build, and maintain machine learning and statistical models that optimize advertising campaigns to improve search visibility and conversion rates on ecommerce platforms. Continuously improve the quality of our machine learning models, especially for key metrics like search ranking, keyword bidding, CTR, and conversion-rate estimation. Conduct research to integrate new data sources, innovate in feature engineering, fine-tune algorithms, and enhance data pipelines for robust model performance. Analyze large datasets to extract actionable insights that guide advertising decisions. Work closely with teams across different regions (US and India), ensuring seamless collaboration and knowledge sharing. Dedicate 20% of your time to MLOps for efficient, reliable model deployment and operations.
What We're Looking For
Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field. 3-6 years of industry experience in building and deploying machine learning solutions. Strong data manipulation and programming skills in Python and SQL, and hands-on experience with libraries such as Pandas, NumPy, scikit-learn, and XGBoost. Strong problem-solving skills and an ability to analyze complex data. In-depth expertise in a range of machine learning and statistical techniques, such as linear and tree-based models, along with an understanding of model evaluation metrics. Experience with Git, AWS, Docker, and MLflow is advantageous.
Additional Pluses
Portfolio: an active Kaggle or GitHub profile showcasing relevant projects. Domain knowledge: familiarity with advertising and ecommerce concepts, which helps in tailoring models to business needs.
Pattern is an equal opportunity employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees.
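One common baseline for the CTR estimation mentioned above (not necessarily Pattern's approach) is per-keyword smoothing with a Beta prior, so that low-traffic keywords shrink toward a global rate; the keywords and counts below are invented:

```python
# Smoothed CTR per keyword: (clicks + a) / (impressions + a + b),
# where a / (a + b) encodes a prior click-through rate.
a, b = 1.0, 99.0  # prior CTR of 1%

keywords = {
    # keyword: (clicks, impressions)
    "running shoes": (120, 10_000),  # plenty of data -> close to raw CTR
    "trail shoes": (2, 40),          # tiny sample -> pulled toward the prior
}

def smoothed_ctr(clicks, impressions, a=a, b=b):
    """Posterior-mean CTR under a Beta(a, b) prior."""
    return (clicks + a) / (impressions + a + b)

for kw, (clicks, impressions) in keywords.items():
    print(kw, round(smoothed_ctr(clicks, impressions), 4))
```

Raw CTR for "trail shoes" would be 5% on 40 impressions; the prior pulls the estimate down to roughly 2.1%, which is the shrinkage behavior a bidding model wants before trusting sparse keywords.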
Posted 1 day ago