14.0 - 24.0 years
40 - 90 Lacs
Bengaluru
Work from Office
We are hiring for a Director Data Scientist for one of our leading clients - Sigmoid - in Bengaluru.

Roles and Responsibilities:
• Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it.
• Handle end-to-end client AI & analytics programs in a fluid environment. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Discover solutions hidden in large datasets and drive business results with data-based insights.
• Contribute to internal product development initiatives related to data science.
• Drive the excellent project management required to deliver complex projects, including effort/time estimation.
• Be proactive, with full ownership of the engagement. Build scalable client engagement-level processes for faster turnaround and higher accuracy.
• Define technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
• Manage team members to ensure that the project plan is adhered to over the course of the project.
• Build a trusted-advisor relationship with IT management at clients and with internal accounts leadership.

Mandated Skills:
• A B.Tech/M.Tech/MBA from a top-tier institute, preferably in a quantitative subject
• 14+ years of hands-on experience in applied Machine Learning, AI, and analytics
• Experience in scientific programming with languages like Python, R, SQL, and NoSQL, plus Spark with ML tools and cloud technology (AWS, Azure, GCP)
• Experience with Python libraries such as NumPy, pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc.
• Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, with experience developing models and solutions in these areas
• Expertise in client engagement, understanding complex problem statements, and offering solutions in domains such as Supply Chain, Manufacturing, CPG, and Marketing

Desired Skills:
• Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems
• Comfortable with large-scale data processing and distributed computing
• Providing required inputs to sales and pre-sales activities
• A self-starter who can work well with minimal guidance
• Excellent written and verbal communication skills

If interested, kindly share your profile with sukssidha@techtales.in
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Lead Analyst

Zupee is India's fastest growing technology-backed behavioral science company. We are innovating skill-based gaming with a mission to become the most trusted and responsible entertainment company in the world, and we constantly focus on innovating indigenous games to entertain the masses. Our strategy is to invest in our people and user experience to drive profitable growth and become the market leader in our space. We have experienced phenomenal growth since inception and have run profitably at the EBT level since Q3 2020. We have closed Series B funding at $102 million, at a valuation of $600 million. The company also announced a partnership with Reliance Jio Platforms, through which Zupee will distribute its games to all customers using Jio phones. The partnership gives Zupee the biggest reach of all gaming companies in India, transforming it from a fast-growing startup into a firm contender for the biggest gaming studio in India.

ABOUT THE JOB

You will be responsible for:
- All sorts of P&L and product-related ad-hoc analysis and investigations
- Hypothesis generation and validation for any movement in P&L metrics (see the sketch below)
- Driving end-to-end analytics and insights to help the team take data-driven decisions
- Collaborating across functions like product, marketing, design, growth, strategy, customer relations, and technology

What are we looking for?
- Must have: SQL coding skills and advanced SQL knowledge
- Must have: a data visualization / ETL tool - Tableau / Power BI / SSIS / BODS
- Must have: expertise in MS Excel - VLOOKUP, HLOOKUP, pivots, Solver, and Data Analysis
- Must have: experience in statistics and analytical capabilities
- Good to have: Python (pandas, NumPy) or R
- Good to have: machine learning knowledge and predictive modelling
- Good to have: AWS

Qualifications and Skills
- Bachelor's or Master's in Technology or a related field
- Minimum 4-6 years of experience in an analytics role
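For candidates gauging the day-to-day, a minimal pandas sketch of the kind of P&L-movement check the role describes; the table, column names, and the 5% flag threshold are all invented for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical daily P&L table; real column names and sources will differ.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=60, freq="D"),
    "revenue": rng.normal(100_000, 8_000, 60).round(2),
})

# Week-over-week movement: compare weekly totals and flag swings
# that would warrant hypothesis generation and deeper investigation.
weekly = df.set_index("date")["revenue"].resample("W").sum()
movement = weekly.pct_change() * 100
flagged = movement[movement.abs() > 5]  # the 5% threshold is an assumption
print(flagged)
```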
Posted 2 weeks ago
4.0 - 9.0 years
9 - 16 Lacs
Bangalore Rural, Bengaluru
Hybrid
Required Skill Set:

Core Python Development: Strong hands-on experience in core Python application development. We are not looking for an ML/Data Science background; the focus is on back-end/application logic.

Object-Oriented Programming (OOP): Proficient understanding and practical application of OOP concepts in Python.

Service-Oriented Architecture (SOA): Experience in building and consuming services following SOA principles.

REST API Development with Flask: Strong knowledge of building and maintaining RESTful APIs using the Flask framework. Familiarity with request handling, routing, serialization, and authentication mechanisms (see the sketch below).

Nice to Have (Optional):
- Experience with microservices architecture.
- Familiarity with Docker or Kubernetes for containerization and deployment.
- Exposure to CI/CD pipelines.
- Knowledge of SQL/NoSQL databases.
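As a rough illustration of the Flask REST work described above (the endpoint names, payload shape, and in-memory store are invented, not from the posting):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_orders = {}  # in-memory store purely for illustration

@app.route("/orders", methods=["POST"])
def create_order():
    # Request handling: parse the JSON body sent by the client.
    payload = request.get_json(force=True)
    order_id = len(_orders) + 1
    _orders[order_id] = payload
    # Serialization: return the stored resource back as JSON.
    return jsonify({"id": order_id, **payload}), 201

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    # Routing: Flask converts the path segment to an int for us.
    order = _orders.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": order_id, **order})

if __name__ == "__main__":
    app.run(debug=True)
```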
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
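For context, a compact scikit-learn sketch of the train/tune/evaluate loop this posting keeps returning to; the dataset and parameter grid are placeholders, and a real Azure ML pipeline would wrap steps like these:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Hyperparameter tuning with cross-validation; the grid is a placeholder.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
)
search.fit(X_train, y_train)

# Hold-out evaluation before any deployment step.
print("best params:", search.best_params_)
print("test accuracy:", accuracy_score(y_test, search.predict(X_test)))
```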
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)

Company Overview:
Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.

Job Description:
We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical machine learning, deep learning, and generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.

Key Responsibilities:

ML/DL Solution Development & Deployment:
- Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code.
- Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
- Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
- Model different marketing scenarios across a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk, and churn) and data limitations (sparse or incomplete labels, single-class learning).

Large-Scale Data Handling & Processing:
- Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).

Generative AI & Large Language Models (LLMs):
- Leverage an in-depth understanding of transformer architectures and the principles of large and small language models.
- Apply practical experience in building LLM-ready data management layers for large-scale structured and unstructured data.
- Apply a foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReAct, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and human-in-the-loop systems.

Experimentation, Analysis & System Design:
- Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements.
- Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time.

Collaboration, Communication & Mentorship:
- Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members.
- Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions.
- Work closely with cross-functional teams and clients to deliver impactful solutions.

Prototyping & Impact Measurement:
- Be comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment.
- Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:

Core Machine Learning & Deep Learning:
- In-depth knowledge of Artificial Neural Networks (ANNs); 1D, 2D, and 3D Convolutional Neural Networks (ConvNets); LSTMs; and Transformer models.
- Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation.
- Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches.
- Advanced understanding and application of model explainability techniques.

Data Analysis & Processing:
- Proficiency in Python and its data science ecosystem, including libraries like NumPy, pandas, Dask, and PySpark for large-scale data processing and analysis.
- Ability to perform effective feature engineering by understanding business objectives.

ML/DL Frameworks & Tools:
- Hands-on experience with ML/DL libraries such as scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.

Natural Language Processing (NLP):
- Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning.

Cloud & MLOps:
- Experience with the AWS ML stack or equivalent cloud platforms.
- Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).

Problem Solving & Research:
- Strong logical and reasoning skills.
- Good understanding of the Python ecosystem and experience implementing research papers.

Collaboration & Prototyping:
- Ability to thrive in a fast-paced development and rapid prototyping environment.

Relevant to Have:
- Expertise in claims data and a background in the pharmaceutical industry.
- Awareness of best software design practices.
- Understanding of backend frameworks like Flask.
- Knowledge of recommender systems, representation learning, and PU learning.

Benefits and Perks:
- Competitive ESOP grants.
- Opportunity to work with Fortune 500 companies and world-class teams.
- Support for publishing papers and attending academic/industry conferences.
- Access to networking events, conferences, and seminars.
- Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.

Appendix: Technical Skills (Must-Haves)

Deep understanding of the following:

Data Processing:
- Wrangling: some understanding of querying databases (MySQL, PostgreSQL, etc.); very fluent in the usage of libraries such as pandas, NumPy, and statsmodels.
- Visualization: exposure to Matplotlib, Plotly, Altair, etc.

Machine Learning Exposure:
- Machine learning fundamentals, e.g., PCA, correlations, statistical tests.
- Time series models, e.g., ARIMA, Prophet.
- Tree-based models, e.g., Random Forest, XGBoost.
- Deep learning models, e.g., understanding and experience of ConvNets, ResNets, UNets.
- GenAI-based models: experience utilizing large-scale language models such as GPT-4 or other alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom fine-tuning.

Code Versioning Systems: Git, GitHub

If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
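To make the embedding-and-semantic-matching expectations above concrete, a minimal sentence-transformers sketch; the model name and texts are placeholders, not Akaike's actual stack:

```python
from sentence_transformers import SentenceTransformer

# Model choice is an assumption; any sentence-embedding model works similarly.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["running shoes for trail terrain", "waterproof hiking boots", "espresso machine"]
query = "footwear for mountain hikes"

doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)

# With normalized vectors, the dot product equals cosine similarity.
scores = (doc_vecs @ query_vec.T).ravel()
for doc, score in sorted(zip(docs, scores), key=lambda t: -t[1]):
    print(f"{score:.3f}  {doc}")
```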
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Overview:
The Associate Data Scientist supports the development and implementation of data models, focusing on Machine Learning, under the supervision of more experienced scientists, contributing to the team's innovative projects.

Job Description:
- Assist in the development of Machine Learning models and algorithms, contributing to the design and implementation of data-driven solutions.
- Perform data preprocessing, cleaning, and analysis, preparing datasets for modeling and supporting higher-level data science initiatives.
- Learn from and contribute to projects involving Deep Learning and General AI, gaining hands-on experience under the guidance of senior data scientists.
- Engage in continuous professional development, enhancing skills in Python, Machine Learning, and related areas through training and practical experience.
- Collaborate with team members to ensure the effective implementation of data science solutions, participating in brainstorming sessions and project discussions.
- Support the documentation of methodologies and results, ensuring transparency and reproducibility of data science processes.

Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, with a strong interest in Machine Learning, Deep Learning, and AI.
- Experience in a data science role, demonstrating practical experience and strong Python programming skills.
- Exposure to Business Intelligence (BI) and Data Engineering concepts and tools.
- Familiarity with data platforms such as Dataiku is a bonus.

Skills:
- Solid understanding of Machine Learning principles and practical experience in Python programming.
- Familiarity with data science and machine learning libraries in Python (e.g., scikit-learn, pandas, NumPy).
- Eagerness to learn Deep Learning and General AI technologies, with a proactive approach to acquiring new knowledge and skills.
- Strong analytical and problem-solving abilities, capable of tackling data-related challenges and deriving meaningful insights.
- Basic industry domain knowledge, with a willingness to deepen expertise and apply data science principles to solve real-world problems.
- Effective communication skills, with the ability to work collaboratively in a team environment and contribute to discussions.

v4c.ai is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees, regardless of race, color, religion, gender, sexual orientation, national origin, age, disability, or veteran status. We believe in the power of diversity and strive to foster a culture where every team member feels valued and respected. We encourage applications from individuals of all backgrounds and experiences. If you are passionate about diversity and innovation and thrive in a collaborative environment, we invite you to apply and join our team.
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Overview:
The Data Scientist supports the development and implementation of data models, focusing on Machine Learning, under the supervision of more experienced scientists, contributing to the team's innovative projects.

Job Description:
- Assist in the development of Machine Learning models and algorithms, contributing to the design and implementation of data-driven solutions.
- Perform data preprocessing, cleaning, and analysis, preparing datasets for modeling and supporting higher-level data science initiatives.
- Learn from and contribute to projects involving Deep Learning and General AI, gaining hands-on experience under the guidance of senior data scientists.
- Engage in continuous professional development, enhancing skills in Python, Machine Learning, and related areas through training and practical experience.
- Collaborate with team members to ensure the effective implementation of data science solutions, participating in brainstorming sessions and project discussions.
- Support the documentation of methodologies and results, ensuring transparency and reproducibility of data science processes.

Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, with a strong interest in Machine Learning, Deep Learning, and AI.
- Experience in a data science role, demonstrating practical experience and strong Python programming skills.
- Exposure to Business Intelligence (BI) and Data Engineering concepts and tools.
- Familiarity with data platforms such as Dataiku is a bonus.

Skills:
- Solid understanding of Machine Learning principles and practical experience in Python programming.
- Familiarity with data science and machine learning libraries in Python (e.g., scikit-learn, pandas, NumPy).
- Eagerness to learn Deep Learning and General AI technologies, with a proactive approach to acquiring new knowledge and skills.
- Strong analytical and problem-solving abilities, capable of tackling data-related challenges and deriving meaningful insights.
- Basic industry domain knowledge, with a willingness to deepen expertise and apply data science principles to solve real-world problems.
- Effective communication skills, with the ability to work collaboratively in a team environment and contribute to discussions.

v4c.ai is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees, regardless of race, color, religion, gender, sexual orientation, national origin, age, disability, or veteran status. We believe in the power of diversity and strive to foster a culture where every team member feels valued and respected. We encourage applications from individuals of all backgrounds and experiences. If you are passionate about diversity and innovation and thrive in a collaborative environment, we invite you to apply and join our team.
Posted 2 weeks ago
0.0 years
0 Lacs
Chennai District, Tamil Nadu
On-site
Overview
We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will have a strong background in Python development and a passion for creating efficient and scalable software solutions.

Job description
- Knowledge of Python syntax, data types, and control flow.
- Experience with loops, conditionals, and functions.
- Familiarity with Python standard-library modules like math, datetime, and random.
- Understanding of lists, tuples, dictionaries, and sets, and working with collections.
- Proficiency in algorithms for sorting, searching, and basic problem-solving.
- Familiarity with time and space complexity (Big-O notation).
- Experience with web frameworks like Flask or Django.
- Knowledge of RESTful APIs and integrating with front-end technologies.
- Working with databases (SQL or NoSQL) using Python.
- Proficiency in libraries like pandas, NumPy, and Matplotlib.
- Knowledge of machine learning frameworks such as scikit-learn or TensorFlow.
- Familiarity with data preprocessing, feature engineering, and model evaluation.
- Writing scripts for automating tasks (e.g., web scraping with BeautifulSoup or Selenium).
- Experience with regular expressions and file handling.
- Familiarity with Git for version control and collaborating with teams.
- Experience writing unit tests with unittest or pytest (see the example below).
- Familiarity with debugging tools and techniques in Python.

Job Summary
We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will be responsible for developing and maintaining high-quality software solutions using the Python programming language.

Responsibilities
- Develop Python-based software applications
- Collaborate with the IT infrastructure team to integrate user-facing elements with server-side logic
- Write effective, scalable code
- Implement security and data protection measures
- Test and debug programs
- Manage code repositories on GitHub
- Participate in software design meetings and analyze user needs

"We're hiring across Tamil Nadu only." WP: +91 95854 01234

Job Type: Full-time
Pay: ₹61.58 - ₹65.44 per hour
Schedule: Day shift
Work Location: In person
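For applicants unsure what "unit tests with unittest or pytest" looks like in practice, a minimal pytest example; the function under test is invented purely for the demonstration:

```python
# test_pricing.py -- run with `pytest test_pricing.py`
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test; percent is expected in 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    # Happy path: a 25% discount on 200.0 should yield 150.0.
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_bad_percent():
    # Invalid input should raise, not silently produce a wrong price.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```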
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
Must-have skills:
- Deep understanding of Linux, networking fundamentals, and security
- Experience working with the AWS cloud platform and infrastructure services (EC2, S3, VPC, subnets, ELB/load balancers, RDS, Route 53, etc.)
- Experience working with infrastructure as code using Terraform or Ansible
- Experience in building, deploying, and monitoring distributed apps using container systems (Docker) and container orchestration (Kubernetes, EKS)
- Kubernetes administration: cluster setup and management, cluster configuration and networking, upgrades, monitoring and logging, security and compliance, app deployment, etc.
- Experience in automation and CI/CD integration, capacity planning, pod scheduling, resource quotas, etc.
- Experience with OS-level upgrades and patching, including vulnerability remediation
- Ability to read and understand code (Java / Python / R / Scala)

Qualifications
Nice-to-have skills:
- Experience in SAS Viya administration
- Experience managing large Big Data clusters
- Experience with Big Data tools like Hue, Hive, Spark, Jupyter, SAS, and R-Studio
- Professional coding experience in at least one programming language, preferably Python
- Knowledge of analytical libraries like pandas, NumPy, SciPy, PyTorch, etc.

Additional Information
Our uniqueness is that we truly celebrate yours. Experian's culture and people are key differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian by clicking here.
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Pitampura, Delhi, Delhi
On-site
Job Title: Data Analyst (Python & Web Scraping Expert)
Location: Netaji Subhash Place, Pitampura, New Delhi
Department: Data Analytics / Share Recovery

Job Overview:
We are seeking a detail-oriented and results-driven Data Analyst to join our team. The ideal candidate will have expertise in Python programming, web scraping, and data analysis, with a focus on IEPF share recovery. The role involves collecting, processing, and analyzing data from multiple online sources, providing actionable insights to support business decision-making.

Key Responsibilities:
- Data Scraping: Use Python and web scraping techniques to gather data from financial, regulatory, and shareholding-related websites for IEPF (Investor Education and Protection Fund) share recovery.
- Data Cleaning & Preprocessing: Clean, process, and structure raw data for analysis. Ensure data quality and integrity by identifying and correcting errors in datasets.
- Data Analysis & Visualization: Analyze large datasets to extract actionable insights regarding share recovery and trends in investor shareholding. Present findings through visualizations (e.g., graphs, dashboards).
- Reporting: Prepare and present detailed reports on share recovery patterns, trends, and forecasts based on analysis. Present findings to the management team to help drive business decisions.
- Automation & Optimization: Build and maintain automated web scraping systems to regularly fetch updated shareholding data, optimizing the data pipeline for efficiency.
- Collaboration: Work closely with business stakeholders to understand data requirements and deliver reports or visualizations tailored to specific needs related to IEPF share recovery.

Required Skills & Qualifications:

Technical Skills:
- Strong proficiency in Python for data analysis and automation.
- Expertise in web scraping using libraries such as BeautifulSoup, Selenium, and Scrapy.
- Experience with data manipulation and analysis using pandas, NumPy, and other relevant libraries.
- Familiarity with SQL for data extraction and querying relational databases.
- Knowledge of data visualization tools like Matplotlib, Seaborn, or Tableau for presenting insights in an easy-to-understand format.

Experience:
- Minimum of 2-3 years of experience as a Data Analyst or in a similar role, with a focus on Python programming and web scraping.
- Experience working with financial or investment data, particularly in areas such as IEPF, share recovery, or investor relations.
- Strong problem-solving skills with the ability to analyze complex datasets and generate actionable insights.

Additional Skills:
- Strong attention to detail and ability to work with large datasets.
- Ability to work in a collaborative team environment.
- Familiarity with cloud platforms (e.g., AWS, Google Cloud) and data storage (e.g., databases, cloud data lakes) is a plus.

Education:
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Finance, or a related field.

Soft Skills:
- Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Ability to prioritize tasks and manage multiple projects simultaneously.
- Strong organizational skills and time management.

Preferred Skills:
- Experience working in the financial industry or understanding of regulatory frameworks (e.g., IEPF regulations and procedures).
- Familiarity with machine learning models and predictive analytics for forecasting share recovery trends.
- Ability to automate workflows and optimize existing data collection pipelines.
Job Requirements:
- Comfortable working in a fast-paced environment.
- Ability to think critically and provide insights that drive strategic decisions.
- Must be self-motivated and capable of working independently with minimal supervision.
- Willingness to stay updated with the latest data analysis techniques and web scraping technologies.

Job Type: Full-time
Pay: ₹20,000.00 - ₹32,000.00 per month
Schedule: Day shift
Education: Bachelor's (Preferred)
Experience: total work: 1 year (Required)
Work Location: In person
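As a hedged illustration of the scraping-plus-pandas workflow in the responsibilities above, a minimal sketch; the URL, markup selectors, and column names are placeholders, and any real target site's terms of use and robots.txt should be respected:

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup

# Placeholder URL; the real IEPF/regulatory sources and markup will differ.
url = "https://example.com/shareholding-disclosures"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)

# Assumed three-column schema purely for the example.
df = pd.DataFrame(rows, columns=["company", "folio", "shares"])
df.to_csv("shareholding_raw.csv", index=False)
```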
Posted 2 weeks ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Sigmoid enables business transformation using data and analytics, leveraging real-time insights to make accurate and fast business decisions, by building modern data architectures using cloud and open source. Some of the world's largest data producers engage with Sigmoid to solve complex business problems. Sigmoid brings deep expertise in data engineering, predictive analytics, artificial intelligence, and DataOps. Sigmoid has been recognized as one of the fastest growing technology companies in North America in 2021 by the Financial Times, Inc. 5000, and Deloitte Technology Fast 500.

Offices: New York | Dallas | San Francisco | Lima | Bengaluru. This role is for our Bengaluru office.

Why Join Sigmoid?
• Sigmoid provides the opportunity to push the boundaries of what is possible by seamlessly combining technical expertise and creativity to tackle intrinsically complex business problems and convert them into straightforward data solutions.
• Despite being continuously challenged, you are not alone. You will be part of a fast-paced, diverse environment as a member of a high-performing team that works together to energize and inspire each other by challenging the status quo.
• A vibrant, inclusive culture of mutual respect and fun through both work and play.

Roles and Responsibilities:
• Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it.
• Handle end-to-end client AI & analytics programs in a fluid environment. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Discover solutions hidden in large datasets and drive business results with data-based insights.
• Contribute to internal product development initiatives related to data science.
• Drive the excellent project management required to deliver complex projects, including effort/time estimation.
• Be proactive, with full ownership of the engagement. Build scalable client engagement-level processes for faster turnaround and higher accuracy.
• Define technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
• Manage team members to ensure that the project plan is adhered to over the course of the project.
• Build a trusted-advisor relationship with IT management at clients and with internal accounts leadership.

Mandated Skills:
• A B.Tech/M.Tech/MBA from a top-tier institute, preferably in a quantitative subject
• 14+ years of hands-on experience in applied Machine Learning, AI and analytics
• Experience in scientific programming with languages like Python, R, SQL, and NoSQL, plus Spark with ML tools and cloud technology (AWS, Azure, GCP)
• Experience with Python libraries such as NumPy, pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc.
• Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, with experience developing models and solutions in these areas
• Expertise in client engagement, understanding complex problem statements, and offering solutions in domains such as Supply Chain, Manufacturing, CPG, and Marketing

Desired Skills:
• Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems
• Comfortable with large-scale data processing and distributed computing
• Providing required inputs to sales and pre-sales activities
• A self-starter who can work well with minimal guidance
• Excellent written and verbal communication skills
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Mohali, Punjab
On-site
Job Title: Python Developer
Location: Mohali, Punjab
Experience Required: 2-3 Years

About the Role
We are seeking an experienced and motivated Python Developer to join our team in Mohali, Punjab. In this role, you will be responsible for designing, developing, and maintaining Python-based applications that support data-driven decision-making. You will work closely with senior developers and cross-functional teams to build robust, scalable solutions that meet business needs.

Key Responsibilities
- Design, develop, and maintain Python applications for data processing and backend systems.
- Collaborate with senior developers to understand requirements and translate them into technical solutions.
- Write clean, efficient, and maintainable code, adhering to coding standards and best practices.
- Conduct thorough testing and debugging to ensure high-quality software performance.
- Prepare and maintain technical documentation for developed applications and systems.

Requirements
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- 2-3 years of professional experience in Python development.
- Strong knowledge of Python and experience with libraries such as NumPy, pandas, and Flask.
- Proficiency in writing and optimizing SQL queries.
- Solid understanding of software development principles and best practices.
- Strong analytical and problem-solving skills.
- Effective communication and collaboration skills.

Good to Have
- Experience with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP).
- Familiarity with big data technologies such as Hadoop and Spark.
- Understanding of machine learning techniques and exposure to large language models (LLMs).

About Iota Analytics
Iota Analytics is a legal data science firm offering innovative products and services across litigations and investigations, intellectual property, and tech product development. Our Tech & Data Science Labs empower legal teams to drive growth and velocity through:
▪ NoetherIP - an expert + AI-curated intellectual property analytics engine to support better R&D, patent prosecution, and protection decisions
▪ Centers of Excellence (COEs) for:
  - eDiscovery, document review, and cyber incident response
  - Intellectual asset management, including technology intelligence, open innovation, reverse engineering, patent portfolio management, prior art searches, and office action response
  - Custom product & data science development

Iota's COEs are uniquely positioned to operate across platforms, leveraging advanced tools and data science to deliver high-velocity outcomes while minimizing risk, time, and cost. All client engagements are handled by carefully selected teams and supported by ISO 27001 and GDPR-compliant infrastructure and processes, ensuring data protection and confidentiality. Headquartered in London, Iota Analytics has growing teams and labs across India.

Why Join Iota Analytics?
- ISO-certified and officially recognized as a Great Place to Work
- Competitive compensation with ample learning opportunities
- 5-day work week (Monday to Friday)
- Inclusive and growth-focused work environment
- Comprehensive benefits package, including:
  - Employer-paid medical insurance for self, spouse, and two children
  - Personal accident and term life insurance
  - Generous paid vacation, public holidays, and sick leave
  - Parental leave for new parents
  - Employee Assistance Program (EAP) offering confidential support services
  - Retirement benefits including Provident Fund and Gratuity

Be a part of a forward-thinking company where your leadership makes an impact. Apply today to join the Iota Analytics team!

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹50,326.54 per month
Benefits:
- Health insurance
- Paid sick time
- Provident Fund
Schedule:
- Day shift
- Monday to Friday
- Morning shift
Application Question(s):
- Total experience in Python?
- Do you have experience with NumPy, pandas, and Flask?
- Current location, and are you comfortable with the Mohali location?
- Current salary?
- Expected salary?
- Notice period?
Work Location: In person
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mandatory skills (8+ years of experience in ETL development, with 4+ years of AWS PySpark scripting):
1. Experience deploying and running AWS-based data solutions using services or products such as S3, Lambda, SNS, and Step Functions. Sound knowledge of AWS services is a must.
2. Strong in PySpark (see the sketch below).
3. Hands-on working knowledge of Python packages like NumPy, pandas, etc.
4. Able to work as an individual contributor.
5. Good to have: familiarity with metadata management, data lineage, and principles of data governance.

Good to have:
1. Experience processing large sets of data transformations, both semi-structured and structured.
2. Experience building a data lake and configuring delta tables.
3. Good experience with compute and cost optimization.
4. Understanding of the environment and use case, and readiness to build holistic data integration frameworks.
5. Good experience with MWAA (Airflow orchestration).

Soft skills:
1. Good communication for interacting with IT stakeholders and the business.
2. Understanding of the pain points to deliver against.
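A short PySpark sketch of the kind of S3-based ETL transformation these mandatory skills describe; the bucket paths, schema, and aggregation logic are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Paths and column names are invented; a real job would pull
# connection details and schemas from configuration.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Partitioned write back to the lake for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)
```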
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview
At Motorola Solutions, we believe that everything starts with our people. We're a global close-knit community, united by the relentless pursuit to help keep people safer everywhere. Our critical communications, video security and command center technologies support public safety agencies and enterprises alike, enabling the coordination that's critical for safer communities, safer schools, safer hospitals and safer businesses. Connect with a career that matters, and help us build a safer future.

Department Overview
The Cloud Platform Engineering team is responsible for the development and operations of critical cloud infrastructure, reliability, security and business operational services, in support of Motorola Solutions' public and hybrid cloud-based Software as a Service (SaaS) solutions for public safety customers. This team is part of Motorola Solutions' Software Enterprise division, which offers secure, reliable and efficient team communications, workflow and operational intelligence solutions for mission-critical public safety and enterprise markets throughout the world. Our services leverage cloud computing infrastructure on Azure, AWS and GCP to build at scale.

Job Description
- Develop and maintain ETL pipelines using Python, NumPy, pandas, PySpark, and Apache Airflow (see the sketch below).
- Design and implement ETL solutions for reporting purposes.
- Apply server-side development skills such as multithreading, asynchronous IO, and databases.
- Bring knowledge of Azure DevOps and GitHub.
- Work with large-scale data processing and transformation workflows.
- Optimize and enhance ETL performance and scalability.
- Collaborate with data engineers and business teams to ensure efficient data flow.
- Troubleshoot and debug ETL-related issues to ensure data integrity and reliability.

As a software engineer on this team, you will be a key contributor to platform development activities. Our teams are developing services, tools, and processes to support other Motorola Solutions engineering teams as well as deliver solutions to our customers. You will be working on a high-velocity, results-oriented team that leverages cutting-edge technologies and techniques. The right individual will be motivated and will have a passion for automation, deployment processes and enabling innovation. Your efforts will help to shape the engineering culture and best practices across Motorola Solutions' Software Enterprise organization.

Basic Requirements
- 3+ years of Python experience, with 2+ years dedicated to Python ETL development.
- Proficiency in PySpark, Apache Airflow, NumPy, and pandas.
- Experience working with SQL and BigQuery.
- Strong problem-solving skills and the ability to work independently.
- Experience in cloud-based ETL solutions (AWS, GCP, Azure).
- Knowledge of big data technologies like Hadoop, Spark, or Kafka.

Travel Requirements: None
Relocation Provided: None
Position Type: Experienced
Referral Payment Plan: Yes

EEO Statement
Motorola Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion or belief, sex, sexual orientation, gender identity, national origin, disability, veteran status or any other legally-protected characteristic. We are proud of our people-first and community-focused culture, empowering every Motorolan to be their most authentic self and to do their best work to deliver on the promise of a safer world.
If you'd like to join our team but feel that you don't quite meet all of the preferred skills, we'd still love to hear why you think you'd be a great addition to our team.
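To illustrate the Python/Airflow ETL work in the job description above, a minimal two-task DAG sketch; the task bodies, file paths, and daily schedule are invented, and it assumes Airflow 2.4+ (where the `schedule` argument replaced `schedule_interval`):

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Stand-in extract step: a real task would pull from a source system.
    pd.DataFrame({"id": [1, 2], "value": [10, 20]}).to_csv("/tmp/raw.csv", index=False)

def transform():
    # Stand-in transform step reading the upstream task's output.
    df = pd.read_csv("/tmp/raw.csv")
    df["value_doubled"] = df["value"] * 2
    df.to_csv("/tmp/clean.csv", index=False)

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```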
Posted 2 weeks ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview
At Motorola Solutions, we believe that everything starts with our people. We're a global close-knit community, united by the relentless pursuit to help keep people safer everywhere. Our critical communications, video security and command center technologies support public safety agencies and enterprises alike, enabling the coordination that's critical for safer communities, safer schools, safer hospitals and safer businesses. Connect with a career that matters, and help us build a safer future.

Department Overview
The Cloud Platform Engineering team is responsible for the development and operations of critical cloud infrastructure, reliability, security and business operational services, in support of Motorola Solutions' public and hybrid cloud-based Software as a Service (SaaS) solutions for public safety customers. This team is part of Motorola Solutions' Software Enterprise division, which offers secure, reliable and efficient team communications, workflow and operational intelligence solutions for mission-critical public safety and enterprise markets throughout the world. Our services leverage cloud computing infrastructure on Azure, AWS and GCP to build at scale.

Job Description
- Develop and maintain ETL pipelines using Python, NumPy, pandas, PySpark, and Apache Airflow.
- Design and implement ETL solutions for reporting purposes.
- Apply server-side development skills such as multithreading, asynchronous IO, and databases.
- Bring knowledge of Azure DevOps and GitHub.
- Work with large-scale data processing and transformation workflows.
- Optimize and enhance ETL performance and scalability.
- Collaborate with data engineers and business teams to ensure efficient data flow.
- Troubleshoot and debug ETL-related issues to ensure data integrity and reliability.

As a software engineer on this team, you will be a key contributor to platform development activities. Our teams are developing services, tools, and processes to support other Motorola Solutions' engineering teams as well as deliver solutions to our customers. You will be working on a high-velocity, results-oriented team that leverages cutting-edge technologies and techniques. The right individual will be motivated and will have a passion for automation, deployment processes and enabling innovation. Your efforts will help to shape the engineering culture and best practices across Motorola Solutions' Software Enterprise organization.

Basic Requirements
- 0-2 years of Python experience, with a good understanding of Python ETL development.
- Proficiency in PySpark, Apache Airflow, NumPy, and pandas.
- Proficiency in working with SQL and BigQuery.
- Strong problem-solving skills and the ability to work independently.
- Proficiency in cloud-based ETL solutions (AWS, GCP, Azure).
- Knowledge of big data technologies like Hadoop, Spark, or Kafka.

Travel Requirements: None
Relocation Provided: None
Position Type: Experienced
Referral Payment Plan: Yes

EEO Statement
Motorola Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion or belief, sex, sexual orientation, gender identity, national origin, disability, veteran status or any other legally-protected characteristic. We are proud of our people-first and community-focused culture, empowering every Motorolan to be their most authentic self and to do their best work to deliver on the promise of a safer world.
If you'd like to join our team but feel that you don't quite meet all of the preferred skills, we'd still love to hear why you think you'd be a great addition to our team.
Posted 2 weeks ago
50.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Opportunity
Job Type: Permanent
Application Deadline: 16 June 2025

Job Description
Title: Test Analyst
Department: ISS Delivery - Development - Gurgaon
Location: GGN
Level: 2

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you're part of something bigger.

About Your Team
The Investment Solutions Services (ISS) delivery team provides systems development, implementation and support services for FIL's global investment management businesses across the asset management lifecycle. We support fund managers, research analysts, traders and investment services operations in all of FIL's international locations, including London, Hong Kong, and Tokyo.

About Your Role
You will join as a Test Analyst in the QE chapter and be responsible for executing testing activities for all applications under the Investment Risk & Attribution team based out of India. Here are the expectations, and roughly how a day in the job will look:
- Understand business needs and analyse requirements and user stories to carry out different testing activities.
- Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase.
- Create and execute functional as well as automated test cases on different test environments to validate functionality.
- Log defects in a defect tracker and work with PMs and developers to prioritise and resolve them.
- Develop and maintain automation scripts, preferably using the Python stack (see the sketch below).
- Apply an intermediate-level understanding of relational databases.
- Document test cases, results, and any other issues encountered during testing.
- Attend team meetings and stand-ups to discuss progress, risks and any issues that affect project deliveries.
- Stay updated with new tools, techniques and industry trends.

About You
- Seasoned software test analyst with 2+ years of hands-on experience.
- Hands-on experience automating web and backend tests using open-source tools (Playwright, pytest, requests, NumPy, pandas).
- Proficiency in writing and understanding database queries in various databases (Oracle, AWS RDS).
- Good understanding of cloud platforms (AWS, Azure).
- Finance/investment domain experience is preferable.
- Strong logical reasoning and problem-solving skills.
- Preferred programming languages: Python and Java.
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI) for automating deployment and testing workflows.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work - finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
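A small pytest sketch of the backend API automation this role describes; the endpoints below use a public echo service as a stand-in for real application URLs:

```python
# test_api.py -- run with `pytest test_api.py`
import pytest
import requests

BASE_URL = "https://httpbin.org"  # public echo service used as a stand-in

def test_get_returns_expected_payload():
    # The service echoes query parameters back, so we can assert on them.
    resp = requests.get(f"{BASE_URL}/get", params={"fund": "ABC"}, timeout=10)
    assert resp.status_code == 200
    assert resp.json()["args"]["fund"] == "ABC"

@pytest.mark.parametrize("status", [200, 404])
def test_status_codes(status):
    # Parametrized check across multiple expected response codes.
    resp = requests.get(f"{BASE_URL}/status/{status}", timeout=10)
    assert resp.status_code == status
```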
Posted 2 weeks ago
1.5 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company: Whatmaction
Location: Ahmedabad (On-site)

About the Opportunity:
Are you a fresh graduate ready to take your skills to the next level with real AI and ML work, not just tutorials? Whatmaction is offering an exclusive opportunity to join us as an AI/ML Solution Engineer Intern, where you will be part of an elite training and development program designed to shape you into a high-performing tech professional. If you're ready to build AI models from scratch and take on challenges like a tech soldier, this is for you.

What You'll Experience:
- 8 months of intense training: think of it as a military-grade bootcamp for your brain, with hands-on learning, projects, deadlines, and AI model development.
- Real-world AI/ML projects: work closely with senior engineers to build production-grade AI/ML solutions.
- Deep tech exposure: resume parsers, NLP applications, AI chatbots, ML-based automation systems, and more (see the sketch below).
- Post-training commitment: a 1.5-year bond after training, to work with us on exciting projects and gain incredible industry experience.

Key Skills & Technologies:
- Languages: Python (strong foundation required)
- Libraries & Tools: spaCy, scikit-learn, TensorFlow/PyTorch (preferred), pandas, NumPy

Core Concepts:
- Machine Learning Algorithms
- Natural Language Processing (NLP)
- Data Preprocessing & Feature Engineering
- Model Evaluation & Tuning

Bonus Skills (Good to Have):
- Exposure to OpenAI APIs
- REST APIs and backend basics
- Git & version control

Who Should Apply?
- Fresh graduates (2023-2025 pass-outs) from B.Tech/MCA/BCA or similar backgrounds.
- You have strong problem-solving skills and a passion for learning.
- You are committed to completing an intensive training program and the post-training bond.
- You're not just looking for a job; you want to build things with impact.

Are You Ready for the Challenge?
This is not a regular internship. It's a transformative opportunity to build a real AI career from the ground up.
📩 Send your resume to: hr@whatmaction.com
📍 Location: Ahmedabad (office-based only)
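For a taste of the resume-parser and NLP work mentioned above, a minimal spaCy sketch; the model and text are placeholders, and it assumes the small English model has been downloaded:

```python
import spacy

# Assumes `python -m spacy download en_core_web_sm` has been run first.
nlp = spacy.load("en_core_web_sm")

resume_snippet = (
    "Jane Doe graduated from IIT Delhi in 2023 and interned at Acme Corp, "
    "building Python services."
)

doc = nlp(resume_snippet)
for ent in doc.ents:
    # Named entities (PERSON, ORG, DATE, ...) are a starting point for
    # extracting structured fields from unstructured resume text.
    print(ent.text, ent.label_)
```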
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: Quant Developer

Job Summary
As a Quant Developer at OptimusPrime Research, you will work closely with quantitative researchers and traders to design, implement, and optimize high-performance trading algorithms and analytical tools. You will play a key role in bridging the gap between research and production, ensuring that our strategies are robust, scalable, and efficient.

Key Responsibilities
• Collaborate with quantitative researchers to implement and optimize trading strategies and models.
• Develop and maintain high-performance, low-latency trading systems and infrastructure.
• Design and implement tools for data analysis, backtesting, and simulation of trading strategies.
• Work with large datasets to build and improve data pipelines for research and production.
• Ensure code quality, reliability, and scalability through rigorous testing and code reviews.
• Stay up to date with the latest technologies and methodologies in quantitative finance and software development.
• Troubleshoot and resolve issues in real-time trading environments.

Qualifications
- Bachelor's, Master's, or PhD in Computer Science, Mathematics, Physics, Engineering, or a related field.
- 2+ years of experience in software development.
- Strong programming skills in C++. Experience with Python libraries (e.g., NumPy, pandas, scikit-learn) is a plus.
- Experience with high-performance computing, parallel processing, and low-latency systems.
- Familiarity with financial markets, trading concepts, and quantitative finance.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced, dynamic environment.
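Although the production work here is C++, the posting lists Python/pandas as a plus, so here is a toy backtesting sketch of the kind of analytical tooling described; the price series is synthetic and the crossover strategy is purely illustrative:

```python
import numpy as np
import pandas as pd

# Synthetic price series; a real backtest would load market data instead.
rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

# Toy moving-average crossover strategy, purely for illustration.
fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)  # trade on the next bar

returns = prices.pct_change().fillna(0)
strategy_returns = position * returns

# Annualized Sharpe ratio, assuming 252 trading days per year.
sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()
print(f"annualized Sharpe (toy data): {sharpe:.2f}")
```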
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION
• Strong in Python with libraries such as polars, pandas, NumPy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies (see the sketch below)
• Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have experience with Vertex AI Matching Engine, or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation

GCP Tools Experience:
- ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform
- Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus)
- Storage: BigQuery, Cloud Storage, Firestore
- Processing: Dataproc (PySpark), Dataflow (batch & stream)
- Ingestion: Pub/Sub, Cloud Functions, Cloud Run
- Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run
- CI/CD & IaC: GitHub Actions, GitLab CI
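As a sketch of the embedding-based retrieval and ANN search expectation noted in the list above, a minimal FAISS example; the vectors are random stand-ins for learned two-tower user/item embeddings:

```python
import faiss
import numpy as np

# Random vectors stand in for learned two-tower item embeddings.
dim, n_items = 64, 10_000
rng = np.random.default_rng(0)
item_vecs = rng.normal(size=(n_items, dim)).astype("float32")
faiss.normalize_L2(item_vecs)  # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)  # exact search; ANN indexes swap in here
index.add(item_vecs)

user_vec = rng.normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(user_vec)

# Retrieve the top-10 candidates; a re-ranking model would score these next.
scores, ids = index.search(user_vec, 10)
print(ids[0], scores[0])
```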
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION
β’ Strong in Python, with experience in Jupyter notebooks and Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.
β’ Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment
β’ Must have: Hands-on experience with GCP services for ML & data science
β’ Must have: Experience with vector search and hybrid search techniques
β’ Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models
β’ Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy)
β’ Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation)
β’ Must have: Understanding of semantic vs. lexical search paradigms
β’ Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support)
β’ Should be proficient in SQL and BigQuery for analytics and feature generation
β’ Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
β’ Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions
β’ Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches
β’ Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval
β’ Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories
β’ Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems
β’ Should understand how to build end-to-end ML pipelines for search and ranking applications
β’ Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR; see the metrics sketch after this list)
β’ Should have exposure to CI/CD pipelines and model versioning practices

GCP Tools Experience:
ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform
Storage: BigQuery, Cloud Storage, Firestore
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Search: Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch
Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark)
CI/CD & IaC: GitLab/GitHub Actions
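For the evaluation-metric must-have above, here is a minimal sketch computing precision@k, nDCG@k, and MRR over one toy ranked list; the graded relevance judgments are invented, and only NumPy is assumed.

```python
# Offline search-relevance metrics over one ranked result list.
# `relevance` holds graded judgments for the documents in ranked order.
import numpy as np

relevance = np.array([3, 0, 2, 0, 1, 0, 0, 0, 0, 0])  # toy graded judgments
k = 5
rel_k = relevance[:k]

precision_at_k = np.mean(rel_k > 0)

# nDCG@k with the standard log2 rank discount and linear graded gains.
discounts = 1.0 / np.log2(np.arange(2, k + 2))
dcg = np.sum(rel_k * discounts)
ideal = np.sort(relevance)[::-1][:k]           # best possible ordering
idcg = np.sum(ideal * discounts)
ndcg_at_k = dcg / idcg if idcg > 0 else 0.0

# MRR: reciprocal rank of the first relevant document.
first_rel = np.flatnonzero(relevance > 0)
mrr = 1.0 / (first_rel[0] + 1) if first_rel.size else 0.0

print(f"precision@{k}={precision_at_k:.2f}, nDCG@{k}={ndcg_at_k:.3f}, MRR={mrr:.2f}")
```

In an offline evaluation framework these values would be averaged over a query set rather than computed for a single ranking.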
Posted 2 weeks ago
0.0 - 9.0 years
0 Lacs
Kalina, Mumbai, Maharashtra
On-site
Kalina, Mumbai, Maharashtra, India
Department: DIGITAL
Job posted on: Jun 02, 2025
Employee Type: Permanent
Experience range (Years): 4 years - 9 years

Key Responsibilities:
β’ Lead the design, development, and deployment of predictive models, optimization algorithms, and statistical analyses.
β’ Translate complex business problems into data science solutions using machine learning, deep learning, NLP, and statistical modeling.
β’ Collaborate with product managers, engineers, and business stakeholders to understand objectives and deliver actionable insights.
β’ Mentor junior data scientists and analysts, and drive best practices in experimentation, coding standards, and model governance.
β’ Conduct exploratory data analysis, feature engineering, and model validation to ensure accuracy and robustness (a minimal sketch of this loop follows the posting).
β’ Present findings and recommendations clearly to both technical and non-technical stakeholders.
β’ Stay up to date with the latest trends in AI/ML, and recommend new tools or techniques as appropriate.
β’ Contribute to the development of data science pipelines and MLOps practices for scalable and reproducible model deployment.

Qualifications:
β’ Master's or PhD in Computer Science, Statistics, Mathematics, Data Science, or a related field.
β’ 8+ years of experience in data science, analytics, or machine learning roles.
β’ Strong proficiency in Python (pandas, scikit-learn, NumPy, TensorFlow/PyTorch) and SQL.
β’ Experience with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).
β’ Solid understanding of data structures, algorithms, and statistical concepts.
β’ Hands-on experience in NLP, time series forecasting, recommender systems, or computer vision is a plus.
β’ Strong communication and storytelling skills, with the ability to influence decision-making.
β’ Experience with version control (Git), CI/CD pipelines, and ML lifecycle management tools (MLflow, Airflow, etc.) is desirable.

Preferred Qualifications:
β’ Prior experience in industries like finance, healthcare, retail, telecom, or manufacturing.
β’ Experience working with large-scale data processing frameworks (e.g., Spark, Hadoop).
β’ Knowledge of generative AI or LLM-based applications is a bonus.
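A minimal sketch of the feature-engineering, training, and validation loop described above, using scikit-learn on synthetic data; the column names, the toy target, and the model choice are illustrative assumptions, not this employer's actual workflow.

```python
# Toy end-to-end example: preprocessing and a model in one Pipeline,
# validated with cross-validation for robustness.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "spend": rng.gamma(2.0, 100.0, 1000),                    # numeric feature
    "tenure_months": rng.integers(1, 120, 1000),             # numeric feature
    "segment": rng.choice(["retail", "sme", "corp"], 1000),  # categorical feature
})
y = (df["spend"] / df["tenure_months"] > 5).astype(int)      # toy binary target

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["spend", "tenure_months"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
])
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])

scores = cross_val_score(model, df, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping preprocessing inside the Pipeline is the governance point: cross-validation then refits the scaler and encoder per fold, so no information leaks from validation data into training.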
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Nagpur, Maharashtra
On-site
Job Information
Date Opened: 06/02/2025
Job Type: Full time
Industry: Education
Work Experience: 1-3 years
Salary: βΉ20,000 - βΉ30,000
City: Nagpur
State/Province: Maharashtra
Country: India
Zip/Postal Code: 440002

About Us
Fireblaze AI School is a part of Fireblaze Technologies, which was started in April 2018 with a vision to up-skill and train people in emerging technologies.
Mission Statement: "To Provide Measurable & Transformational Value To Learners' Career"
Vision Statement: "To Be The Most Successful & Respected Job-Oriented Training Provider Globally."
We focus on creating a large digital impact, so a strong presence on digital platforms is a must-have for us.

Job Description
Deliver engaging classroom and/or online training sessions on topics including:
β’ Python for Data Science
β’ Data Analytics using Excel and SQL
β’ Statistics and Probability
β’ Machine Learning and Deep Learning
β’ Data Visualization using Power BI / Tableau
Additional responsibilities:
β’ Create and update course materials, projects, assignments, and quizzes.
β’ Provide hands-on training and real-world project guidance.
β’ Evaluate student performance, provide constructive feedback, and track progress.
β’ Stay updated with the latest trends, tools, and technologies in Data Science.
β’ Mentor students during capstone projects and industry case studies.
β’ Coordinate with the academic and operations teams for batch planning and feedback.
β’ Assist with the development of new courses and curriculum as needed.

Requirements
β’ Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
β’ Proficiency in Python, SQL, and data handling libraries (Pandas, NumPy, etc.).
β’ Hands-on knowledge of machine learning algorithms and frameworks like Scikit-learn, TensorFlow, or Keras.
β’ Experience with visualization tools like Power BI, Tableau, or Matplotlib/Seaborn.
β’ Strong communication, presentation, and mentoring skills.
β’ Prior teaching/training experience is a strong advantage.
β’ Certification in Data Science or Machine Learning (preferred but not mandatory).
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Python Engineer to help design, develop, and maintain software applications. The ideal candidate will have experience with Python, FastAPI, and PostgreSQL, as well as a strong understanding of web development concepts. You will build high-performance, scalable solutions for our clients and manage the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to requests from the front-end.

Responsibilities
β’ Design and develop applications using FastAPI, Django, and/or Flask (a minimal FastAPI sketch follows this posting)
β’ Work with PostgreSQL, Oracle, and TimescaleDB to design and maintain databases
β’ Implement real-time data processing and storage solutions using Redis, RabbitMQ, and Kafka
β’ Utilize NumPy and Pandas to perform data analysis and manipulation tasks
β’ Integrate applications with Mongo and Influx databases to store and retrieve data
β’ Implement websockets for real-time data transfer and communication
β’ Use asyncio to write asynchronous code for improved performance
β’ Collaborate with cross-functional teams to identify and resolve technical issues
β’ Keep up to date with new technologies and programming languages

Qualifications
β’ Bachelor's or Master's degree in Computer Science or a related field
β’ Strong experience with Python and related technologies (FastAPI, PostgreSQL, TimescaleDB, Redis, RabbitMQ, Kafka, NumPy, Pandas, Mongo, Influx, Flask/Django, websockets, asyncio)
β’ Excellent understanding of software development concepts and data structures
β’ Strong problem-solving skills and the ability to think outside the box
β’ Excellent written and verbal communication skills
β’ Strong collaboration and teamwork skills
β’ A basic understanding of front-end technologies, since you will work with high-frequency real-time systems and integrate the front-end elements built by your co-workers into the application
β’ Ability to work independently and in a fast-paced environment
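A minimal sketch of the FastAPI-style server-side logic this role describes; the endpoints, the Reading model, and the in-memory store are hypothetical, a real service would use an async PostgreSQL driver instead of the simulated await, and Pydantic v2 is assumed for model_dump().

```python
# Minimal async FastAPI service: one write endpoint and one read endpoint.
# An in-memory dict stands in for PostgreSQL/TimescaleDB.
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_store: dict[int, dict] = {}  # placeholder for a real database

class Reading(BaseModel):
    sensor_id: int
    value: float

@app.post("/readings")
async def create_reading(reading: Reading) -> dict:
    # Simulate awaited I/O (a real handler would await an async DB driver here).
    await asyncio.sleep(0)
    _store[reading.sensor_id] = reading.model_dump()  # Pydantic v2 API
    return {"status": "ok"}

@app.get("/readings/{sensor_id}")
async def get_reading(sensor_id: int) -> dict:
    if sensor_id not in _store:
        raise HTTPException(status_code=404, detail="unknown sensor")
    return _store[sensor_id]

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```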
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Python Engineer to help design, develop, and maintain software applications. The ideal candidate will have experience with Python, FastAPI, and PostgreSQL, as well as a strong understanding of web development concepts. You will build high-performance, scalable solutions for our clients and manage the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to requests from the front-end.

Responsibilities
β’ Design and develop applications using FastAPI, Django, and/or Flask
β’ Work with PostgreSQL, Oracle, and TimescaleDB to design and maintain databases
β’ Implement real-time data processing and storage solutions using Redis, RabbitMQ, and Kafka
β’ Utilize NumPy and Pandas to perform data analysis and manipulation tasks
β’ Integrate applications with Mongo and Influx databases to store and retrieve data
β’ Implement websockets for real-time data transfer and communication (a minimal websocket sketch follows this posting)
β’ Use asyncio to write asynchronous code for improved performance
β’ Collaborate with cross-functional teams to identify and resolve technical issues
β’ Keep up to date with new technologies and programming languages

Qualifications
β’ Bachelor's or Master's degree in Computer Science or a related field
β’ Strong experience with Python and related technologies (FastAPI, PostgreSQL, TimescaleDB, Redis, RabbitMQ, Kafka, NumPy, Pandas, Mongo, Influx, Flask/Django, websockets, asyncio)
β’ Excellent understanding of software development concepts and data structures
β’ Strong problem-solving skills and the ability to think outside the box
β’ Excellent written and verbal communication skills
β’ Strong collaboration and teamwork skills
β’ A basic understanding of front-end technologies, since you will work with high-frequency real-time systems and integrate the front-end elements built by your co-workers into the application
β’ Ability to work independently and in a fast-paced environment
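Since this posting also lists websockets and asyncio, here is a minimal sketch of a FastAPI websocket endpoint streaming simulated real-time ticks; the route name, payload shape, and one-second cadence are illustrative, and a production system would consume from Kafka or RabbitMQ rather than generating random prices.

```python
# Minimal FastAPI websocket that streams simulated real-time price ticks.
import asyncio
import random
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws/ticks")
async def stream_ticks(ws: WebSocket) -> None:
    await ws.accept()
    try:
        while True:
            # A real system would consume ticks from Kafka/RabbitMQ here.
            tick = {"price": round(100 + random.uniform(-1, 1), 2)}
            await ws.send_json(tick)
            await asyncio.sleep(1.0)  # illustrative one-second cadence
    except WebSocketDisconnect:
        pass  # client closed the connection; nothing to clean up in this sketch
```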
Posted 2 weeks ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
Remote
π Location: Bhubaneswar / Remote
π Internship Certificate: Provided upon successful completion
π° Stipend: Up to βΉ3,000/month (performance-based)
β³ Duration: 2β3 months (can be extended based on performance)

π― Who Can Apply?
We are looking for passionate tech enthusiasts who want to gain hands-on experience in backend and full-stack development with real-world financial and AI-based applications.
Eligibility:
β Freshers or students pursuing B.Tech / B.E / M.Tech, BCA / MCA, or B.Sc (IT) or equivalent
β Strong interest in web backend/hybrid development
β Basic understanding of Python, SQL, and REST APIs
β Willingness to learn and explore real stock market data

π§ Key Responsibilities
β’ Backend Development: Work with Python, Pandas, NumPy, PostgreSQL/SQL, and Node.js
β’ Data Management: Clean, analyze, and manage stock market datasets (a minimal cleaning sketch follows this posting)
β’ API Development: Build and integrate REST APIs for frontend consumption
β’ Frontend Collaboration: Coordinate with frontend developers using React, HTML, CSS, JS
β’ Cloud Deployment: Assist in deploying backend services to cloud environments
β’ AI/ML Integration: Support AI-driven features for financial apps (training, models, etc.)

π Learning Opportunities
β’ Real-world exposure to financial data APIs & trading systems
β’ Collaboration with full-stack developers on scalable apps
β’ Introduction to containerization and CI/CD (Docker, GitHub Actions, etc.)
β’ Experience with cloud environments (AWS/GCP/Oracle Free Tier)
β’ Contribution to AI-based fintech tools

π‘ Application Instructions
π Share your GitHub profile (with relevant code or sample projects).
π If you don't have one, complete this optional sample task: "Build a webpage listing companies on the left panel. When a company is clicked, show relevant stock price charts. Use any sample dataset or mock stock data."
This helps us evaluate your practical understanding and creativity.

π Perks & Benefits
β Internship Completion Certificate & Letter of Recommendation
β Hands-on experience in finance-tech and AI development
β Exposure to industry-standard tools & code practices
β Flexible work hours (100% remote)

π Ready to kickstart your backend journey? Apply now and start building applications that matter.
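For the stock-data cleaning responsibility above, a minimal pandas sketch on mock data; the ticker, the column names, and the forward-fill choice are illustrative assumptions.

```python
# Toy stock-data cleaning: parse dates, drop duplicate rows, fill gaps,
# and compute a daily return column. All data here is mock.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "date": ["2025-01-01", "2025-01-02", "2025-01-02", "2025-01-03", "2025-01-06"],
    "ticker": ["DEMO"] * 5,
    "close": [100.0, 101.5, 101.5, np.nan, 103.2],
})

clean = (
    raw.assign(date=pd.to_datetime(raw["date"]))
       .drop_duplicates(subset=["date", "ticker"])       # remove repeated rows
       .sort_values("date")
       .assign(close=lambda d: d["close"].ffill())       # forward-fill missing closes
       .assign(daily_return=lambda d: d["close"].pct_change())
)
print(clean)
```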
Posted 2 weeks ago