Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Scientist – Recommender Systems
Location: Bengaluru (Hybrid)

Role Summary
We’re seeking a skilled Data Scientist with deep expertise in recommender systems to design and deploy scalable personalization solutions. This role blends research, experimentation, and production-level implementation, with a focus on content-based and multi-modal recommendations using deep learning and cloud-native tools.

Responsibilities
- Research, prototype, and implement recommendation models: two-tower, multi-tower, and cross-encoder architectures
- Utilize text/image embeddings (CLIP, ViT, BERT) for content-based retrieval and matching
- Conduct semantic similarity analysis and deploy vector-based retrieval systems (FAISS, Qdrant, ScaNN)
- Perform large-scale data preparation and feature engineering with Spark/PySpark and Dataproc
- Build ML pipelines using Vertex AI, Kubeflow, and orchestration on GKE
- Evaluate models using recommender metrics (nDCG, Recall@K, HitRate, MAP) and offline frameworks
- Drive model performance through A/B testing and real-time serving via Cloud Run or Vertex AI
- Address cold-start challenges with metadata and multi-modal input
- Collaborate with engineering on CI/CD, monitoring, and embedding lifecycle management
- Stay current with trends in LLM-powered ranking, hybrid retrieval, and personalization

Required Skills
- Python proficiency with pandas, polars, numpy, scikit-learn, TensorFlow, PyTorch, transformers
- Hands-on experience with deep learning frameworks for recommender systems
- Solid grounding in embedding retrieval strategies and approximate nearest neighbor search
- GCP-native workflows: Vertex AI, Dataproc, Dataflow, Pub/Sub, Cloud Functions, Cloud Run
- Strong foundation in semantic search, user modeling, and personalization techniques
- Familiarity with MLOps best practices: CI/CD, infrastructure automation, monitoring
- Experience deploying models in production using containerized environments and Kubernetes

Nice to Have
- Ranking models knowledge: DLRM, XGBoost, LightGBM
- Multi-modal retrieval experience (text + image + tabular features)
- Exposure to LLM-powered personalization or hybrid recommendation systems
- Understanding of real-time model updates and streaming ingestion
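As an illustration of the offline evaluation metrics this posting lists (nDCG, Recall@K), here is a minimal sketch in plain Python; the function names and toy item IDs are illustrative, not part of the posting's stack:

```python
import math

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the relevant items that appear in the top-k ranking."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

def ndcg_at_k(ranked_ids, relevant_ids, k):
    """Binary-relevance nDCG: DCG of the ranking divided by the ideal DCG."""
    relevant = set(relevant_ids)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_ids[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal

# toy example: the model ranked items 10..14; the user interacted with 10 and 13
ranking = [10, 11, 12, 13, 14]
relevant = [10, 13]
print(recall_at_k(ranking, relevant, 3))  # 0.5: only item 10 is in the top-3
print(round(ndcg_at_k(ranking, relevant, 5), 3))  # 0.877
```

In practice these would be computed per user over a held-out interaction set and averaged; libraries such as TensorFlow Recommenders ship equivalent implementations.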
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary
A background in Retail will be a big plus.
- Advanced Python: Proven experience with core libraries like pandas, numpy, scikit-learn, and matplotlib, plus advanced tools like statsmodels, xgboost, lightgbm, prophet, and deep learning-based forecasting (e.g., NeuralForecast).
- Advanced Forecasting Techniques: Experience with ensemble models, hierarchical forecasting, probabilistic forecasting, and multivariate time series.
- Pricing Models: Background in price elasticity modeling; experience in optimization is good to have (at least one of the 2 resources).
- Model Evaluation: Familiarity with time-series cross-validation, backtesting, and metrics such as MAE, MAPE, RMSE, SMAPE, and prediction intervals.
- SQL Proficiency: Ability to query and manage data from relational databases.
- Version Control: Comfortable working with Git/GitHub for collaboration and code management.
Mandatory skills: Python and advanced forecasting skills with models like ARIMA, etc.
Shift: Second shift
Location: Bengaluru
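The model-evaluation requirement above mentions backtesting and metrics such as MAPE and RMSE. A minimal expanding-window backtest of a naive last-value forecast can be sketched in plain Python (the sales numbers are made up for illustration; a real pipeline would backtest ARIMA/prophet models instead of the naive baseline):

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def backtest_naive(series, n_test):
    """Expanding-window backtest of a naive forecast: for each of the last
    n_test points, forecast one step ahead by carrying the last value forward."""
    actual, forecast = [], []
    for i in range(len(series) - n_test, len(series)):
        forecast.append(series[i - 1])  # naive one-step-ahead forecast
        actual.append(series[i])
    return mape(actual, forecast), rmse(actual, forecast)

sales = [100, 102, 101, 105, 107, 110, 108, 112]
m, r = backtest_naive(sales, 3)
print(round(m, 2), round(r, 2))  # ~2.72 (MAPE %) and ~3.11 (RMSE)
```

The same loop generalizes to rolling-origin cross-validation: refit the model on each expanding window and collect the one-step errors.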
Posted 3 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Work closely with a lead data scientist and help improve business processes leveraging data science tools & techniques in the natural language and machine learning domains
- Understand business requirements and convert them into analytical solutions
- Analyze large amounts of information to discover trends and patterns
- Develop data science algorithms & generate actionable insights as per business needs, and work closely with cross-capability teams throughout the solution development lifecycle, from design to implementation & monitoring
- Understanding of Azure Cloud technologies such as Azure Data Factory, Azure SQL, Azure Blob Storage, and Azure Databricks
- Manage day-to-day development tasks and stakeholder communication
- Document work appropriately for production support and transition readiness
- Acquire & enhance understanding of the US Healthcare domain
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Master's in Mathematics, Computer Science, Statistics or Machine Learning, or equivalent
- 12+ months of hands-on experience solving real-world business problems leveraging data science tools
- Working knowledge of NLP skills (text classification, entity recognition, transfer learning concepts, etc.)
- Sound knowledge of unstructured text data handling & manipulation
- Programming: Python/SQL knowledge; R/Python with PyTorch/TensorFlow; knows how to leverage pre-trained models (via transfer learning)
- Good theoretical knowledge of some or most machine learning techniques like Random Forest, Gradient Boosting Machine, XGBoost, CatBoost, etc.
- Proven excellent written and oral communication skills
- Proven excellent problem-solving & storytelling abilities with an analytical mindset
- Proven excellent interpersonal and team skills

Preferred Qualifications
- Understanding & experience of the Spark NLP framework
- Understanding of responsible use of AI, model fairness, bias assessment and its risk mitigations
- Exposure to RAG, LangChain, vector DBs

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
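The qualifications above mention text classification. As a toy stand-in (not Optum's actual stack, which the posting says involves PyTorch/TensorFlow and pre-trained models), a multinomial Naive Bayes classifier over whitespace tokens can be sketched in plain Python; the claim-status example data is invented for illustration:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes: count tokens per class, with vocab for smoothing."""
    word_counts = defaultdict(Counter)   # label -> token counts
    label_counts = Counter(labels)
    vocab = set()
    for doc, label in zip(docs, labels):
        tokens = doc.lower().split()
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict_nb(model, doc):
    """Pick the label maximizing log prior + add-one-smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    n_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / n_docs)
        total = sum(word_counts[label].values())
        for tok in doc.lower().split():
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = ["claim denied by payer", "claim approved and paid",
        "denied due to missing code", "paid in full by insurer"]
labels = ["denied", "approved", "denied", "approved"]
model = train_nb(docs, labels)
print(predict_nb(model, "claim was denied"))  # denied
```

A production version would swap this for a fine-tuned transformer, but the train/predict split and smoothed likelihoods illustrate the same classification workflow.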
Posted 3 weeks ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0150071
Date posted: 07/07/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC we Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
About the Role
We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda’s technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda’s global mission of delivering transformative therapies to patients worldwide.

How You Will Contribute
- Architect scalable and secure machine learning systems that integrate with Takeda’s enterprise platforms, including R&D, manufacturing, and clinical trial operations.
- Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks.
- Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
- Collaborate with Takeda’s Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
- Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
- Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda’s ethical AI standards in system deployment.
- Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
- Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
- Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
- Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.

Skills and Qualifications
Education: Bachelor's, Master's, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.
Experience:
- At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
- Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.
Technical Skills:
- Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
- Strong programming skills in Python and experience with machine learning libraries such as scikit-learn, XGBoost, LightGBM, and TensorFlow.
- Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
- Familiarity with Databricks tools and services for scalable data workflows and model management.
- Expertise in building robust observability and monitoring systems to track ML systems in production.
- Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
- Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).
Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
- Leadership ability to mentor teams and drive architecture-standardization initiatives.
- Ability to manage projects independently and advocate for AI/ML adoption across Takeda.
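One of the responsibilities above is tracking model drift in production. A common drift check is the Population Stability Index (PSI) between a baseline score sample and a production sample; the sketch below is plain-Python illustration only (the 0.2 alert threshold is a common rule of thumb, and real monitoring would feed such a metric into Prometheus/Grafana as the posting notes):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample ('expected')
    and a production sample ('actual'); > 0.2 commonly flags significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch production values above the baseline max

    def frac(sample, a, b):
        n = sum(1 for x in sample if a <= x < b)
        return max(n / len(sample), 1e-6)  # floor to avoid log(0)

    return sum(
        (frac(actual, a, b) - frac(expected, a, b))
        * math.log(frac(actual, a, b) / frac(expected, a, b))
        for a, b in zip(edges, edges[1:])
    )

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # scores at training time
shifted  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]  # scores in production
print(psi(baseline, baseline) < 0.01)  # True: identical populations, no drift
print(psi(baseline, shifted) > 0.2)    # True: the distribution has shifted
```

The same binning approach applies to input features as well as model scores, which is how drift alerts are typically wired into retraining pipelines.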
Preferred Qualifications
- Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
- Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
- Publications or contributions to AI research or MLOps tooling communities.

WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive Salary + Performance Annual Bonus
- Flexible work environment, including hybrid working
- Comprehensive Healthcare Insurance Plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Health & Wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 5 days of leave every year for Voluntary Service, in addition to Humanitarian Leaves
- Broad variety of learning platforms
- Diversity, Equity, and Inclusion Programs
- No Meeting Days
- Reimbursements: Home Internet & Mobile Phone
- Employee Referral Program
- Leaves: Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day.
As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization. #Li-Hybrid Locations IND - Bengaluru Worker Type Employee Worker Sub-Type Regular Time Type Full time
Posted 3 weeks ago
5.0 - 10.0 years
0 - 0 Lacs
Hyderabad
Remote
AI / Machine Learning / Data Science - part-time work from home (anywhere in the world)

Warm greetings from Excel Online Classes,

We are a team of industry professionals running an institute that provides comprehensive online IT training, technical support, and development services. We are currently seeking AI / Machine Learning / Data Science experts who are passionate about technology and can collaborate with us in their free time. If you're enthusiastic, committed, and ready to share your expertise, we would love to work with you!

We're hiring for the following services:
- Online Training
- Online Development
- Online Technical Support
- Conducting Online Interviews
- Corporate Training
- Proof of Concept (POC) Projects
- Research & Development (R&D)

We are looking for immediate joiners who can contribute in any of the above areas. If you're interested, please fill out the form using the link below:
https://docs.google.com/forms/d/e/1FAIpQLSdvut0tujgMbBIQSc6M7qldtcjv8oL1ob5lBc2AlJNRAgD3Cw/viewform

We also welcome referrals! If you know someone - friends, colleagues, or connections - who might be interested in:
- Teaching, developing, or providing tech support online
- Sharing domain knowledge (e.g., Banking, Insurance, etc.)
- Teaching foreign languages (e.g., Spanish, German, etc.)
- Learning or brushing up on technologies to clear interviews quickly
- Upskilling in new tools or frameworks for career growth
please feel free to forward this opportunity to them.

For any queries, feel free to contact us at: excel.onlineclasses@gmail.com

Thank you & Best Regards,
Team Excel Online Classes
Posted 3 weeks ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path.

Salary: 30K CTC to 35K CTC

Job Description: Faculty members guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential.

Key Responsibilities
- Deliver high-quality lectures on AI, Machine Learning, and Data Science.
- Design and update course materials, assignments, and projects.
- Guide students on hands-on projects, real-world applications, and research work.
- Provide mentorship and support for student learning and career development.
- Stay updated with the latest trends and advancements in AI/ML and Data Science.
- Conduct assessments, evaluate student progress, and provide feedback.
- Participate in curriculum development and improvements.

Skills & Tools
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (must), Pandas, NumPy, Excel.
- ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.
For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in
Posted 3 weeks ago
14.0 - 20.0 years
35 - 50 Lacs
Pune
Hybrid
Designation/Role: Data Architect - Consulting
Experience: 14+ years
Location: Pune, India

We are looking for a Technology and Data Architect to lead consulting engagements, including the design, development, and deployment of scalable data architectures and AI/ML solutions. The ideal candidate will be responsible for shaping enterprise-level data strategies and roadmaps and overseeing the technical delivery of end-to-end machine learning pipelines, from data ingestion to model serving.

Technology Stack (hands-on proficiency with):
- Data Tools: Spark, Kafka, Airflow, dbt, Snowflake, BigQuery, Databricks
- ML Frameworks: TensorFlow, PyTorch, scikit-learn, XGBoost
- MLOps: MLflow, Kubeflow, SageMaker, Vertex AI, Azure ML
- Languages: Python, SQL, optionally Scala or Java

Role:
- Strong knowledge of cloud-native data architectures (data lakes, data warehouses, lakehouses), ETL/ELT pipelines, data integration, and data quality strategies.
- Knowledge of AI/ML: feature engineering, training, validation, model management, and inference.
- Knowledge of SQL, NoSQL, and vector databases, with proven capabilities in designing and building database schemas and solutions.
- Experience deploying LLMs or generative AI models in production (e.g., LangChain, Hugging Face, OpenAI API).
- Strong knowledge of app development, microservices, and event-driven architecture, including IDEs, the Java runtime, interservice communication, logging/monitoring, authorization, bounded-context-based modeling, and domain-driven design.
- Knowledge of enterprise architecture, decision making, and build-vs-buy comparisons.
- Experience in customer presentations, drafting consulting decks, presenting organization capabilities to external clients, and building architecture for consulting assignments.
- Experience in As-Is and Gap Analysis with an eye for detail.
- Experience in technology consulting: ability to articulate customer issues, explain technical debt, and propose modern solutions.
- Should be able to provide succinct diagrams and workflows as needed and be able to lead large and complex business meetings involving multiple stakeholders.
- Strong knowledge of API management and API integration design patterns.
- Ability to conduct and drive workshops.
- Knowledge of deployment and branching strategies.

Good to have:
- Knowledge of solution architecture and design patterns
- Knowledge of systems, infrastructure, networking, protocols, API-led architecture, state machines, and workflow frameworks
- Certifications such as AWS Solutions Architect, Azure Architect, TOGAF or similar
- Strong knowledge of Kafka, MQ or message broker architecture and distributed caching
- Strong skills in Jenkins, Docker, Kubernetes, OpenShift, RHEL, Helm and other DevSecOps tools
- Strong knowledge of cloud-native components like VMs, managed databases, storage buckets, CloudFormation, monitoring tools, SQS, SNS, Lambda functions, etc.
- Strong knowledge of security concepts and OWASP
- Knowledge of banking and payment architectures, including cards and payment solutions, card management systems, auth systems, ISO, EMV, banking compliance regulations, etc.
- Experience in defining scope and sizing of work, and estimation of work
- Work closely with the team and guide them technically, review code, and manage technical delivery
- Knowledge of industry partnership models and strategic partnership experience
- Public cloud marketplace certification experience
- Ability to create technical blogs and white papers
- Infrastructure management experience
- Knowledge of developer insights, DORA or other such metrics
- Knowledge of MoSCoW or ITIL principles
- Experience in defining SLAs and SLOs
- Experience in quality engineering processes
- Experience in performance or chaos testing
- Experience with setting up monitoring tools
- Experience with data engineering and AI/ML solutioning

Opus Technologies focuses on shaping the future of payments technology. With experience building highly innovative solutions and products, we combine our deep technology proficiency with unmatched domain expertise in Payments and Fintech, enabling us to deliver unparalleled quality and value in everything we do. We're headquartered in Alpharetta, Georgia, USA, and our offshore software development centers are based out of our Pune & Hyderabad offices in India. Please visit our website for more information.

Supercharge your career with Opus
https://opustechglobal.com/company/
https://opustechglobal.com/careers/
https://opustechglobal.com/resources/
Posted 4 weeks ago
8.0 - 12.0 years
3 - 9 Lacs
Bengaluru
On-site
Join our Team

About Ericsson
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. Our innovative solutions empower individuals, businesses, and societies to explore their full potential in the Networked Society.

We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

Key Responsibilities
- Develop and deploy machine learning models, including XGBoost and random forest, for applications such as chatbots, NLP, computer vision, and generative AI.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Proficient in SQL for querying and analyzing large datasets.
- Experience with Docker and Kubernetes for containerization and orchestration of applications.
- Basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability.
- Preferably have experience working with Google Cloud Platform (GCP) services for data storage, processing, and deployment.

Qualifications
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred.
- 8-12 years of experience in data science and machine learning roles, preferably within the telecommunications or related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 768225
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the job Position : AI & Machine Learning Trainer Location : Hyderabad, Telangana Duration : 40 working days Compensation : ₹40,000–₹60,000 per month Job description : ExcelR is seeking experienced AI and Machine Learning trainers to deliver a comprehensive AI and Machine Learning curriculum. The training program will span 40 working days, with daily classes of 5–6 hours, tentatively from August 20, 2025, to October 21, 2025. The role involves teaching theoretical concepts, guiding hands-on projects, and mentoring students on real-world AI/ML applications. Skills Required : Foundational ML : Logistic Regression, SVM, Decision Trees, Ensemble Methods (Bagging, Random Forests), Boosting (Gradient Boosting, AdaBoost, XGBoost, LightGBM), PCA, Clustering (K-means, Hierarchical, DBSCAN), Market Basket Analysis, and Recommendation Systems. Deep Learning : ANN, CNN, RNN, LSTM, GRU, Transformers, GANs, Autoencoders, Diffusion Models (Stable Diffusion), and LLMs (e.g., GPT, BERT). Data Preprocessing : Standardization, normalization, encoding, train-test split, cross-validation, regularization (Lasso, Ridge, ElasticNet), and feature engineering. Generative AI : Text-to-image/audio/video generation, chatbots, sentiment analysis, and Retrieval-Augmented Generation (RAG) with FAISS. Web & Cloud Deployment : Building and deploying RESTful APIs using Flask/FastAPI, cloud computing with AWS (EC2, S3, Lambda) and Azure (Blob, App Services), and Streamlit for project deployment. Mathematics : Calculus, vector algebra, probability. Prompt Engineering : Zero-shot, few-shot, prompt tuning, and designing effective prompts. Case Studies : Hands-on projects (e.g., Bangalore housing prices, Breast cancer classification, Sales dataset). Qualifications : Education : Bachelor’s/Master’s in Computer Science, AI, ML, or related fields. 
Experience : 3–5 years in Machine Learning, Deep Learning, and Full Stack AI development, with hands-on experience in: Python and libraries like scikit-learn, TensorFlow, Keras, PyTorch. Generative AI (GANs, Diffusion Models, LLMs like GPT-2, LLaMA). Web development (Flask, FastAPI) and cloud deployment (AWS EC2, Azure). Tools like Hugging Face, FAISS, LangChain, and Streamlit. Teaching Skills : Prior teaching/training experience preferred, with the ability to explain complex topics (e.g., attention mechanisms, PCA) to undergraduate students. Certifications : Relevant certifications in AI/ML (e.g., AWS Certified Machine Learning, Google Professional ML Engineer) are a plus. Logistics : Local Candidates (Hyderabad) : College bus transportation and lunch provided at the college hostel. Non-Local Candidates : Accommodation and food provided at the college hostel.
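The curriculum above includes Retrieval-Augmented Generation (RAG) with FAISS. The retrieval step can be illustrated with a brute-force cosine-similarity search in plain Python, which acts as a stand-in for a FAISS index (the 3-d "embeddings" are invented; in a real pipeline they would come from a sentence encoder):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k document vectors most similar to the query.
    Brute-force stand-in for the FAISS index used in a real RAG pipeline."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# toy 3-d 'embeddings' for four documents
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.05, 0.0]
print(retrieve(query, docs, k=2))  # [0, 1]: the two vectors closest to the query
```

In a full RAG system, the retrieved documents are then stuffed into the LLM prompt as context; FAISS replaces the brute-force sort with an approximate index so retrieval scales to millions of vectors.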
Posted 4 weeks ago
2.0 - 4.0 years
3 - 6 Lacs
Pune
Hybrid
Role & responsibilities
- Deliver structured, interactive sessions on Python programming, Machine Learning, and core AI concepts.
- Develop and update training materials including lectures, coding exercises, real-world datasets, quizzes, and capstone projects.
- Teach core Python programming fundamentals, including: data types, control structures, functions, and OOP concepts; file handling, error handling, and standard libraries; working with popular frameworks (e.g., Flask, FastAPI) for basic app development and model deployment.
- Teach essential ML topics, including: supervised & unsupervised learning (regression, classification, clustering); data preprocessing, feature selection, and transformation; model evaluation, performance tuning, and pipelines; ensemble methods (e.g., Random Forest, XGBoost); model deployment fundamentals.
- Cover foundational AI topics, such as: basics of neural networks and deep learning; fundamentals of Natural Language Processing (NLP); introduction to Computer Vision (CV); using pre-trained models and transfer learning.
- Provide hands-on coding sessions using Python tools and libraries: Pandas, NumPy, Scikit-learn, TensorFlow, Keras, PyTorch.
- Mentor trainees through practical assignments, project implementation, and real-world scenarios.
- Stay up to date with industry trends and integrate relevant topics into the curriculum.

Preferred candidate profile
- Strong Python programming skills with practical development experience.
- Hands-on expertise in ML libraries (Scikit-learn, XGBoost, Pandas, NumPy) and AI frameworks (TensorFlow, Keras, PyTorch).
- Familiarity with building and deploying Python applications using frameworks like Flask or FastAPI.
- Solid understanding of data pipelines, model evaluation, and the basics of deployment.
- Exposure to version control (Git) and Jupyter notebooks.
- Cloud-based ML tools experience (AWS, Azure, or GCP) is an advantage.
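One of the model-evaluation topics in this syllabus is cross-validation. The fold-splitting logic can be shown in plain Python as a teaching sketch (in practice scikit-learn's KFold does this, but writing it out makes the mechanics clear to trainees):

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    the data is split into k contiguous folds, each used once as the test set."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# 10 samples, 3 folds -> fold sizes 4, 3, 3; every sample is tested exactly once
folds = list(kfold_indices(10, 3))
print([len(test) for _, test in folds])              # [4, 3, 3]
print(sorted(i for _, test in folds for i in test))  # [0, 1, 2, ..., 9]
```

Each (train, test) pair would be fed to a model fit/score loop, and the k scores averaged to estimate generalization performance.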
Posted 4 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
• Develop strategies and solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and Gen AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.
Primary Skills:
- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI.
- Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, and encoder-decoder architectures).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, Scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models in production.
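The applied-statistics and hypothesis-testing skills this posting asks for can be illustrated from first principles. A stdlib-only sketch of Welch's two-sample t-statistic, with made-up A/B data:

```python
# Welch's two-sample t-statistic (unequal variances), computed with
# only the Python standard library; the sample values are invented.
import math
import statistics

def welch_t(sample_a, sample_b):
    """t-statistic comparing the means of two independent samples."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return (ma - mb) / math.sqrt(va / len(sample_a) + vb / len(sample_b))

# Two hypothetical groups with clearly different means.
control = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
variant = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8]
print(f"t = {welch_t(variant, control):.2f}")  # a large |t| suggests a real difference
```

In practice a library routine (e.g., a two-sample t-test from a statistics package) would also return the p-value, but the statistic itself is just this ratio of mean difference to pooled standard error.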
Posted 4 weeks ago
14.0 years
0 Lacs
India
Remote
Job Title: Director Analytics
Experience: 14+ Years
Job Location: Remote, India
Position Overview: The AI solutions architect is responsible for building AI and machine learning models and pipelines, with a focus on generative AI, LLMs, and predictive modelling. The successful candidate will work closely with business owners to understand their data requirements and product specifications in order to help them make data-related decisions and product design choices. The role will also require close work with data scientists, software engineers, and developers to deliver solutions. Another crucial part will be defining the strategic priorities for AI and GenAI development across the company and educating both technical and business teams about the latest developments in AI. The role may also require you to learn new tools and technologies quickly, supported by in-depth ML/AI knowledge as well as basic programming and scripting skills. You should have expertise in the design, development, management, and maintenance of systems and the handling of large datasets. You should have hands-on experience with common machine learning and AI frameworks, good knowledge of AI and machine learning fundamentals (deep learning, regression, classification, and clustering algorithms) and retrieval-augmented generation pipelines, as well as a good grasp of the design and evaluation of LLM-supported use cases.
Roles & Responsibilities
- Design, develop, and oversee the implementation of end-to-end AI solutions, including LLM-based and general machine learning use cases.
- Define AI solution objectives and ensure alignment with business outcomes.
- Develop and ensure interoperability between client systems, internal teams, and other third-party vendors.
- Educate team members and bring both technical staff and business stakeholders up to speed with recent AI trends.
- Present key findings, recommend enhancements, and work with business owners to enhance product development.
- Evaluate predictive models as well as LLM-based technologies at scale.
- Act as primary respondent to questions regarding AI and machine learning applications.
- Build, enhance, and maintain the data models, interfaces, and application integration required to support growing and changing business needs.
- Collaborate with internal and external clients to respond to growing needs of the business, troubleshoot issues, and expand capabilities of the product.
- Meaningfully contribute to the strategic direction of AI initiatives within the company.
Required Skills
- 15+ years overall, of which 5+ years' experience leading a data science/machine learning team with clear deployment experience.
- PhD or MSc in Computer Science/Machine Learning/AI, or related work experience with the design, building, and evaluation of machine learning systems.
- 10+ years' experience with machine learning ecosystem tools, including PyTorch/TensorFlow, scikit-learn, XGBoost, or equivalents.
- Proven ability to research and execute solutions based on novel algorithms and machine learning tooling.
- Ability to design, implement, test, and deploy machine learning models.
- Experience working with and evaluating LLM-based pipelines, with an emphasis on retrieval-augmented generation and prompting techniques, is a plus.
- Familiarity with LangChain/LlamaIndex/Haystack/Azure AI Studio, vector databases and retrieval techniques, or equivalent common tools in the emerging LLM-enabled tech stack is a plus.
- Proficiency in accessing and handling databases via SQL, Azure Data Factory, or similar; alternatively, familiarity with data storage/management systems or Big Data frameworks (e.g., Hadoop, Spark).
- Proficiency in software development best practices such as continuous integration, unit/integration testing, and code reviews.
- Understanding of MLOps fundamentals, including orchestration tools, cloud compute, and observability tools.
- Proven ability to read research papers, critique them, and execute based on them.
- Knowledge of Revenue Cycle Management, Collections, or financial industries is desirable but not necessary.
- Ability to work both independently and in a team-based, fast-paced environment.
- Excellent communication skills, both written and verbal.
Company Overview
As an Ensemble Health Partners company, we're at the forefront of innovation, leveraging cutting-edge technology to drive meaningful impact in the Revenue Cycle Management landscape. Our future-forward technology combines tightly integrated data ingestion, workflow automation, and business intelligence solutions on a modern cloud architecture. We have the second-largest share in the RCM space in the US market, with 10,000+ professionals working in the organization. With 10 technology patents in our name, we believe the best results come from a combination of a skilled and experienced team, proven and repeatable processes, and modern and flexible technologies. As a leading player in the industry, we offer an environment that fosters growth, creativity, and collaboration, where your expertise will be valued and your contributions will make a difference.
Why Join Us
We adapt emerging technologies to practical uses to deliver concrete solutions that bring maximum impact to providers' bottom line. We currently hold 10 technology patents. We offer a great organization to work for, where you will do the best work of your career and grow with a team that is shaping the future of Revenue Cycle Management. We have a strong focus on learning and development, with industry-standard professional development policies to support the learning goals of our associates. We offer flexible, remote, and work-from-home options.
Benefits
- Health benefits and insurance coverage for family and parents.
- Accidental insurance for the associate.
- Compliant with all labor laws: maternity benefits, paternity leaves.
- Company swag: welcome packages, work anniversary kits.
- Exclusive referral policy.
- Professional development program and reimbursement.
- Remote work: flexibility to work from home.
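The retrieval-augmented generation pipelines named in this posting's skills list reduce, at their simplest, to a retrieval step that picks context for an LLM prompt. A stdlib-only sketch using bag-of-words cosine similarity over an invented corpus (a production system would use dense embeddings and a vector database instead):

```python
# Minimal retrieval step of a RAG pipeline: rank documents by
# bag-of-words cosine similarity to the query. Corpus text is made up.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

corpus = [
    "invoices are approved within two business days",
    "payment terms can be extended on request",
    "our office is closed on public holidays",
]
context = retrieve("how quickly are invoices approved", corpus)
print(context)  # the retrieved passage would then be prepended to the LLM prompt
```

Evaluating such a pipeline, as the role requires, means measuring both this retrieval step (did the right passage rank first?) and the generation step downstream.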
Posted 4 weeks ago
6.0 years
0 Lacs
India
Remote
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.
Job Title: Lead Data Scientist
Mode of work: Remote
Responsibilities
- Design and implement data-driven solutions to optimize customer experience metrics, reduce churn, and enhance customer satisfaction using statistical analysis, machine learning, and predictive modeling.
- Collaborate with CX teams, contact center operations, customer success, and product teams to gather requirements, understand customer journey objectives, and translate them into actionable analytical solutions.
- Perform exploratory data analysis (EDA) on customer interaction data, contact center metrics, survey responses, and behavioral data to identify pain points and opportunities for CX improvement.
- Build, validate, and deploy machine learning models for customer sentiment analysis, churn prediction, next-best-action recommendations, contact center forecasting, and customer lifetime value optimization.
- Develop CX dashboards and reports using BI tools to track key metrics like NPS, CSAT, FCR, AHT, and customer journey analytics to support strategic decision-making.
- Optimize model performance for real-time customer experience applications through hyperparameter tuning, A/B testing, and continuous performance monitoring.
- Contribute to customer data architecture and pipeline development to ensure scalable and reliable customer data flows across touchpoints (voice, chat, email, social, web).
- Document CX analytics methodologies, customer segmentation strategies, and model outcomes to ensure reproducibility and enable knowledge sharing across CX transformation initiatives.
- Mentor junior data scientists and analysts on CX-specific use cases, and participate in code reviews to maintain high-quality standards for customer-facing analytics.
Skill Requirements
- Proven experience (6+ years) in data science, analytics, and statistical modeling with a specific focus on customer experience, contact center analytics, or customer behavior analysis, including a strong understanding of CX metrics, customer journey mapping, and voice-of-customer analytics.
- Proficiency in Python and/or R for customer data analysis, sentiment analysis, and CX modeling applications.
- Experience with data analytics libraries such as pandas, NumPy, and scikit-learn, and visualization tools like matplotlib, seaborn, or Plotly for customer insights and CX reporting.
- Experience with machine learning frameworks such as Scikit-learn, XGBoost, and LightGBM, and familiarity with deep learning libraries (TensorFlow, PyTorch) for NLP applications in customer feedback analysis and chatbot optimization.
- Solid understanding of SQL and experience working with customer databases, contact center data warehouses, and CRM systems (e.g., PostgreSQL, MySQL, SQL Server, Salesforce, ServiceNow).
- Familiarity with data engineering tools and frameworks (e.g., Apache Airflow, dbt, Spark, or similar) for building and orchestrating customer data ETL pipelines and real-time streaming analytics.
- (Good to have) Knowledge of data governance, data quality frameworks, and data lake architectures.
- (Good to have) Exposure to business intelligence (BI) tools such as Power BI, Tableau, or Looker for CX dashboarding, customer journey visualization, and executive reporting on customer experience metrics.
- Working knowledge of version control systems (e.g., Git) and collaborative development workflows for customer analytics projects.
- Strong problem-solving skills with customer-centric analytical thinking, and the ability to work independently and as part of cross-functional CX transformation teams.
- Excellent communication and presentation skills, with the ability to explain complex customer analytics concepts to non-technical stakeholders including CX executives, contact center managers, and customer success teams.
Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
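Of the CX metrics this posting tracks, NPS has the simplest definition: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A quick sketch with made-up survey responses:

```python
# Net Promoter Score from 0-10 survey responses; the survey data below
# is invented for illustration.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey = [10, 9, 8, 7, 6, 5, 10, 9, 3, 10]  # 5 promoters, 2 passives, 3 detractors
print(f"NPS = {nps(survey):.1f}")  # → NPS = 20.0
```

CSAT works analogously (share of satisfied responses), which is why both metrics slot naturally into the same dashboard aggregation.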
Posted 4 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Zupee
We are the biggest online gaming company, with the largest market share in the Indian gaming sector's largest segment — Casual & Boardgame. We make skill-based games that spark joy in the everyday lives of people by engaging, entertaining, and enabling earning while at play. In its three-plus years of existence, Zupee has been on a mission to improve people's lives by boosting their learning ability, skills, and cognitive aptitude through scientifically designed gaming experiences. Zupee presents a timeout from the stressful environments we live in today and sparks joy in people's lives through its games. Zupee invests in people and bets on creating excellent user experiences to drive phenomenal growth. We have been profitable at the EBT level since Q3 2020, while closing Series B funding at $102 million at a valuation of $600 million. Zupee is all set to transform from a fast-growing startup into a firm contender for the biggest gaming studio in India.
ABOUT THE JOB
Role: Lead Machine Learning Engineer
Reports to: Manager - Data Scientist
Location: Gurgaon
Job Summary: We seek an individual to drive innovation in AI/ML-based algorithms and personalized offer experiences. This role will focus on designing and implementing advanced machine learning models, including reinforcement learning techniques like Contextual Bandits, Q-learning, SARSA, and more. By leveraging algorithmic expertise in classical ML and statistical methods, you will develop solutions that optimize pricing strategies, improve customer value, and drive measurable business impact.
Qualifications:
- 6+ years in machine learning, 4+ years in reinforcement learning, recommendation systems, pricing algorithms, pattern recognition, or artificial intelligence.
- Expertise in classical ML techniques (e.g., classification, clustering, regression) using algorithms like XGBoost, Random Forest, SVM, and K-means, with hands-on experience in RL methods such as Contextual Bandits, Q-learning, SARSA, and Bayesian approaches for pricing optimization.
- Proficiency in handling tabular data, including sparsity, cardinality analysis, standardization, and encoding.
- Proficiency in Python and SQL (including window functions, GROUP BY, joins, and partitioning).
- Experience with ML frameworks and libraries such as scikit-learn, TensorFlow, and PyTorch.
- Knowledge of controlled experimentation techniques, including causal A/B testing and multivariate testing.
Key Responsibilities
- Algorithm Development: Conceptualize, design, and implement state-of-the-art ML models for dynamic pricing and personalized recommendations.
- Reinforcement Learning Expertise: Develop and apply RL techniques, including Contextual Bandits, Q-learning, SARSA, and concepts like Thompson Sampling and Bayesian Optimization, to solve pricing and optimization challenges.
- AI Agents for Pricing: Build AI-driven pricing agents that incorporate consumer behavior, demand elasticity, and competitive insights to optimize revenue and conversion.
- Rapid ML Prototyping: Quickly build, test, and iterate on ML prototypes to validate ideas and refine algorithms.
- Feature Engineering: Engineer large-scale consumer behavioral feature stores to support ML models, ensuring scalability and performance.
- Cross-Functional Collaboration: Work closely with Marketing, Product, and Sales teams to ensure solutions align with strategic objectives and deliver measurable impact.
- Controlled Experiments: Design, analyze, and troubleshoot A/B and multivariate tests to validate the effectiveness of your models.
Required Skills and Experience: Uplift Modeling, Bayesian Optimization, Multi-Armed Bandits, Contextual Bandits, Pricing Optimization, Reinforcement Learning
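One of the techniques this role names, Thompson Sampling, fits in a few lines for a Bernoulli multi-armed bandit. A sketch where each "arm" stands in for an offer and the conversion rates are invented for illustration:

```python
# Thompson Sampling for a Bernoulli bandit: sample a plausible rate for
# each arm from its Beta posterior, play the most promising arm, update.
import random

random.seed(0)
true_rates = [0.10, 0.30, 0.80]  # hypothetical per-offer conversion rates
wins = [0, 0, 0]
losses = [0, 0, 0]

for _ in range(2000):
    samples = [random.betavariate(wins[a] + 1, losses[a] + 1) for a in range(3)]
    arm = samples.index(max(samples))  # greedy on the posterior samples
    if random.random() < true_rates[arm]:  # simulate whether the offer converts
        wins[arm] += 1
    else:
        losses[arm] += 1

pulls = [wins[a] + losses[a] for a in range(3)]
print(pulls)  # the best arm should receive the vast majority of pulls
```

A contextual bandit extends this by conditioning each arm's reward model on user features rather than keeping a single Beta posterior per arm.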
Posted 4 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Zupee
We are the biggest online gaming company, with the largest market share in the Indian gaming sector's largest segment — Casual & Boardgame. We make skill-based games that spark joy in the everyday lives of people by engaging, entertaining, and enabling earning while at play. In its three-plus years of existence, Zupee has been on a mission to improve people's lives by boosting their learning ability, skills, and cognitive aptitude through scientifically designed gaming experiences. Zupee presents a timeout from the stressful environments we live in today and sparks joy in people's lives through its games. Zupee invests in people and bets on creating excellent user experiences to drive phenomenal growth. We have been profitable at the EBT level since Q3 2020, while closing Series B funding at $102 million at a valuation of $600 million. Zupee is all set to transform from a fast-growing startup into a firm contender for the biggest gaming studio in India.
ABOUT THE JOB
Role: Senior Machine Learning Engineer
Reports to: Manager - Data Scientist
Location: Gurgaon
Job Summary: We seek an individual to drive innovation in AI/ML-based algorithms and personalized offer experiences. This role will focus on designing and implementing advanced machine learning models, including reinforcement learning techniques like Contextual Bandits, Q-learning, SARSA, and more. By leveraging algorithmic expertise in classical ML and statistical methods, you will develop solutions that optimize pricing strategies, improve customer value, and drive measurable business impact.
Qualifications:
- 3+ years in machine learning, 2+ years in reinforcement learning, recommendation systems, pricing algorithms, pattern recognition, or artificial intelligence.
- Expertise in classical ML techniques (e.g., classification, clustering, regression) using algorithms like XGBoost, Random Forest, SVM, and K-means, with hands-on experience in RL methods such as Contextual Bandits, Q-learning, SARSA, and Bayesian approaches for pricing optimization.
- Proficiency in handling tabular data, including sparsity, cardinality analysis, standardization, and encoding.
- Proficiency in Python and SQL (including window functions, GROUP BY, joins, and partitioning).
- Experience with ML frameworks and libraries such as scikit-learn, TensorFlow, and PyTorch.
- Knowledge of controlled experimentation techniques, including causal A/B testing and multivariate testing.
Key Responsibilities
- Algorithm Development: Conceptualize, design, and implement state-of-the-art ML models for dynamic pricing and personalized recommendations.
- Reinforcement Learning Expertise: Develop and apply RL techniques, including Contextual Bandits, Q-learning, SARSA, and concepts like Thompson Sampling and Bayesian Optimization, to solve pricing and optimization challenges.
- AI Agents for Pricing: Build AI-driven pricing agents that incorporate consumer behavior, demand elasticity, and competitive insights to optimize revenue and conversion.
- Rapid ML Prototyping: Quickly build, test, and iterate on ML prototypes to validate ideas and refine algorithms.
- Feature Engineering: Engineer large-scale consumer behavioral feature stores to support ML models, ensuring scalability and performance.
- Cross-Functional Collaboration: Work closely with Marketing, Product, and Sales teams to ensure solutions align with strategic objectives and deliver measurable impact.
- Controlled Experiments: Design, analyze, and troubleshoot A/B and multivariate tests to validate the effectiveness of your models.
Required Skills and Experience: Uplift Modeling, Bayesian Optimization, Multi-Armed Bandits, Contextual Bandits, Pricing Optimization, Reinforcement Learning
Posted 4 weeks ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Red & White Education Pvt Ltd, founded in 2008, is Gujarat's leading educational institute. Accredited by NSDC and ISO, we focus on Integrity, Student-Centricity, Innovation, and Unity. Our goal is to equip students with industry-relevant skills and ensure they are employable globally. Join us for a successful career path.
Salary: 30K to 35K CTC
Job Description: Faculty members guide students, deliver course materials, conduct lectures, assess performance, and provide mentorship. Strong communication skills and a commitment to supporting students are essential.
Key Responsibilities:
- Deliver high-quality lectures on AI, Machine Learning, and Data Science.
- Design and update course materials, assignments, and projects.
- Guide students on hands-on projects, real-world applications, and research work.
- Provide mentorship and support for student learning and career development.
- Stay updated with the latest trends and advancements in AI/ML and Data Science.
- Conduct assessments, evaluate student progress, and provide feedback.
- Participate in curriculum development and improvements.
Skills & Tools:
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (must), Pandas, NumPy, Excel.
- ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.
Education & Experience Requirements:
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- At least 1 year of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.
Posted 4 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Team: Data & Decision Science (D&DS) Experience: 5–8 years Location: Remote (India) Do you want to help transform the global economy? Join the movement disrupting the financial world and changing how businesses gain access to the working capital they need to grow. As the largest online platform for working capital, we serve over one million businesses in 160 countries, representing more than $10.5 trillion in annual sales. Headquartered in Kansas City, C2FO has more than 500 employees worldwide, with operations throughout Europe, India, Asia Pacific, and Australia. Here at C2FO, we value the quality of our technical solutions and are passionate about building the right thing, the right way to best solve the problem at hand. But beyond that, we also value our employees' work-life balance and promote a continuous learning culture. We host bi-annual hackathons, have multiple book clubs focused on constant growth, and embrace a remote-first working environment. If you want to work at a place where your voice will be heard and can make a real impact, C2FO is the place for you. About C2FO’s Data & Decision Science Team The Data & Decision Science (D&DS) team is one of the most dynamic and cross-functional teams at C2FO. It comprises four pillars: Data Engineering, Data Science & Analytics, Data Operations, and Business Intelligence. Spread across India and the US, our teams power C2FO’s global growth by building scalable, high-quality data solutions and driving data-backed innovation. We manage the full lifecycle of data — from architecture and governance to advanced modeling, business intelligence, and AI. Our team enables smarter decisions, deeper insights, and practical experimentation across C2FO’s ecosystem. About the Role We are looking for a Senior Data Scientist who thrives at the intersection of business impact and technical rigor. 
This role offers the opportunity to work on a wide variety of problems — from traditional ML (forecasting, credit scoring, churn prediction) to cutting-edge applications of large language models (LLMs) in customer engagement and retention. You'll be part of a fast-paced, collaborative environment where initiative, ownership, and curiosity are rewarded.
Key Responsibilities
- Build ML models and scalable data solutions for business-critical problems like dynamic discounting, customer retention, LTV prediction, credit scoring, and more
- Design, experiment, and iterate quickly on MVPs that solve real business pain points
- Work cross-functionally to translate ambiguous business challenges into data science problems
- Collaborate with Data Engineering and Data Governance to ensure robust data validation and quality frameworks
- Conduct deep-dive analyses to extract actionable insights from large, complex datasets
- Contribute to a culture of learning, innovation, and experimentation
What We're Looking For
- 5+ years of hands-on experience in data science or applied machine learning
- Strong foundation in mathematics, statistics, and ML concepts
- Experience building and deploying ML models in production environments
- Proficiency in Python and data manipulation tools (SQL, Pandas)
- Familiarity with libraries like Scikit-learn, XGBoost, PyTorch, etc.
- Experience in hypothesis testing and experimentation frameworks
- A pragmatic mindset: build fast, learn fast, iterate fast
- Exceptional problem-solving, communication, and stakeholder management skills
- Bonus: experience in fintech or financial services
Benefits
At C2FO, we care for our customers and people – the vital human capital that helps our customers thrive. That's why we offer a comprehensive benefits package, flexible work options for work/life balance, volunteer time off, and more. Learn more about our benefits here.
Commitment to Diversity and Inclusion As an Equal Opportunity Employer, we value diversity and equality and empower our team members to bring their authentic selves to work daily. We recognize the power of inclusion, emphasizing that each team member was chosen for their unique ability to contribute to the overall success of our mission. Our goal is to create a workplace that reflects the communities we serve and our global, multicultural clients. We do not discriminate based on race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment decisions are based on qualifications, merit, and business needs.
Posted 4 weeks ago
5.0 years
0 Lacs
India
Remote
Position Title: MLOps Engineer
Experience: 5+ Years
Location: Remote
Employment Type: Full-Time
About the Role: We are looking for an experienced MLOps Engineer to lead the design, deployment, and maintenance of scalable, production-grade machine learning infrastructure. The ideal candidate will have a strong foundation in MLOps principles, expertise in GCP (Google Cloud Platform), and a proven track record in operationalizing ML models in cloud environments.
Key Responsibilities:
- Design, build, and maintain scalable ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
- Develop and automate ML pipelines for training, validation, deployment, and monitoring using Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
- Collaborate closely with Data Scientists to transition models from experimentation to production.
- Implement robust monitoring systems for model drift, performance degradation, and data quality issues.
- Manage containerized ML workloads using Docker and Kubernetes (GKE).
- Set up and manage CI/CD workflows for ML systems using Cloud Build, Jenkins, Bitbucket, or similar tools.
- Ensure model security, versioning, governance, and compliance throughout the ML lifecycle.
- Create and maintain documentation, reusable templates, and artifacts for reproducibility and audit readiness.
Required Skills & Experience:
- Minimum 5 years of experience in MLOps, ML Engineering, or related roles.
- Strong programming skills in Python with experience in ML frameworks and libraries.
- Hands-on experience with GCP services including Vertex AI, BigQuery, GKE, and Dataflow.
- Solid understanding of machine learning concepts and algorithms such as XGBoost and classification models.
- Experience with container orchestration using Docker and Kubernetes.
- Proficiency in implementing CI/CD practices for ML workflows.
- Strong analytical, problem-solving, and communication skills.
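The model-drift monitoring this role describes often starts with a statistic such as the Population Stability Index, computed over matching feature bins at training time and in production. A generic, stdlib-only sketch (not tied to any GCP service; the bin proportions are made up):

```python
# Population Stability Index between a training-time distribution and
# the same bins observed in production; values above ~0.2 commonly
# trigger a drift alert.
import math

def psi(expected, actual):
    """PSI across matching bin proportions (skips empty bins)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

train_dist = [0.25, 0.25, 0.25, 0.25]  # feature bins at training time
live_dist = [0.40, 0.30, 0.20, 0.10]   # the same bins in production
print(f"PSI = {psi(train_dist, live_dist):.3f}")
```

In a pipeline, a check like this would run on a schedule and page or retrain when the threshold is crossed.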
Posted 4 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Technical Skills
Background: Bachelor's degree in computer science/information technology, or master's degree in Statistics, Economics, or Mathematics. Demonstrated experience in independently driving business insights through data and AI.
Technical Proficiency
- Hands-on experience with data tools (e.g., Oracle, SQL Server, MySQL, PostgreSQL, S3 buckets), with the ability to work with large data sets.
- Expertise in writing complex SQL queries for data mining, data manipulation, and analysis.
- Solid understanding of linear algebra, calculus, probability, and optimization techniques.
- Familiarity with cloud database services (AWS RDS, Azure SQL Database) is a must.
- Solid understanding of at least one of the areas below:
  i. Data Science: K-means clustering, multivariate regression, logistic regression, ML algorithms such as Random Forest, GBM/XGBoost, and SVM, NLP algorithms such as TF-IDF, Word2vec, and LDA, exploratory analytics (t-tests), and decision trees. Familiarity with frameworks like TensorFlow, PyTorch, and Scikit-Learn is a plus.
  ii. Data Engineering: data architecture principles; tuning database performance, including indexing, partitioning, and query optimization; implementing robust backup and recovery strategies; Java, C++, etc.
  iii. Data Analysis and Visualization: SQL, R, Python, Advanced Excel, VBA, React, Angular, Power BI, Tableau.
Soft Skills
- Experience in project management, stakeholder management, and leading multi-disciplinary teams for at least 3-4 years.
- Strong analytical and critical-thinking skills.
- Adaptability: ability to adapt to innovative technologies and changing requirements.
- Excellent written and oral communication skills with a strong ability for visual storytelling.
- Self-driven individual with the ability to stay laser-focused on delivering outcomes with speed and agility.
(ref:hirist.tech)
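The K-means clustering listed under the Data Science area can be sketched from scratch with Lloyd's algorithm; the 2-D points and initial centers below are invented for illustration:

```python
# K-means (Lloyd's algorithm) from scratch: alternate between assigning
# points to their nearest center and moving centers to cluster means.
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters

pts = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (8.5, 9), (9, 8)]
centers, clusters = kmeans(pts, centers=[(0, 0), (10, 10)])
print(centers)  # the two tight groups of points are recovered
```

In practice scikit-learn's KMeans adds smarter initialization and convergence checks, but the core loop is exactly these two steps.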
Posted 4 weeks ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.
Job Description
The world is how we shape it.
Data Scientist:
- Robust knowledge of machine learning model development, including strong familiarity with classical ML techniques, both supervised and unsupervised approaches.
- Proficiency in Python, with a solid understanding of key data science packages such as scikit-learn, pandas, NumPy, xgboost, LightGBM, and nltk.
- Strong knowledge of SQL, ideally with experience in Snowflake's SQL flavour.
- Familiarity with agile methodologies.
- Knowledge of version control and git-based workflows.
- Experience with Spark, particularly PySpark.
- Familiarity with cloud-based environments, with a focus on AWS, especially SageMaker.
Total Experience Expected: 8-10 years
Qualifications
Professional degree with Data Science experience
Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Role
We are seeking a technically proficient Risk Modeling Analyst to develop and implement advanced risk models for our banking practice. The ideal candidate will combine traditional risk modeling expertise with machine learning to enhance our analytical capabilities.
Key Responsibilities
Develop and validate risk models (application, behavioral, and fraud scorecards) using both traditional and ML approaches
Implement machine learning techniques to analyze banking data and improve risk assessment
Collaborate with cross-functional teams to operationalize analytical solutions
Design and maintain Databricks workflows for efficient model development and deployment
Ensure all models meet regulatory compliance standards
Communicate technical findings to business stakeholders through clear reporting
Technical Requirements
Core Competencies:
3-5 years' hands-on experience in banking risk analytics or financial modeling
Advanced proficiency in Python (including ML libraries: scikit-learn, XGBoost) and SQL
Practical experience applying machine learning to risk modeling challenges
Strong understanding of risk scorecard development and validation
Experience working with Databricks for data processing and analytics
Additional Valued Skills
Familiarity with cloud platforms (AWS, Azure)
Knowledge of regulatory frameworks (Basel III/IV, IFRS 9)
Experience with visualization tools (Tableau, Power BI)
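As background on the scorecard development mentioned above: traditional credit scorecards are typically built on binned features scored by Weight of Evidence (WoE), with Information Value (IV) used to screen predictors. A minimal sketch of that calculation (illustrative only, not any specific employer's method):

```python
import math

def woe_iv(bins):
    """Weight of Evidence and Information Value for one binned feature.

    bins: list of (goods, bads) counts per attribute bin. Assumes every
    bin contains at least one good and one bad (real pipelines add
    smoothing to guard against empty cells).
    """
    total_good = sum(g for g, _ in bins)
    total_bad = sum(b for _, b in bins)
    woe, iv = [], 0.0
    for g, b in bins:
        pg = g / total_good             # share of all goods in this bin
        pb = b / total_bad              # share of all bads in this bin
        w = math.log(pg / pb)           # WoE: positive = good-heavy bin
        woe.append(w)
        iv += (pg - pb) * w             # IV sums each bin's contribution
    return woe, iv
```

By a common rule of thumb, an IV above roughly 0.3 signals a strong predictor; WoE-coded bins then feed a logistic regression whose coefficients are scaled into scorecard points.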
Posted 1 month ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing costs and abuse associated with returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts, and operations cost at return centers. We have a huge opportunity to create a legacy, and our legacy statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you are excited by the challenge of setting the course for large company-wide initiatives and building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems that can scale across multiple marketplaces and are at the state of the art in automated large-scale e-commerce. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers, and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs, and create disproportionate impact through the tech they deliver.
We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation. As an engineer, you will help with the design, implementation, and launch of many key product features. You will get the opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solve customer problems. You will influence the definition of product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will work with highly skilled and motivated engineers who are already building high-scale, highly available systems. If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!
As An Engineer You Will Be Responsible For
Ownership of a product/feature end-to-end, through all phases from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts.
Contributing intellectual property through patents.
The candidate should be an engineer passionate about delivering experiences that delight customers and creating solutions that are robust, and should be able to commit to and own deliveries end-to-end.
About The Team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment, and customer contact (customer service). We partner closely with the international machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate the detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable and efficient manner.
Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN) and use frameworks like PyTorch, TensorFlow, and Keras for training and inference. Productionizing ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra), and graph DBs, and we are open to adopting new technologies as the use case demands.
Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) with new and existing systems
Experience programming with at least one software programming language
Preferred Qualifications
3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Haryana Job ID: A3024781
Posted 1 month ago