4.0 years
0 Lacs
Karnataka, India
On-site
Hiring for top Unicorns & Soonicorns of India!

We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You’ll Do
• Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction
• Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements
• Clean, process, and analyze large-scale datasets to derive insights and train models
• Design A/B tests and evaluate model performance using robust statistical methods
• Continuously iterate on models based on feedback, model drift, and changing business needs
• Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring

What We’re Looking For
• 2–4 years of experience as a Machine Learning Engineer or Data Scientist
• Strong grasp of supervised, unsupervised, and deep learning techniques
• Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.)
• Experience with data wrangling tools like Pandas, NumPy, and SQL
• Familiarity with model deployment tools like Flask, FastAPI, or MLflow
• Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus
• Ability to translate business problems into machine learning problems and communicate solutions clearly

Bonus If You Have
• Experience working in EdTech or with personalized learning systems
• Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product
• Contributions to open-source ML projects or publications in the space
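The A/B-testing responsibility above usually reduces to comparing conversion rates between a control and a variant. A minimal sketch using only the standard library, with made-up conversion counts for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 15% of 1000 users vs. control's 12%.
z, p = two_proportion_z(120, 1000, 150, 1000)
```

Here z comes out just under 2, so the lift is borderline significant at the 5% level; in practice one would also pre-register the sample size and check for peeking.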
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, Docker, MLflow, data analysis, AWS, Kubernetes, artificial intelligence, FastAPI, GCP, NumPy
Posted 3 days ago
4.0 years
0 Lacs
Delhi, India
On-site
Hiring for top Unicorns & Soonicorns of India!

We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You’ll Do
• Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction
• Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements
• Clean, process, and analyze large-scale datasets to derive insights and train models
• Design A/B tests and evaluate model performance using robust statistical methods
• Continuously iterate on models based on feedback, model drift, and changing business needs
• Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring

What We’re Looking For
• 2–4 years of experience as a Machine Learning Engineer or Data Scientist
• Strong grasp of supervised, unsupervised, and deep learning techniques
• Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.)
• Experience with data wrangling tools like Pandas, NumPy, and SQL
• Familiarity with model deployment tools like Flask, FastAPI, or MLflow
• Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus
• Ability to translate business problems into machine learning problems and communicate solutions clearly

Bonus If You Have
• Experience working in EdTech or with personalized learning systems
• Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product
• Contributions to open-source ML projects or publications in the space
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, Docker, MLflow, data analysis, AWS, Kubernetes, artificial intelligence, FastAPI, GCP, NumPy
Posted 3 days ago
4.0 years
0 Lacs
Maharashtra, India
On-site
Hiring for top Unicorns & Soonicorns of India!

We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You’ll Do
• Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction
• Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements
• Clean, process, and analyze large-scale datasets to derive insights and train models
• Design A/B tests and evaluate model performance using robust statistical methods
• Continuously iterate on models based on feedback, model drift, and changing business needs
• Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring

What We’re Looking For
• 2–4 years of experience as a Machine Learning Engineer or Data Scientist
• Strong grasp of supervised, unsupervised, and deep learning techniques
• Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.)
• Experience with data wrangling tools like Pandas, NumPy, and SQL
• Familiarity with model deployment tools like Flask, FastAPI, or MLflow
• Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus
• Ability to translate business problems into machine learning problems and communicate solutions clearly

Bonus If You Have
• Experience working in EdTech or with personalized learning systems
• Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product
• Contributions to open-source ML projects or publications in the space
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, Docker, MLflow, data analysis, AWS, Kubernetes, artificial intelligence, FastAPI, GCP, NumPy
Posted 3 days ago
0 years
0 Lacs
India
Remote
Artificial Intelligence & Machine Learning Intern

📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is a forward-thinking edtech startup offering project-driven virtual internships that prepare students for today’s competitive tech landscape. Our AI & ML Internship is designed to immerse you in real-world applications of machine learning and artificial intelligence, helping you develop job-ready skills through hands-on projects.

🚀 Internship Overview
As an AI & ML Intern, you will explore machine learning algorithms, build predictive models, and work on projects that mimic real-world use cases — ranging from recommendation systems to AI-based automation tools. You’ll gain experience with Python, scikit-learn, TensorFlow, and more.

🔧 Key Responsibilities
• Work on datasets to clean, preprocess, and prepare them for model training
• Implement machine learning algorithms (regression, classification, clustering, etc.)
• Build and test models using scikit-learn, TensorFlow, Keras, or PyTorch
• Analyze model performance and optimize using evaluation metrics
• Collaborate with mentors to develop AI solutions for business or academic use cases
• Present findings and document all steps of the model-building process

✅ Qualifications
• Pursuing or recently completed a degree in Computer Science, Data Science, AI/ML, or related fields
• Proficient in Python and familiar with data science libraries (NumPy, Pandas, Matplotlib)
• Basic understanding of machine learning concepts and algorithms
• Experience with tools like Jupyter Notebook, Google Colab, or similar platforms
• Strong analytical mindset and interest in solving real-world problems using AI
• Enthusiastic about learning and exploring new technologies

🎓 What You’ll Gain
• Hands-on experience with real-world AI and ML projects
• Exposure to end-to-end model development workflows
• A strong project portfolio to showcase your skills
• Internship Certificate upon successful completion
• Letter of Recommendation for high-performing interns
• Opportunity for a Full-Time Offer based on performance
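The build-and-test loop the internship describes can be sketched in a few lines of scikit-learn; the synthetic dataset below is a stand-in for a real one:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a baseline classifier and evaluate it on held-out data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
```

Holding out a test split before training is the key habit here: a score computed on training data says little about how the model will generalize.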
Posted 3 days ago
0 years
0 Lacs
India
Remote
Data Science Intern

Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company:
INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.

Role Overview:
As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.

Key Responsibilities:
• Collect, clean, and preprocess data from various sources
• Apply statistical methods and machine learning techniques to extract insights
• Build and evaluate predictive models for classification, regression, or clustering tasks
• Visualize data using libraries like Matplotlib and Seaborn, or tools like Power BI
• Document findings and present results to stakeholders in a clear and concise manner
• Collaborate with team members on data-driven projects and innovations

Qualifications:
• Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
• Proficiency in Python and data science libraries (NumPy, Pandas, scikit-learn, etc.)
• Understanding of statistical analysis and machine learning algorithms
• Familiarity with SQL and data visualization tools or libraries
• Strong analytical, problem-solving, and critical thinking skills
• Eagerness to learn and apply data science techniques to solve real-world problems

Internship Benefits:
• Hands-on experience with real datasets and end-to-end data science projects
• Certificate of Internship upon successful completion
• Letter of Recommendation for top performers
• Build a strong portfolio of data science projects and models
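The collect–clean–summarize cycle described above looks like this in pandas; the records are invented purely for illustration:

```python
import pandas as pd

# Toy records standing in for a real dataset.
df = pd.DataFrame({
    "course": ["math", "math", "cs", "cs", "cs"],
    "score": [70, 80, 90, None, 85],
})

# Basic cleaning: drop rows with missing scores.
clean = df.dropna(subset=["score"])

# Aggregate to a per-course summary for reporting.
summary = clean.groupby("course")["score"].mean()
```

Dropping missing values is only one of several options (imputation is another); the point of the sketch is the clean-then-aggregate ordering, so the summary is not skewed by incomplete rows.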
Posted 3 days ago
0 years
0 Lacs
India
Remote
📊 Data Analyst Intern

📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is an edtech startup focused on delivering industry-aligned, project-based virtual internships. Our Data Analyst Internship is designed to equip students and recent graduates with the analytical skills and practical tools needed to work with real-world data and support business decisions.

🚀 Internship Overview
As a Data Analyst Intern, you will work on live projects involving data collection, cleaning, analysis, and visualization. You will gain hands-on experience using tools like Excel, SQL, Python, and Power BI/Tableau to extract insights and create impactful reports.

🔧 Key Responsibilities
• Gather, clean, and organize raw data from multiple sources
• Perform exploratory data analysis (EDA) to uncover patterns and trends
• Write efficient SQL queries to retrieve and manipulate data
• Create interactive dashboards and visual reports using Power BI or Tableau
• Use Python (Pandas, NumPy, Matplotlib) for data processing and analysis
• Present findings and recommendations through reports and presentations
• Collaborate with mentors and cross-functional teams on assigned projects

✅ Qualifications
• Pursuing or recently completed a degree in Data Science, Computer Science, IT, Statistics, Economics, or a related field
• Basic knowledge of Excel, SQL, and Python
• Understanding of data visualization and reporting concepts
• Strong analytical and problem-solving skills
• Detail-oriented, with good communication and documentation abilities
• Eagerness to learn and apply analytical techniques to real business problems

🎓 What You’ll Gain
• Practical experience in data analysis, reporting, and business intelligence
• Exposure to industry tools and real-life data scenarios
• A portfolio of dashboards and reports to showcase in interviews
• Internship Certificate upon successful completion
• Letter of Recommendation for top performers
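The SQL responsibilities above can be practiced locally with Python's built-in sqlite3 module; the table and values here are invented for illustration:

```python
import sqlite3

# In-memory database with an invented sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 250.0), ("south", 300.0)],
)

# Aggregate revenue per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY 2 DESC"
).fetchall()
# → [("north", 350.0), ("south", 300.0)]
```

GROUP BY with an aggregate and an ORDER BY is the backbone of most reporting queries; the same statement runs unchanged against most SQL engines.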
Posted 3 days ago
0 years
0 Lacs
India
Remote
🧠 Data Science Intern – Remote | Explore the World of AI & Data

Are you fascinated by machine learning, data modeling, and real-world applications of AI? If you’re ready to dive into the exciting world of data science, join Skillfied Mentor as a Data Science Intern and start building your future in tech.

📍 Location: Remote / Virtual
💼 Job Type: Internship (Unpaid)
🕒 Schedule: Flexible working hours

🌟 About the Internship:
As a Data Science Intern, you’ll get hands-on exposure to real data problems, machine learning concepts, and practical projects. This internship is designed to give you experience that matters — even without prior industry background.
🔹 Work with real datasets to build and test models
🔹 Learn tools like Python, Pandas, NumPy, scikit-learn, and Jupyter Notebooks
🔹 Understand the basics of machine learning and data preprocessing
🔹 Collaborate with a remote team to solve business-related challenges
🔹 Apply statistics and coding to derive data-driven solutions

🔍 You’re a Great Fit If You:
✅ Have basic Python knowledge or are eager to learn
✅ Are curious about AI, data modeling, and machine learning
✅ Can dedicate 5–7 hours per week (flexibly)
✅ Are a self-learner and motivated to grow in the data science field
✅ Want to build a strong project portfolio with real use cases

🎁 What You’ll Gain:
📜 Certificate of Completion
📂 Real Projects to Showcase Your Skills
🧠 Practical Knowledge of Data Science Workflows
📈 Experience with Tools Used by Professionals

⏳ Last Date to Apply: 20th June 2025

Whether you’re a student, fresher, or career switcher, this internship is your entry point into the dynamic world of Data Science. 👉 Apply now and bring your data science journey to life with Skillfied Mentor.
Posted 3 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Scientist

In this role, you’ll drive and embed the design and implementation of data science tools and methods that harness our data to deliver market-leading, purpose-driven customer solutions. Day-to-day, you’ll act as a subject matter expert, articulating advanced data and analytics opportunities and bringing them to life through data visualisation. If you’re ready for a new challenge and are interested in identifying opportunities to support external customers with your data science expertise, this could be the role for you. We’re offering this role at vice president level.

What you’ll do
We’re looking for someone to understand the requirements and needs of our business stakeholders. You’ll develop good relationships with them, form hypotheses, and identify suitable data and analytics solutions to meet their needs and to achieve our business strategy. You’ll maintain and develop external curiosity around new and emerging trends within data science, keeping up to date with emerging tooling and sharing updates within and outside of the team.

You’ll also be responsible for:
• Proactively bringing together statistical, mathematical, machine-learning, and software engineering skills to consider multiple solutions, techniques, and algorithms
• Implementing ethically sound models end-to-end, applying software engineering and a product development lens to complex business problems
• Working with and leading both direct reports and wider multi-disciplinary data teams in an Agile way to achieve agreed project and Scrum outcomes
• Using your data translation skills to work closely with business stakeholders to define business questions, problems, or opportunities that can be supported through advanced analytics
• Selecting, building, training, and testing complex machine learning models, considering model validation, model risk, governance, and ethics throughout to implement and scale models

The skills you’ll need
To be successful in this role, you’ll need evidence of project implementation and work experience gained in a data-analysis-related field as part of a multi-disciplinary team. We’ll also expect you to hold an undergraduate or master’s degree in data science, statistics, computer science, or a related field. You’ll need 10 years of experience with statistical software, database languages, big data technologies, cloud environments, and machine learning on large datasets. And we’ll look to you to demonstrate leadership, self-direction, and a willingness both to teach others and to learn new techniques.

Additionally, you’ll need:
• Experience of deploying machine learning models into a production environment
• Proficiency in Python and relevant libraries such as Pandas, NumPy, and scikit-learn, coupled with experience in data visualisation tools
• Extensive work experience with AWS SageMaker, including expertise in statistical data analysis, machine learning models, LLMs, and data management principles
• Effective verbal and written communication skills, the ability to adapt communication style to a specific audience, and experience mentoring junior team members
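The "training and testing complex machine learning models, considering model validation" responsibility is commonly framed as cross-validation, which reports a spread of scores rather than a single number. A sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for a production dataset.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# 5-fold cross-validation: each fold is held out once, so the
# resulting scores give a variance estimate, not just a point score.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
mean_score = scores.mean()
```

For model-risk and governance purposes, the fold-to-fold spread matters as much as the mean: a model whose scores swing widely across folds is harder to sign off than one with a slightly lower but stable mean.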
Posted 3 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description:
• Graduate degree in a quantitative field (CS, statistics, applied mathematics, machine learning, or a related discipline)
• Good programming skills in Python with strong working knowledge of Python’s numerical, data analysis, and AI frameworks such as NumPy, Pandas, scikit-learn, etc.
• Experience with SQL, Excel, Tableau/Power BI, PowerPoint
• Predictive modelling experience in Python (time series/multivariable/causal)
• Experience applying various machine learning techniques and understanding the key parameters that affect their performance
• Experience building systems that capture and utilize large datasets to quantify performance via metrics or KPIs
• Excellent verbal and written communication
• Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects

Roles & Responsibilities:
Lead a team of data engineers, analysts, and data scientists to carry out the following activities:
• Connect with internal/external points of contact to understand the business requirements
• Coordinate with the right points of contact to gather all relevant data artifacts, anecdotes, and hypotheses
• Create project plans and sprints for milestones/deliverables
• Spin up VMs and create and optimize clusters for data science workflows
• Create data pipelines to ingest data effectively
• Assure data quality with proactive checks and resolve gaps
• Carry out EDA and feature engineering, and define performance metrics before running relevant ML/DL algorithms
• Research whether similar solutions have already been developed before building ML models
• Create optimized data models to query relevant data efficiently
• Run relevant ML/DL algorithms for the business goal
• Optimize and validate these ML/DL models to scale
• Create light applications, simulators, and scenario builders to help the business consume the end outputs
• Create test cases, test the code pre-production for possible bugs, and resolve these bugs proactively
• Integrate and operationalize the models in the client ecosystem
• Document project artifacts and log failures and exceptions
• Measure and articulate the impact of data science projects on business metrics, and fine-tune the workflow based on feedback
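The EDA and feature-engineering step in the workflow above typically turns raw columns into model-ready inputs; a pandas sketch with invented features:

```python
import pandas as pd

# Invented raw features prior to model training.
raw = pd.DataFrame({
    "city": ["Kolkata", "Delhi", "Kolkata"],
    "spend": [120.0, 80.0, 100.0],
})

# One-hot encode the categorical column, then standardize the numeric one
# (subtract the mean, divide by the sample standard deviation).
features = pd.get_dummies(raw, columns=["city"])
features["spend"] = (
    features["spend"] - features["spend"].mean()
) / features["spend"].std()
```

In production this transformation would be fit on training data only and reused at inference time, so train and serve see identical encodings.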
Posted 3 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
Remote
Position: AI Engineer
Location: Trivandrum (Remote or Hybrid)
Type: Full-time
Start Date: Immediate
Company: Turilytix.ai

About the Role:
As an AI Engineer at Turilytix, you’ll build real-time machine learning systems that power BIG-AI, our no-code platform used in paper, supply chain, and IT operations. Help deploy ML in the real world: no sandbox, no limits.

Responsibilities:
• Design and deploy time-series and anomaly detection models
• Build scalable pipelines for streaming and batch predictions
• Integrate ML models into live product environments
• Optimize models for on-prem or cloud deployments

Required Skills:
• Python (NumPy, Pandas, scikit-learn)
• Time-series ML, classification, regression
• MLOps tools: MLflow, Docker, FastAPI
• OS: Linux
• Bonus: Edge AI, Git, ONNX, Kubernetes

What You Get:
• Real-world AI use cases
• Fast-paced learning and ownership
• Hybrid flexibility, global impact

Apply at: hr@turilytix.ai
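Time-series anomaly detection, as listed in the responsibilities above, can be prototyped with a rolling z-score before reaching for a learned model; the signal below is synthetic, with one spike injected by hand:

```python
from statistics import mean, stdev

def rolling_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing window of `window` points."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady signal with one injected spike at index 8.
signal = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 9.8, 10.0, 25.0, 10.1]
anomalies = rolling_anomalies(signal)
# → [8]
```

The trailing window makes this usable on streams (each new point is scored against recent history only); window size and threshold are the knobs that trade sensitivity against false alarms.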
Posted 3 days ago
0 years
0 Lacs
India
Remote
📍 Location: Remote
💼 Type: Internship (Unpaid)
🕒 Duration: Flexible (learn at your own pace!)
📅 Application Deadline: 7th June 2025

📊 About TechNest Intern
At TechNest Intern, we are committed to helping learners move beyond theory by providing a practical, remote-first internship experience in the field of Data Science. This program is specially designed for students and freshers looking to apply their analytical skills on real datasets, develop ML models, and understand how data powers smart decisions in today’s world. 🚀

💼 Role: Data Science Intern
Curious about the patterns behind the numbers? Want to work with data that drives real impact? This role is perfect for aspiring data scientists who want to get hands-on with data cleaning, analysis, machine learning, and storytelling. You’ll work on mini-projects, gain mentorship, and explore tools like Python, Pandas, and scikit-learn, while also improving your data visualization and communication skills.

📌 Key Responsibilities
📊 Collect, clean, and explore real-world datasets
📈 Perform exploratory data analysis (EDA) to find trends and insights
🤖 Build basic machine learning models using libraries like scikit-learn
📚 Work with tools like NumPy, Pandas, Matplotlib, Seaborn, and Jupyter
🧠 Understand feature engineering, model evaluation, and tuning
🗂️ Present insights through charts, notebooks, or dashboards
🔍 Learn and apply statistical thinking to solve problems

👥 Who Should Apply?
🎓 Students or recent graduates with an interest in Data Science / Analytics
💡 Individuals familiar with Python and basic statistics
📉 Learners who enjoy working with numbers and uncovering patterns
🧠 Problem-solvers looking to turn data into meaningful stories
🚀 Beginners who want to build a portfolio-ready data project

🎁 Perks & Learning Outcomes
📜 Official Offer Letter & Certificate of Completion
🌍 Remote & Flexible Schedule — no fixed hours, learn your way
💻 Project-Based Learning — build a strong data science portfolio
🧠 Learn tools like Pandas, NumPy, Matplotlib, Seaborn, scikit-learn, and Power BI
📊 Get mentorship in EDA, Machine Learning, and Data Storytelling
🏆 ✨ New! “Intern of the Week” Certificate — awarded weekly to interns showing outstanding effort, innovation, and growth
🎯 Gain exposure to how real-world data challenges are tackled in tech and business

🚀 How to Apply
📩 Ready to dive into data and level up your skills? Submit your application and start your journey to becoming a data-driven decision maker!
Posted 3 days ago
0 years
0 Lacs
India
On-site
Job Role: Financial Analysts and Advisors for Workflow Annotation Specialist
Project Type: Contract-based / Freelance / Part-time – 1 month

Job Overview:
We are seeking domain experts to participate in a Workflow Annotation Project. The role involves documenting and annotating the step-by-step workflows of key tasks within the candidate’s area of expertise. The goal is to capture real-world processes in a structured format for AI training and process optimization purposes.

Domain Expertise Required:
Collect market and company data, build and maintain financial models, craft decks, track portfolios, run risk and scenario analyses, develop client recommendations, and manage CRM workflows.

Tools & Technologies You May Have Worked With:
Commercial software: Bloomberg Terminal, Refinitiv Eikon, FactSet, Excel, PowerPoint, Salesforce FSC, Redtail, Wealthbox, Orion Advisor Tech, Morningstar Office, BlackRock Aladdin, Riskalyze, Tolerisk, eMoney Advisor, MoneyGuidePro, Tableau, Power BI.
Open/free software: LibreOffice Calc, Google Sheets, Python (Pandas, yfinance, NumPy, SciPy, Matplotlib), R (QuantLib, tidyverse), SuiteCRM, EspoCRM, Plotly Dash, Streamlit, Portfolio Performance, Ghostfolio, Yahoo Finance API, Alpha Vantage, IEX Cloud (free tier).
Posted 3 days ago
3.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Job Title: Data Scientist – Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you’re passionate about working at the cutting edge of AI for real-world sustainability impact, we’d love to hear from you.

Key Responsibilities
• Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, and anomaly detection
• Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data
• Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks
• Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization
• Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions
• Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness
• Optimize image pipelines for performance, scalability, and edge/cloud deployment

Key Requirements
• 3–4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects
• Proven expertise in Python and common data science libraries: NumPy, Pandas, scikit-learn, etc.
• Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
• Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation
• Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving
• Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC)
• Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment
• Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows
• Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects
• Strong problem-solving skills, curiosity, and the ability to work independently in a fast-paced environment

Bonus / Preferred Skills
• Experience with remote sensing and working with satellite or drone imagery
• Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines
• Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus
• Familiarity with edge computing for vision applications on IoT devices or drones

(ref:hirist.tech)
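Evaluating the object-detection models this role calls for usually comes down to intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal pure-Python version, with the panel boxes invented for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Ground-truth panel box vs. a prediction shifted halfway to the right.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is the common default), and the same quantity drives non-maximum suppression in detectors like YOLO.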
Posted 3 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 days work from office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
• Lead and mentor a high-performing AI/ML team
• Design and execute AI/ML strategies aligned with business goals
• Collaborate with product and engineering teams to identify impactful AI opportunities
• Build, train, fine-tune, and deploy ML models in production environments
• Manage operations of LLMs and other AI models using modern cloud and MLOps tools
• Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun)
• Handle containerization and orchestration using Docker and Kubernetes
• Optimize GPU/TPU resources for training and inference tasks
• Develop efficient RAG pipelines with low latency and high retrieval accuracy
• Automate CI/CD workflows for continuous integration and delivery of ML systems

Key Skills & Expertise

Cloud Computing & Deployment
• Proficiency in AWS, Google Cloud, or Azure for scalable model deployment
• Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML
• Expertise in Docker and Kubernetes for containerized deployments
• Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation

Machine Learning & Deep Learning
• Strong command of frameworks: TensorFlow, PyTorch, scikit-learn, XGBoost
• Experience with MLOps tools for integration, monitoring, and automation
• Expertise in pre-trained models, transfer learning, and designing custom architectures

Programming & Software Engineering
• Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development
• Backend/API development with FastAPI, Flask, or Django
• Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery)
• Familiarity with CI/CD pipelines (GitHub Actions, Jenkins)

Scalable AI Systems
• Proven ability to build AI-driven applications at scale
• Ability to handle large datasets, high-throughput requests, and real-time inference
• Knowledge of distributed computing: Apache Spark, Dask, Ray

Model Monitoring & Optimization
• Hands-on experience with model compression, quantization, and pruning
• A/B testing and performance tracking in production
• Knowledge of model retraining pipelines for continuous learning

Resource Optimization
• Efficient use of compute resources: GPUs, TPUs, CPUs
• Experience with serverless architectures to reduce cost
• Auto-scaling and load balancing for high-traffic systems

Problem-Solving & Collaboration
• Translate complex ML models into user-friendly applications
• Work effectively with data scientists, engineers, and product teams
• Write clear technical documentation and architecture reports
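The retrieval step of the RAG pipelines mentioned above reduces to nearest-neighbour search over embedding vectors. A NumPy sketch using toy 3-d vectors in place of a real embedding model:

```python
import numpy as np

def top_k(query, docs, k=2):
    """Return indices of the k documents most cosine-similar to the query."""
    # Normalize rows so the dot product equals cosine similarity.
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    scores = docs @ query
    return np.argsort(scores)[::-1][:k]

# Toy "embeddings" for four documents; a real system would use a model's output.
doc_vectors = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
ranking = top_k(np.array([1.0, 0.0, 0.0]), doc_vectors)
```

Production RAG systems replace the brute-force dot product with an approximate index (e.g., FAISS or a vector database) to keep retrieval latency low at scale, which is exactly the latency/accuracy trade-off the listing highlights.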
Posted 3 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
E2M is not your regular digital marketing firm. We're an equal opportunity provider, founded on strong business ethics and driven by more than 300 experienced professionals. Our client base is made up of digital agencies that rely on us to solve bandwidth issues, reduce overheads, and boost profitability. We need driven, tech-savvy professionals like you to help us deliver next-gen solutions. If you're someone who dreams big and thrives in innovation, E2M has a place for you.

Role Overview
As a Python Developer – AI Implementation Specialist/AI Executor, you will be responsible for designing and integrating AI capabilities into production systems using Python and key ML libraries. This role requires a strong backend development foundation and a proven track record of deploying AI use cases using tools like TensorFlow, Keras, or OpenAI APIs. You'll work cross-functionally to deliver scalable AI-driven solutions.

Key Responsibilities
Design and develop backend solutions using Python, with a focus on AI-driven features. Implement and integrate AI/ML models using tools like OpenAI, Hugging Face, or LangChain. Use core Python libraries (NumPy, Pandas, TensorFlow, Keras) to process data, train, or implement models. Translate business needs into AI use cases and deliver working solutions. Collaborate with product, engineering, and data teams to define integration workflows. Develop REST APIs and microservices to deploy AI components within applications. Maintain and optimize AI systems for scalability, performance, and reliability. Keep pace with advancements in the AI/ML landscape and evaluate tools for continuous improvement.

Required Skills & Qualifications
Minimum 5+ years of overall experience, including at least 1 year in AI/ML integration and strong hands-on expertise in Python for backend development. Proficiency in libraries such as NumPy, Pandas, TensorFlow, and Keras. Practical exposure to AI platforms/APIs (e.g., OpenAI, LangChain, Hugging Face). Solid understanding of REST APIs, microservices, and integration practices. Ability to work independently in a remote setup with strong communication and ownership. Excellent problem-solving and debugging capabilities.
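One common integration pattern behind roles like this is to hide the AI backend (OpenAI, Hugging Face, LangChain) behind a small interface so application code stays testable without network calls or API keys. The names below (`AIService`, `complete`) are hypothetical, chosen only to show the dependency-injection shape; they are not a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class AIService:
    """Thin wrapper: any backend exposing complete(prompt) -> str can plug in."""
    client: object

    def summarize(self, text: str) -> str:
        prompt = f"Summarize in one sentence: {text}"
        return self.client.complete(prompt)

class StubClient:
    """Stand-in used in tests so no network call or key is needed."""
    def complete(self, prompt: str) -> str:
        return "stub: " + prompt[:20]

svc = AIService(client=StubClient())
result = svc.summarize("Quarterly revenue grew 12% year over year.")
```

In production the stub would be replaced by a real client object; the rest of the backend never needs to know which vendor is behind it.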
Posted 3 days ago
0.0 - 1.0 years
0 - 0 Lacs
Tirupati
Remote
About the Role
We are looking for a passionate and knowledgeable Python & Data Science Instructor to teach and mentor students in our [online / in-person] data science program. You’ll play a key role in delivering engaging lessons, guiding hands-on projects, and supporting learners as they build real-world skills in Python programming and data science. This is a great opportunity for professionals who love teaching and want to empower the next generation of data scientists.

📚 Responsibilities
Teach core topics including: Python fundamentals; data manipulation with pandas and NumPy; data visualization using matplotlib/seaborn/plotly; machine learning with scikit-learn; Jupyter Notebooks, data cleaning, and exploratory data analysis. Deliver live or recorded lectures, tutorials, and interactive sessions. Review and provide feedback on student projects and assignments. Support students via Q&A sessions, forums, or 1-on-1 mentoring. Collaborate with curriculum designers to refine and improve content. Stay updated with the latest industry trends and tools.

✅ Requirements
Strong proficiency in Python and the data science stack (NumPy, pandas, matplotlib, scikit-learn, etc.). Hands-on experience with real-world data projects or industry experience in data science. Prior teaching, mentoring, or public speaking experience (formal or informal). Clear communication and ability to explain complex topics to beginners. [Bachelor’s/Master’s/PhD] in Computer Science, Data Science, Statistics, or a related field (preferred but not required).

⭐ Bonus Points
Experience with deep learning frameworks (TensorFlow, PyTorch). Familiarity with cloud platforms (AWS, GCP, Azure). Experience teaching online using tools like Zoom, Slack, or LMS platforms. Contribution to open-source, GitHub portfolio, or Kaggle participation.

🚀 What We Offer
Flexible working hours and remote-friendly environment. Opportunity to impact learners around the world. Supportive and innovative team culture. Competitive pay and performance incentives.
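A first pandas lesson of the kind this role would teach usually covers filtering, grouping, and aggregation. A minimal sketch on a made-up dataset:

```python
import pandas as pd

# Tiny illustrative dataset, invented for the lesson
df = pd.DataFrame({
    "city": ["A", "A", "B", "B"],
    "score": [80, 90, 70, 75],
})

# Filtering rows with a boolean mask
high = df[df["score"] >= 75]

# Grouping and aggregating: mean score per city
by_city = df.groupby("city")["score"].mean()
```

From here a lesson typically moves on to `describe()`, handling missing values, and a first matplotlib plot of the grouped result.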
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Machine Learning Engineer
In this role, you’ll be driving and embedding the deployment, automation, maintenance and monitoring of machine learning models and algorithms. Day-to-day, you’ll make sure that models and algorithms work effectively in a production environment while promoting data literacy education with business stakeholders. If you see opportunities where others see challenges, you’ll find that this solutions-driven role will be your chance to solve new problems and enjoy excellent career development.

What you’ll do
Your daily responsibilities will include collaborating with colleagues to design and develop advanced machine learning products which power our group for our customers. You’ll also codify and automate complex machine learning model productions, including pipeline optimisation. We’ll expect you to transform advanced data science prototypes and apply machine learning algorithms and tools. You’ll also plan, manage, and deliver larger or complex projects, involving a variety of colleagues and teams across our business.

You’ll Also Be Responsible For
Understanding the complex requirements and needs of business stakeholders, developing good relationships, and understanding how machine learning solutions can support our business strategy. Working with colleagues to productionise machine learning models, including pipeline design, development, testing and deployment, so the original intent is carried over to production. Creating frameworks to ensure robust monitoring of machine learning models within a production environment, making sure they deliver quality and performance. Understanding and addressing any shortfalls, for instance, through retraining. Leading direct reports and wider teams in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes.

The skills you’ll need
To be successful in this role, you’ll need a good academic background in a STEM discipline, such as Mathematics, Physics, Engineering or Computer Science. You’ll also have the ability to use data to solve business problems, from hypotheses through to resolution. We’ll look to you to have at least twelve years of experience with machine learning on large datasets, as well as experience building, testing, supporting, and deploying advanced machine learning models into a production environment using modern CI/CD tools, including git, TeamCity and CodeDeploy.

You’ll Also Need
A good understanding of machine learning approaches and algorithms such as supervised or unsupervised learning, deep learning, and NLP, with a strong focus on model development, deployment, and optimization. Experience using Python with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow or PyTorch. An understanding of PySpark for distributed data processing and manipulation, along with AWS (Amazon Web Services) services including EC2, S3, Lambda, SageMaker, and other cloud tools.
Experience with data processing frameworks such as Apache Kafka and Apache Airflow, containerization technologies such as Docker, and orchestration tools such as Kubernetes. Experience building GenAI solutions to automate workflows to improve productivity and efficiency.
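The "robust monitoring" frameworks this role describes often start with a drift statistic comparing live inputs against the training baseline. A common choice is the Population Stability Index (PSI); the sketch below is a minimal version, and the 0.1/0.25 thresholds are a widely used rule of thumb rather than a universal standard.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.
    Rule of thumb (varies by team): < 0.1 stable, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids log(0) for empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 5000)   # training-time feature distribution
same = rng.normal(0, 1, 5000)       # live data, no drift
shifted = rng.normal(1.5, 1, 5000)  # live data, mean has drifted
```

In production the PSI per feature would be computed on a schedule, with a breach triggering the retraining path mentioned above.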
Posted 3 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
PYTHON DEVELOPER

Your primary focus will be to develop, test, and maintain automation scripts that support Cyber Security Advisory at Ontinue. Working collaboratively with engineers and security specialists, you will help identify areas where automation can enhance efficiency, reduce manual effort, and enhance the customer experience. Beyond writing scripts, you will also be responsible for debugging and troubleshooting automation issues, ensuring that all code adheres to security best practices and industry standards. Maintaining comprehensive documentation will be a key part of your role, ensuring that workflows, processes, and automation scripts are well-documented for future reference and scalability. Staying up to date with industry trends and new automation technologies will be essential. You will be encouraged to bring fresh ideas and innovative solutions that contribute to the ongoing evolution of our platform, ensuring that Ontinue remains at the forefront of MDR innovation.

Work Location & Schedule:
This role can be remote or based in our Noida office. You must be available for late shifts at least two days per week to collaborate effectively with the head of Cyber Advisory USA (US – Central Time) and the US-based team. Additional late shifts may be required based on project needs.

Key Responsibilities:
Develop, test, and maintain automation scripts in Python to optimize and enhance the ION MDR Platform. Collaborate with security analysts, engineers, and stakeholders to identify automation opportunities and improve operational efficiency. Write clean, maintainable, and efficient Python code, following industry best practices. Debug and troubleshoot automation scripts, ensuring reliability and performance. Document scripts, workflows, and automation processes for future reference and knowledge sharing. Ensure that automation scripts follow security best practices, adhering to industry standards and compliance requirements.
Stay up to date with emerging automation technologies and best practices, bringing innovative ideas to the team. Qualifications & Experience: We are looking for a Python developer with a strong background in automation, who has at least three years of hands-on experience working with Python in a security or operational automation environment. You Should Have Experience With: Cloud platforms such as Azure and Microsoft Graph API. Familiarity with SIEM, SOAR, and security automation tools. CI/CD pipelines and version control tools like Git, GitHub, or GitLab. RESTful APIs and integrating them into automation workflows. Data structures and algorithms for efficient automation processes. Willing to start later and finish later to work with the US time zone-based team Preferred Skills & Competencies: While not mandatory, experience with the following is highly desirable: Data analysis tools like Pandas or NumPy to process security-related data. Python automation frameworks such as Selenium, PyAutoGUI, etc. Networking fundamentals and system administration to support security automation tasks. Additional scripting languages such as Bash or PowerShell for extended automation capabilities. What We Offer: We have been recognized as a TOP place to work! In addition to a competitive salary, we also offer great benefits including 18 days off a year, an annual subscription to Headspace, recognition awards, anniversary rewards, monthly phone allowance and access to management and Microsoft training. Come as you are! We search for amazing people of diverse backgrounds, experiences, abilities, and perspectives. Ontinue welcomes and encourages diversity in the workplace regardless of race, gender, religion, age, sexual orientation, disability, or veteran status. Next Steps: If you have the skills and experience required and feel that Ontinue is a place you can belong to, we would love to get to know you better! 
Please drop an application for this role and our talent acquisition manager will be in touch to discuss further. Learn More: www.ontinue.com.
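Reliability is the recurring theme in the automation work described above; when scripts call external APIs (Microsoft Graph, SIEM/SOAR endpoints), a retry-with-backoff wrapper is a standard building block. A minimal stdlib-only sketch, with delays kept short purely for demonstration:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Run fn(), retrying on exception with exponential backoff.
    Production values for attempts/base_delay would be larger."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky call: fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retry(flaky)
```

Real automation would narrow the caught exception type (e.g. only network errors) so genuine bugs still fail fast.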
Posted 3 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Location: Remote/Hybrid (India-based preferred)
Type: Full-Time

Must Haves (Don’t Apply If You Miss Any)
3+ years experience in Data Engineering. Proven hands-on with ETL pipelines (end-to-end ownership). AWS resources: deep experience with EC2, Athena, Lambda, Step Functions (non-negotiable; critical to the role). Strong with MySQL (not negotiable). Docker (setup, deployment, troubleshooting).

Good To Have (Adds Major Value)
Airflow or any modern orchestration tool. PySpark experience. Python ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy, DLT (Data Load Tool).

About You
You’re a builder, not just a maintainer. You can work independently but communicate crisply. You thrive in fast-moving, startup environments. You care about ownership and impact, not just code. Include the code word Red Panda in your application message, so that we know you have read this section.

What You’ll Do
Architect, build, and optimize robust data pipelines and workflows. Own AWS resource configuration, optimization, and troubleshooting. Collaborate with product and engineering teams to deliver business impact fast. Automate and scale data processes; no manual work culture. Shape the data foundation for real business decisions.

Cut to the chase. Only serious, relevant applicants will be considered.
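The "end-to-end ETL ownership" asked for above boils down to extract, validate/transform, and load steps. A self-contained sketch using only the standard library; `sqlite3` stands in for MySQL so the example runs anywhere, and the CSV payload is invented:

```python
import csv
import io
import sqlite3

# Extract: a hypothetical raw feed with a bad row and a duplicate key
raw = "id,amount\n1,10.5\n2,not_a_number\n1,10.5\n3,7.0\n"

rows, seen = [], set()
for rec in csv.DictReader(io.StringIO(raw)):
    try:
        amount = float(rec["amount"])      # transform: type-cast, reject bad rows
    except ValueError:
        continue
    if rec["id"] in seen:                  # transform: de-duplicate on the key
        continue
    seen.add(rec["id"])
    rows.append((int(rec["id"]), amount))

# Load: into an in-memory SQLite table (MySQL in the real pipeline)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

At scale the same three stages map onto S3/Athena for extract, Lambda or PySpark for transform, and Step Functions or Airflow for orchestration.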
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview Viraaj HR Solutions is dedicated to connecting talented professionals with dynamic companies in India. Our goal is to streamline the hiring process and provide exceptional staffing solutions tailored to our clients' needs. At Viraaj HR Solutions, we prioritize building lasting relationships and ensuring a diverse workforce. We foster a culture of collaboration, innovation, and integrity, aiming to make a positive impact in the recruitment industry. Role Responsibilities Develop and maintain Python scripts for automation tasks. Create scripts for data analysis and reporting. Collaborate with the development team to integrate Python applications with existing systems. Implement API integrations for external data sources. Conduct unit testing and debugging of scripts to ensure optimal functionality. Optimize the performance of existing scripts for efficiency and reliability. Assist in database management and data retrieval using Python. Document code changes and maintain version control using Git. Monitor and troubleshoot scripting issues as they arise. Participate in code reviews and provide constructive feedback. Stay updated with industry trends and emerging technologies. Train and mentor junior developers in Python scripting best practices. Collaborate with stakeholders to gather requirements for new scripting projects. Ensure compliance with company coding standards and guidelines. Prepare technical documentation for developed scripts and systems. Provide technical support for deployed scripts and applications. Qualifications Bachelor's degree in Computer Science or a related field. Proven experience in Python scripting and programming. Strong understanding of scripting languages and frameworks. Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy). Familiarity with debugging tools and practices. Knowledge of version control systems, particularly Git. Experience with APIs and web services integration. 
Proficiency in database management (SQL or NoSQL). Excellent problem-solving skills and analytical thinking. Ability to work independently and as part of a team. Strong communication skills and attention to detail. Previous experience with system automation is an advantage. Knowledge of code optimization techniques. Experience in mentoring or training junior staff is a plus. Familiarity with Agile/Scrum methodologies. Willingness to learn and adapt to new technologies.

Skills: Python scripting, automation, system automation, data manipulation (Pandas, NumPy), data analysis, API integration, database management, unit testing, version control (Git), Agile/Scrum methodologies, problem-solving
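The listing's emphasis on unit testing of automation scripts is easy to illustrate with the standard library. The helper below is hypothetical, chosen only to show the shape of a testable script function plus its `unittest` case:

```python
import unittest

def normalize_hostname(raw: str) -> str:
    """Example automation helper: canonicalize a hostname string."""
    return raw.strip().lower().rstrip(".")

class TestNormalizeHostname(unittest.TestCase):
    def test_strips_whitespace_case_and_trailing_dot(self):
        self.assertEqual(
            normalize_hostname("  Web01.Example.COM. "),
            "web01.example.com",
        )

    def test_idempotent(self):
        once = normalize_hostname("db02.example.com")
        self.assertEqual(normalize_hostname(once), once)
```

Run with `python -m unittest` in the script's directory; keeping helpers pure like this is what makes automation scripts debuggable.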
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at an organization that ranks among the world's Fortune Global 500. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary
UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling, state of the art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role would also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, develop and help implement robust and scalable solutions. You will be in a high visibility position with the opportunity to interact with the senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to be able to present your cutting-edge solutions to both technical and business leaderships.

Responsibilities
Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods and AI. Be actively involved in understanding and converting business use cases to technical requirements for modelling.
Query, analyze and extract insights from large-scale structured and unstructured data from different data sources utilizing different platforms, methods and tools like BigQuery, Google Cloud Storage, etc. Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling. Actively drive modelling of business problems into ML/AI models; work closely with the stakeholders for model evaluation and acceptance. Work closely with the MLOps team to productionize new models, support enhancements and resolve any issues within existing production AI applications. Prepare extensive technical documentation, dashboards and presentations for technical and business stakeholders including leadership teams.

Qualifications
Expertise in Python, SQL. Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc. Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification and clustering problems, including neural networks and deep learning. Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle. Strong ownership and collaborative qualities in the relevant domain; takes initiative to identify and drive opportunities for improvement and process streamlining. Solid oral and written communication skills, especially around analytical concepts and methods. Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences. Master’s Degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience.
Bonus Qualifications
NLP, Gen AI, LLM knowledge/experience. Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc. Knowledge and experience in MLOps principles and tools in GCP. Experience working in an Agile environment, understanding of Lean Agile principles.

Employment Type: Permanent
At UPS, equal opportunity, fair treatment and an inclusive work environment are key values to which we are committed.
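The classification workflow the role describes (prepare data, train, evaluate, hand off for acceptance) can be sketched end to end with scikit-learn. Synthetic data stands in here for the BigQuery extracts a real project would start from; the accuracy threshold is illustrative, not a production acceptance criterion.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a cleaned, feature-engineered dataset
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# Train on the training split only, evaluate on held-out data
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

Model evaluation for acceptance would add metrics beyond accuracy (precision/recall, calibration) before the MLOps handoff to Vertex AI.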
Posted 3 days ago
3.0 - 5.0 years
8 - 10 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Strong in Python and experience with Jupyter notebooks and Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.

Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment
Must have: Hands-on experience with GCP services for ML & data science
Must have: Experience with Vector Search and Hybrid Search techniques
Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models
Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy)
Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation)
Must have: Understanding of semantic vs lexical search paradigms
Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support)
Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR)
Should be proficient in SQL and BigQuery for analytics and feature generation
Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions
Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches
Should understand how to build end-to-end ML pipelines for search and ranking applications
Should have exposure to CI/CD pipelines and model versioning practices
Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval
Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories
Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems

GCP Tools Experience:
ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform
Storage: BigQuery, Cloud Storage, Firestore
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Search: Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch
Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark)
CI/CD & IaC: GitLab/GitHub Actions

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
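Of the search-relevance metrics the listing names, nDCG@k is the one most often implemented by hand during evaluation. A minimal numpy version, with a made-up ranking of graded relevance labels:

```python
import numpy as np

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the first k ranked results."""
    rels = np.asarray(rels, dtype=float)[:k]
    return float(np.sum(rels / np.log2(np.arange(2, rels.size + 2))))

def ndcg_at_k(rels, k):
    """nDCG@k for one query; rels are graded relevance labels in ranked order."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Hypothetical query: the ranker placed the most relevant doc (grade 3) second
ranked = [1, 3, 0, 2]
score = ndcg_at_k(ranked, k=4)
```

Averaging this per-query score over a labelled evaluation set is how candidate rankers (BM25 alone vs a blended vector approach, or an LTR model) get compared.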
Posted 3 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Ops Capability Deployment - Analyst is a seasoned professional role. It applies in-depth disciplinary knowledge, contributing to the development of new solutions/frameworks/techniques and the improvement of processes and workflow for the Enterprise Data function. It integrates subject matter and industry expertise within a defined area, and requires an in-depth understanding of how areas collectively integrate within the sub-function, as well as coordination with and contribution to the objectives of the function and overall business. The primary purpose of this role is to perform data analytics and data analysis across different asset classes, and to build data science/tooling capabilities within the team. This will involve working closely with the wider Enterprise Data team, in particular the front-to-back leads, to deliver business priorities. The role sits within the B & I Data Capabilities team within Enterprise Data. The team manages the Data Quality/Metrics/Controls program in addition to a broad remit to implement and embed improved data governance and data management practices throughout the region. The Data Quality program is centered on enhancing Citi’s approach to data risk and addressing regulatory commitments in this area.

Key Responsibilities:
Hands-on data engineering background with a thorough understanding of distributed data platforms and cloud services. Sound understanding of data architecture and data integration with enterprise applications. Research and evaluate new data technologies, data mesh architecture and self-service data platforms. Work closely with the Enterprise Architecture Team on the definition and refinement of the overall data strategy. Address performance bottlenecks, design batch orchestrations, and deliver reporting capabilities. Perform complex data analytics (data cleansing, transformation, joins, aggregation, etc.) on large, complex datasets.
Build analytics dashboards and data science capabilities for Enterprise Data platforms.
Communicate complicated findings and propose solutions to a variety of stakeholders.
Understand business and functional requirements provided by business analysts and convert them into technical design documents.
Work closely with cross-functional teams, e.g. Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control, and Production Support.
Prepare handover documents and manage SIT, UAT, and implementation.
Demonstrate an in-depth understanding of how the development function integrates within the overall business/technology to achieve objectives; this requires a good understanding of the banking industry.
Perform other duties and functions as assigned.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Skills & Qualifications
10+ years of active development background; experience in Financial Services or Finance IT is required.
Experience with data quality/data tracing/data lineage/metadata management tools.
Hands-on experience with ETL using PySpark on distributed platforms, along with data ingestion, Spark optimization, resource utilization, capacity planning, and batch orchestration.
In-depth understanding of Hive, HDFS, Airflow, and job schedulers.
Strong programming skills in Python, with experience in data manipulation and analysis libraries (Pandas, NumPy).
Ability to write complex SQL and stored procedures.
Experience with DevOps, Jenkins/Lightspeed, Git, and CoPilot.
Strong knowledge of one or more BI visualization tools such as Tableau or Power BI.
Proven experience in implementing data lakes/data warehouses for enterprise use cases.
Exposure to analytical tools and AI/ML is desired.

Education: Bachelor's/University degree; master's degree in Information Systems, Business Analysis, or Computer Science.
------------------------------------------------------
Job Family Group: Data Governance
------------------------------------------------------
Job Family: Data Governance Foundation
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 3 days ago
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle, WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris, France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore, Karnataka
Accenture in India
4290 Jobs | Dublin 2