
4556 NumPy Jobs - Page 21

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 years

3 - 4 Lacs

Chennai

On-site

Job Title: Freelance Trainer - Data Science & Artificial Intelligence
Job Location: Chennai
Job Mode: Offline - Freelance

Job Description: We are seeking a passionate and knowledgeable Data Science & AI Trainer to deliver hands-on training sessions to college students. The ideal candidate should have a strong foundation in data science, machine learning, and AI concepts, with the ability to simplify complex topics and engage learners through practical examples and projects.

Responsibilities:
- Deliver in-depth training on Data Science, Machine Learning, Deep Learning, and AI tools and techniques.
- Develop or follow structured lesson plans, course materials, and lab exercises.
- Conduct hands-on coding sessions using Python and related libraries (NumPy, Pandas, Scikit-learn, TensorFlow/PyTorch, etc.).
- Provide individual support, project guidance, and real-world application insights to students.
- Stay updated with the latest trends and advancements in AI & Data Science.
- Maintain a professional and supportive learning environment in the classroom.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or related fields.
- Proven teaching/training experience in Data Science and AI.
- Strong programming skills in Python.
- Familiarity with data visualization tools (e.g., Matplotlib, Seaborn, Power BI, Tableau; optional).
- Good communication and presentation skills in English (Tamil proficiency is a plus).
- Ability to explain technical concepts in a simplified and engaging manner.

Job Type: Freelance
Contract length: 3 months
Pay: ₹30,000.00 - ₹40,000.00 per month
Experience: Data Science & AI Trainer: 2 years (Required)
Location: Chennai, Tamil Nadu (Required)
Willingness to travel: 25% (Preferred)
Work Location: In person
Application Deadline: 29/07/2025
Expected Start Date: 29/07/2025

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai

On-site

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary: Responsible for planning and designing new software and web applications. Edits new and existing applications. Implements, tests, and debugs defined software components. Documents all development activity. Works with moderate guidance in own area of knowledge.

Job Title: Python Engineer 2

Job Description: As part of the SPIDER team, the Python Engineer will be responsible for building multi-tier client-server applications, data processing, deployment on cloud-based technologies, and continuous improvement of existing solutions.

Core Responsibilities:
- Develop and deploy applications on cloud-based environments following the full lifecycle of software development.
- Maintain and continuously improve existing applications and solutions.
- Write quality, efficient code following coding and security principles.
- Implement solutions based on project requirements and technical specifications.
- Identify technology and design issues and provide proactive communication.

Minimum Requirements:
- Degree in Computer Science, or equivalent professional experience.
- 3 years of experience in Python programming.
- 3+ years of relevant experience with application development and deployment in the cloud.
- Solid understanding of REST API development and integration.
- Work experience with Python libraries such as Pandas, SciPy, NumPy, etc.
- Working experience in at least one SQL or NoSQL database.
- Experience with application deployment on cloud-based environments such as AWS.
- Experience with version control; Git preferred.
- Knowledge of OOP, modular application development, and documentation.
- Good problem-solving and debugging skills.
- Ability to learn and work independently, along with strong communication and a strong work ethic.

Good to have:
- Experience with different distributions of Linux.
- Experience in Spark.
- Experience with container technologies such as Docker.
- Experience with CI/CD tools such as Concourse, Terraform.

Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, personalized to meet the needs of your reality, to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience: 2-5 Years

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai

On-site

Mandatory Skills:
- 4-6 years of experience with basic proficiency in Python, SQL, and familiarity with libraries like NumPy or Pandas.
- Understanding of fundamental programming concepts (data structures, algorithms, etc.).
- Eagerness to learn new tools and frameworks, including Generative AI technologies.
- Familiarity with version control systems (e.g., Git).
- Strong problem-solving skills and attention to detail.
- Exposure to data processing tools like Apache Spark or PySpark, and SQL.
- Basic understanding of APIs and how to integrate them.
- Interest in AI/ML and willingness to explore frameworks like LangChain.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.

Job Description: We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools.

Key Responsibilities:
- Assist in developing and maintaining Python-based applications.
- Write clean, efficient, and well-documented code.
- Collaborate with senior developers to integrate APIs and frameworks.
- Support data processing tasks using libraries like Pandas or PySpark.
- Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance.
- Debug and troubleshoot issues in existing applications.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

2.0 years

4 - 6 Lacs

Ahmedabad

On-site

Why Glasier Inc. for your dream job? We're a passionate group of tech enthusiasts and creatives who live and breathe innovation. We're looking for energetic innovators, thinkers, and doers who thrive on learning, adapting quickly, and executing in real time, whether you're a creative thinker with an eye for design, a marketer with a story to tell, or a passionate professional. Apply now.

Python Developer
Openings: 01
Experience: 2 - 2.5 years

Job Description:
- Design, build, and deploy ML models and algorithms.
- Preprocess and analyze large datasets for training and evaluation.
- Work with data scientists and engineers to integrate models into applications.
- Optimize model performance and accuracy.
- Stay up to date with AI/ML trends, libraries, and tools.

Requirements:
- Strong experience with Python and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch.
- Solid understanding of machine learning algorithms and principles.
- Experience working with data preprocessing and model deployment.
- Familiarity with cloud platforms (AWS, GCP, Azure) is a plus.

Perks & Benefits of working with Glasier Inc.: We take care of our team members so they can deliver their best work. Here are a few of the benefits and perks we offer to our employees:
- 5 days working per week
- Mentorship
- Mindfulness
- Flexible working hours
- International exposure
- Dedicated pantry area
- Free snacks & drinks
- Open work culture
- Competitive salary and benefits
- Festival, birthday & work anniversary celebrations
- Performance appreciation, bonus & rewards
- Employee-friendly leave policies

Join our team now: email hr@glasierinc.com, WhatsApp +91 95102 61901, or call +91 95102 61901.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Gwalior

On-site

Job Title: Data Science Intern
Company: Techieshubhdeep IT Solutions Pvt. Ltd.
Location: 21 Nehru Colony, Thatipur, Gwalior, Madhya Pradesh
Contact: +91 7880068399

About Us: Techieshubhdeep IT Solutions Pvt. Ltd. is a growing technology company specializing in IT services, software development, and innovative digital solutions. We are committed to nurturing talent and providing a platform for aspiring professionals to learn and excel in their careers.

Role Overview: We are seeking a Data Science Intern who will assist our team in developing data-driven solutions, performing statistical analysis, and creating machine learning models to solve real-world business challenges.

Key Responsibilities:
- Collect, clean, and preprocess structured and unstructured data.
- Perform exploratory data analysis (EDA) to identify trends and patterns.
- Assist in building, testing, and optimizing machine learning models.
- Work with large datasets and perform statistical modeling.
- Document processes, findings, and model performance.
- Collaborate with senior data scientists and software engineers on live projects.

Required Skills & Qualifications:
- Currently pursuing or recently completed a degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
- Basic understanding of Python/R and libraries like NumPy, Pandas, Scikit-learn, Matplotlib, etc.
- Familiarity with SQL and database management.
- Strong analytical skills and problem-solving abilities.
- Good communication skills and willingness to learn.

What We Offer:
- Hands-on training on real-world projects.
- Guidance from experienced industry professionals.
- Internship certificate upon successful completion.
- Potential for full-time employment based on performance.

Job Types: Full-time, Internship, Fresher, Walk-In
Pay: ₹5,000.00 - ₹15,000.00 per year
Schedule: Day shift / Morning shift, Monday to Friday
Ability to commute/relocate: Gwalior, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Experience: total work: 1 year (Preferred); Data science: 1 year (Preferred)
Language: Hindi (Preferred), English (Preferred)
Work Location: In person

Posted 1 week ago

Apply

5.0 years

30 - 32 Lacs

Greater Hyderabad Area

On-site

Experience: 5.00+ years
Salary: INR 3000000-3200000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Hyderabad)
Placement Type: Full-time permanent position (payroll and compliance to be managed by InfraCloud Technologies Pvt Ltd)
(Note: This is a requirement for one of Uplers' clients - IF)

What do you need for this opportunity? Must-have skills required: Banking, Fintech, Product Engineering background, Python, FastAPI, Django, Machine Learning (ML)

IF is looking for a Product Engineer.
Location: Narsingi, Hyderabad; 5 days of work from the office. The client is a payment gateway processing company.
Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We share the profile with the client, and they take one or two interviews.

About The Project: We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus.

Role and Responsibilities:
- Design, build, and optimize components of the ML engineering pipeline for scalability and performance.
- Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models.
- Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow.
- Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure.
- Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations.
- Continuously improve model deployment pipelines for reliability, monitoring, and automation.

Requirements:
- 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization.
- Solid knowledge of ML engineering principles and deployment best practices.
- Experience with Feast, Kubeflow, MLflow, or similar tools.
- Deep understanding of NumPy, Pandas, and data processing workflows.
- Exposure to big data environments and a good grasp of data science model workflows.
- Strong analytical and problem-solving skills with attention to detail.
- Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration.
- Excellent communication and collaboration skills.

Nice to Have:
- Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure).
- Familiarity with containerization technologies like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks.
- Understanding of data protection, governance, or security aspects in ML pipelines.

Experience Required: 5+ years

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 - 8.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Dear Candidate,

One of our MNC clients is hiring for a Python Backend Developer.

Role: Python Backend Developer
Skills: Python, Pandas, PyTorch, NumPy, OOPs concepts
Experience: 5 to 8 years
Notice: Immediate to 15 days
Location: Bangalore
Mode of work: Hybrid
Employment type: Full-time

About the role: This role will report to the R&D manager. You will work in collaboration with a global team of stakeholders including the Product Owner, Local Team Leader, data scientists, and the Software Program Manager. We are looking for an Edge Software Developer to produce scalable software solutions for Edge products that communicate with the Cloud. You will be part of a cross-technical team that's responsible for the full software development life cycle, from conception to deployment. As a Developer, you should be comfortable with coding languages, development frameworks, and design structures. You should also be a team player with an interest in software quality and test strategy. The project follows Agile methodologies and the Scrum of Scrums framework.

Your responsibilities:
- Work with development teams and product managers to ideate Edge software solutions in a scaled Agile environment.
- Design Edge and cloud interface architectures.
- Develop and manage well-functioning databases and applications.
- CI/CD.
- Test software to ensure responsiveness and efficiency.
- Troubleshoot, debug and upgrade software.
- Create security and data protection settings.
- Review technical documentation.
- Work with the architect and product owner to improve software.
- Mentor and guide the development team.
- Provide technical expertise to the team on all software topics.

Your profile:
- In-depth knowledge of Python software development, including frameworks, tools, and systems (e.g., NumPy, Pandas, SciPy, PyTorch).
- Experience with developing back-end components and ensuring seamless integration with other services and applications.
- Strong problem-solving skills and ability to work in a collaborative environment.

Qualifications:
- Bachelor's degree in computer science or a related field.
- 5+ years of experience as a Python developer.

Interested candidates kindly reach out to pavan.m@kanarystaffing.com

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 3-4 years
Location: On-site, Gurgaon, India
Employment Type: Full-time

About tracebloc: tracebloc is a Berlin-based AI startup building tooling for data scientists, allowing them to evaluate and benchmark third-party AI models without the need to expose their data. We have recently received $2.5M in funding and are aiming to build the category leader in AI model discovery.

About the Role: We are seeking a capable and self-driven Junior ML Engineer with 3-4 years of experience and hands-on experience in building and deploying production-grade machine learning systems. This role is pipeline-centric and ideal for individuals with at least 1 year of core ML/DL engineering experience, including experience in ML pipelines, workflow orchestration, and production deployment. You will be independently responsible for architecting, developing, and maintaining a scalable ML platform, not just individual models, enabling robust and reusable workflows. This is a full-time, on-site role based in Gurgaon.

How to apply: To help us better understand your hands-on capabilities, please include the following in your application:
- A link to your Git repository (e.g., GitHub, GitLab) showcasing ML pipelines or deployment code you've worked on. If you don't have a public repo, you can share sample code demonstrating your skills, or submit a detailed project write-up describing the architecture, workflow, and your contributions.
- A short Loom video (around 3 minutes) or screen recording explaining a project you've worked on. Walk us through your codebase or workflow, explaining the structure of the solution, design decisions and trade-offs, and technologies used.

Please send your application to info@tracebloc.io, divyasingh@tracebloc.io, shujaat@tracebloc.io.

Key Responsibilities:
- Build and manage end-to-end ML pipelines (data prep, training, deployment, monitoring).
- Deploy ML systems on cloud (AWS/Azure) using Docker/Kubernetes.
- Create reusable components to support multiple ML workflows.
- Write clean, testable Python code for production.
- Implement CI/CD for ML workflows.
- Monitor and improve deployed models.

Required Skills:
- Strong hands-on experience in Python, with proficiency in ML libraries such as scikit-learn, pandas, NumPy, PyTorch, TensorFlow.
- Experience in building end-to-end ML pipelines (not notebooks or isolated scripts).
- Deep understanding of pipeline design patterns and best practices for production environments.
- At least one year of experience in building ML pipelines for Computer Vision and NLP tasks.
- Good understanding of production best practices: versioning, automation, monitoring.

Good-to-have Skills:
- Hands-on experience with AWS and/or Azure cloud services for data science workloads.
- Understanding and experience with Kubernetes and Docker.
- Experience setting up and maintaining CI/CD pipelines for ML deployments.
- Ability to write and maintain unit tests, integration tests, and validation tests for ML pipelines and APIs.
- Prior work on platform architecture for multi-tenant ML workflows.

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)

Work from Office

Role & responsibilities:
- BI product development using the open-source framework Apache Superset; check for experience with any BI solution (hands-on Apache Superset experience is best) and for multiple completed BI projects.
- The role needs advanced Python and advanced SQL knowledge; check for experience using Python and SQL in product development or project delivery, and for the use case that required advanced Python and SQL.
- Should be able to perform advanced analytics such as fraud analytics, prediction, and forecasting; check for any such advanced analytics experience.
- Client management experience for managing BI projects.

Preferred candidate profile:

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager

Purpose of the role: This role sits at the intersection of data science and revenue growth strategy, focused on developing advanced analytical solutions to optimize pricing, trade promotions, and product mix. The candidate will lead the end-to-end design, deployment, and automation of machine learning models and statistical frameworks that support commercial decision-making, predictive scenario planning, and real-time performance tracking. By leveraging internal and external data sources, including transactional, market, and customer-level data, this role will deliver insights into price elasticity, promotional lift, channel efficiency, and category dynamics. The goal is to drive measurable improvements in gross margin, ROI on trade spend, and volume growth through data-informed strategies.

Key tasks & accountabilities:
- Design and implement price elasticity models using linear regression, log-log models, and hierarchical Bayesian frameworks to understand consumer response to pricing changes across channels and segments.
- Build uplift models (e.g., linear regression, XGBoost for treatment effect) to evaluate promotional effectiveness and isolate true incremental sales vs. base volume.
- Develop demand forecasting models using ARIMA, SARIMAX, and Prophet, integrating external factors such as seasonality, promotions, and competitor activity, plus time-series clustering and k-means segmentation to group SKUs, customers, and geographies for targeted pricing and promotion strategies.
- Construct assortment optimization models using conjoint analysis, choice modeling, and market basket analysis to support category planning and shelf optimization.
- Use Monte Carlo simulations and what-if scenario modeling to assess revenue impact under varying pricing, promo, and mix conditions.
- Conduct hypothesis testing (t-tests, ANOVA, chi-square) to evaluate the statistical significance of pricing and promotional changes.
- Create LTV (lifetime value) and customer churn models to prioritize trade investment decisions and drive customer retention strategies.
- Integrate Nielsen, IRI, and internal POS data to build unified datasets for modeling and advanced analytics in SQL, Python (pandas, statsmodels, scikit-learn), and Azure Databricks environments.
- Automate reporting processes and real-time dashboards for price pack architecture (PPA), promotion performance tracking, and margin simulation using advanced Excel and Python.
- Lead post-event analytics using pre/post experimental designs, including difference-in-differences (DiD) methods, to evaluate business interventions.
- Collaborate with Revenue Management, Finance, and Sales leaders to convert insights into pricing corridors, discount policies, and promotional guardrails.
- Translate complex statistical outputs into clear, executive-ready insights with actionable recommendations for business impact.
- Continuously refine model performance through feature engineering, model validation, and hyperparameter tuning to ensure accuracy and scalability.
- Provide mentorship to junior analysts, enhancing their skills in modeling, statistics, and commercial storytelling.
- Maintain documentation of model assumptions, business rules, and statistical parameters to ensure transparency and reproducibility.

Other competencies required:
- Presentation skills: effectively presenting findings and insights to stakeholders and senior leadership to drive informed decision-making.
- Collaboration: working closely with cross-functional teams, including marketing, sales, and product development, to implement insights-driven strategies.
- Continuous improvement: actively seeking opportunities to enhance reporting processes and insights generation to maintain relevance and impact in a dynamic market environment.
- Data scope management: managing the scope of data analysis, ensuring it aligns with the business objectives and insights goals.
- Act as a steadfast advisor to leadership, offering expert guidance on harnessing data to drive business outcomes and optimize customer experience initiatives.
- Serve as a catalyst for change by advocating for data-driven decision-making and cultivating a culture of continuous improvement rooted in insights gleaned from analysis.
- Continuously evaluate and refine reporting processes to ensure the delivery of timely, relevant, and impactful insights to leadership stakeholders while fostering an environment of ownership, collaboration, and mentorship within the team.

Business environment (main characteristics):
- Work closely with Zone Revenue Management teams in a fast-paced environment.
- Provide proactive communication to the stakeholders.
- This is an offshore role and requires comfort with working in a virtual environment (GCC is referred to as the offshore location).
- The role requires working collaboratively with Zone/country business heads and GCC commercial teams.
- Summarize insights and recommendations to be presented back to the business.
- Continuously improve, automate, and optimize the process.
- Geographical scope: Europe.

Qualifications, Experience, Skills:
- Level of educational attainment required: Bachelor's or Post-Graduate degree in Business & Marketing, Engineering/Solution, or another equivalent degree, or equivalent work experience; MBA/Engineering in a relevant technical field such as Marketing/Finance.
- Extensive experience solving business problems using quantitative approaches.
- Comfort with extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.
- Previous work experience required: 5-8 years of experience in the Retail/CPG domain.

Technical skills required:
- Data manipulation & analysis: advanced proficiency in SQL, Python (Pandas, NumPy), and Excel for structured data processing.
- Data visualization: expertise in Power BI and Tableau for building interactive dashboards and performance tracking tools.
- Modeling & analytics: hands-on experience with regression analysis, time series forecasting, and ML models using scikit-learn or XGBoost.
- Data engineering fundamentals: knowledge of data pipelines, ETL processes, and integration of internal/external datasets for analytical readiness.
- Proficient in Python (pandas, scikit-learn, statsmodels), SQL, and Power BI.
- Skilled in regression, Bayesian modeling, uplift modeling, time-series forecasting (ARIMA, SARIMAX, Prophet), and clustering (k-means).
- Strong grasp of hypothesis testing, model validation, and scenario simulation.

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
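As context for the log-log price elasticity modeling named in this listing, here is a minimal, hypothetical sketch using statsmodels; the synthetic data, column names, and the elasticity value used to simulate it are illustrative assumptions, not details from the posting.

```python
# Hypothetical log-log price elasticity sketch (illustrative only; the data,
# column names, and the -1.3 elasticity used to simulate demand are made up).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
price = rng.uniform(80, 120, size=200)                              # simulated shelf prices
units = 5000 * price ** -1.3 * rng.lognormal(sigma=0.1, size=200)   # simulated weekly demand
df = pd.DataFrame({"price": price, "units_sold": units})

# In a log-log OLS specification, the coefficient on log(price) is the
# estimated price elasticity of demand.
model = smf.ols("np.log(units_sold) ~ np.log(price)", data=df).fit()
print(f"Estimated elasticity: {model.params['np.log(price)']:.2f}")  # approx. -1.3
```

In practice the same idea extends to the hierarchical Bayesian and uplift variants the listing mentions, with real POS or Nielsen data replacing the simulated series.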

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Saket, Delhi, Delhi

On-site

We are hiring for the role of Data Science and AI Trainer. This presents a great opportunity for you to join our team and be part of our organization.

About: Techstack Academy is a professional training institute based in Delhi (with a branch in Saket) that offers career-oriented courses in technology and digital skills. It is designed for students, freshers, and working professionals who want to upskill or switch careers. The institute:
- offers online and offline classes;
- focuses on practical, job-ready skills;
- is known for Digital Marketing, Data Science, AI, Machine Learning, Full Stack Development, Python, Web Design, etc.

Requirements:
- Designation: Data Science and AI Trainer.
- Experience: at least 1 year as a Data Science and AI Trainer.
- Good communication and interaction skills.
- Knowledge of Python, Advanced Python, NumPy, Pandas, Matplotlib, Seaborn, Beautiful Soup, Advanced Excel, Statistics, Power BI, Tableau, ML, AI, Generative AI, Generative LLMs, and Deep Learning.

Interested candidates can connect via:
Email: mitali@techstack.in
Contact: +91 94533 23222

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹32,000.00 per month
Schedule: Day shift
Work Location: In person

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You are a strategic thinker passionate about driving solutions in Automation. You have found the right team. As an Automation Associate in our Finance team, you will spend each day defining, refining, and delivering set goals for our firm. You will be a key driver for a critical team that conducts process deep dives, reviews ideas, and designs, develops, and deploys scalable automation solutions by leveraging intelligent solutions. Your key focus will be to customize firm-wide intelligent automation capabilities to deploy cost-effective modules that impact execution velocity, enhance controls, and improve ROI.

Responsibilities:
- Partner with relevant stakeholders to understand process-related manual touchpoints, design the future state, then develop, test, and deploy.
- Manage and deliver end-to-end projects in adherence to the Hub's governance and execution model.
- Ensure automation implementation is compliant with company policy.
- Collaborate with business, technology teams, and controls partners to ensure calibrated delivery.

Required qualifications, capabilities, and skills:
- Expert with hands-on development experience (must have) in intelligent automation solutions: Python (Selenium, Django, Pandas, NumPy, win32com, Tkinter, PDF/OCR libraries, exchange connections, API connectivity), UiPath (attended & unattended), Alteryx (advanced), and Pega (advanced).
- Advanced hands-on experience with Tableau, QlikView, Qlik Sense & SharePoint.
- 5+ years of experience in technology development, strong problem-solving abilities, project management, and roadblock management and solutioning.
- Degree in Computer Science, Engineering, or any related field.
- Advanced knowledge of Microsoft Office with proficiency in MS Excel, MS Access & MS PowerPoint.

Preferred qualifications, capabilities, and skills include Project Management Certification, the ability to demonstrate innovation with the capability to translate concepts into visuals, and Technical Designer / Solution Architect experience.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

The position of Data Scientist in the Fraud & Operational Analytics Team of the Data & Analytics department involves creating Machine Learning/AI solutions to enhance fraud prevention and improve operational scalability. Your primary focus will be on developing cutting-edge ML solutions to drive customer experience, reduce operational costs, and mitigate risks effectively.

As a Data Scientist, your responsibilities will include analyzing extensive datasets to extract business insights, understanding various fraud patterns, developing advanced ML models for fraud detection, and innovating with a continuous emphasis on improving approaches. You will also be exploring additional data sources to enhance the effectiveness of traditional sources and exhibiting expertise in data strategy and model training pipelines.

Moreover, you will be expected to drive insights for actionable data-driven strategies, provide analytical support to the business and risk teams through bespoke and strategic analysis, and possess proficiency in econometric, statistical, and machine learning techniques. Strong knowledge of Python, SQL, Pandas, NumPy, Sklearn, TensorFlow/PyTorch, and statistical concepts for regression, classification, and anomaly detection is essential for this role.

To qualify for this position, you should hold a Bachelor's degree in Science, Technology, or Computer Applications and a Master's degree in one of the respective fields. The ideal candidate will have 2 to 5 years of experience in the field of data science, demonstrating a logical thought process and the ability to translate complex problems into data-driven solutions effectively.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Delhi

On-site

We are seeking a proactive Computer Vision Engineer who excels in dynamic environments and has a passion for developing practical AI systems. If you have a keen interest in working with video, visual data, cutting-edge ML models, and resolving impactful challenges, we are eager to connect with you. This position merges deep learning, computer vision, and edge AI, focusing on constructing scalable models and intelligent systems to drive our advanced sports technology platform.

Your responsibilities will include designing, training, and refining deep learning models for real-time object detection, tracking, and video comprehension. You will be responsible for implementing and deploying AI models utilizing frameworks such as PyTorch, TensorFlow/Keras, and Transformers. Working with video and image datasets using tools like OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib will be a key aspect of your role. Collaborating closely with data engineers and edge teams to deploy models on real-time streaming pipelines will also be part of your duties. Additionally, you will need to optimize inference performance for edge devices such as Jetson and T4, and manage video ingestion workflows. You will also be expected to rapidly prototype new concepts, perform A/B tests, and validate enhancements in real-world scenarios. Clear documentation of processes, effective communication of findings, and contributing to the expansion of our AI knowledge base are essential aspects of this role.

To be successful in this position, you should possess a strong command of Python and have familiarity with C/C++. Experience with deep learning frameworks such as PyTorch, TensorFlow, and Keras is required. A solid understanding of YOLO, Transformers, or OpenCV for real-time visual AI is essential. Proficiency in data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc., is also necessary. A good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques is expected. Exposure to video streaming workflows like GStreamer, FFmpeg, and RTSP will be advantageous. The ability to write clean, modular, and efficient code is crucial for this role. Experience in deploying models in production, particularly on GPU/edge devices, is highly valued. An interest in reinforcement learning, sports analytics, or real-time systems will be considered a plus.

An undergraduate degree in Computer Science, Artificial Intelligence, or a related field is required, while a Master's or PhD is preferred. A strong academic background will be beneficial for this position.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role: We are looking for a Senior Data Scientist to lead the design and development of intelligent, data-driven systems that power talent discovery, candidate-job matching, and workforce insights. In this role, you'll build and deploy models that process unstructured data at scale, extract actionable insights, and deliver real-time recommendations to enhance decision-making across the talent lifecycle. If you're passionate about building smart systems that solve real-world problems at the intersection of data, people, and technology, this is your calling.

Key Responsibilities:
- Build and optimize machine learning models for use cases like candidate-job matching, resume parsing, talent ranking, recommendation systems, and profile enrichment.
- Apply NLP techniques to extract and analyze insights from large volumes of unstructured data (resumes, job descriptions, interviews, etc.).
- Lead experimentation efforts including A/B testing, model comparisons, and business metric evaluation.
- Collaborate with product, engineering, and data teams to productionize ML pipelines and ensure model reliability at scale.
- Develop interpretable models and frameworks that deliver explainable AI (XAI) capabilities.
- Identify and evaluate data sources (internal & external) for enhancing model performance and domain coverage.
- Design and maintain systems to monitor model drift, retraining workflows, and feature performance in production.

Must-Have Skills:
- 5+ years of hands-on experience in data science or machine learning roles, ideally in search, recommendation, or NLP-focused systems.
- Strong proficiency in Python, including packages like scikit-learn, spaCy, transformers, pandas, NumPy, TensorFlow, or PyTorch.
- Experience with natural language processing (NLP), including named entity recognition, embeddings, text classification, and semantic search.
- Solid experience working with structured and unstructured data, and building scalable data pipelines.
- Strong command over SQL and experience with distributed data systems (e.g., Snowflake, BigQuery).
- Demonstrated experience taking models from experimentation to production, with clear metrics and monitoring in place.
- Excellent problem-solving and communication skills with the ability to work effectively.

Preferred Skills:
- Knowledge of recommendation engines, graph-based models, or search ranking algorithms.
- Exposure to cloud platforms like AWS, Azure, or GCP, and MLOps workflows (CI/CD for models).
- Experience in talent tech, HR tech, or recruitment intelligence platforms is a strong plus.
- Familiarity with vector databases, embeddings, and semantic similarity search.

(ref:hirist.tech)

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As an AI/ML Mentor at Engineer Sahab Education in Indore, you will play a crucial role in guiding and mentoring students to help them establish a solid understanding of Artificial Intelligence and Machine Learning. Your responsibilities will include conducting engaging sessions on various AI/ML concepts, assisting students in developing capstone projects, offering personalized mentorship, staying updated with the latest trends, and collaborating with the academic team to enhance learning outcomes.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field, along with 1-3 years of experience in AI/ML development or teaching. Proficiency in Python and machine learning libraries like Scikit-learn, TensorFlow, Keras, Pandas, and NumPy is essential. Strong communication skills, a passion for mentoring, and experience in building and deploying ML models will be advantageous.

At Engineer Sahab Education, you will work in a collaborative and student-centric environment, impacting the careers of future tech professionals. You will have access to continuous learning resources and upskilling opportunities, along with a competitive salary and room for growth. If you are enthusiastic about AI/ML and enjoy sharing knowledge, we encourage you to apply and be a part of our mission to shape the next generation of AI/ML professionals.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

PLSQL Developer - Pune

Job Title: PLSQL Developer
Experience: 5 to 7 years
Location: Pune (hybrid)
Notice Period: Immediate to 15 days

Mandatory Skills - Languages: SQL, T-SQL, PL/SQL, Python libraries (PySpark, Pandas, NumPy, Matplotlib, Seaborn)

Roles & Responsibilities:
- Design and maintain efficient data pipelines and ETL processes using SQL and Python.
- Write optimized queries (T-SQL, PL/SQL) for data manipulation across multiple RDBMS.
- Use Python libraries for data processing, analysis, and visualization.
- Perform EOD (end-of-day) data aggregation and reporting based on business needs.
- Work on Azure Synapse Analytics for scalable data transformations.
- Monitor and manage database performance across Oracle, SQL Server, Synapse, and PostgreSQL.
- Collaborate with cross-functional teams to understand and translate reporting requirements.
- Ensure secure data handling and compliance with organizational data policies.
- Debug Unix-based scripts and automate batch jobs as needed.

Qualifications:
- Bachelor's/Master's degree in Computer Science, IT, or a related field.
- 5-8 years of hands-on experience in data engineering and analytics.
- Solid understanding of database architecture and performance tuning.
- Experience in end-of-day reporting setups and cloud-based analytics platforms.

(ref:hirist.tech)

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: AI/ML Engineer
Experience: 2 - 5 years
Location: Gurgaon
Key Skills: Python, TensorFlow, Machine Learning Algorithms
Employment Type: Full-time
Compensation: 8 - 15 LPA

Job Description: We are seeking a highly motivated and skilled AI/ML Engineer to join our growing team in Gurgaon. The ideal candidate should have hands-on experience in developing and deploying machine learning models using Python and TensorFlow. You will work on designing intelligent systems and solving real-world problems using cutting-edge ML algorithms.

Key Responsibilities:
- Design, develop, and deploy robust ML models for classification, regression, recommendation, and anomaly detection tasks.
- Implement and optimize deep learning models using TensorFlow or related frameworks.
- Work with cross-functional teams to gather requirements, understand data pipelines, and deliver ML-powered features.
- Clean, preprocess, and explore large datasets to uncover patterns and extract insights.
- Evaluate model performance using standard metrics and implement strategies for model improvement.
- Automate model training and deployment pipelines using best practices in MLOps.
- Collaborate with data scientists, software developers, and product managers to bring AI features into production.
- Document model architecture, data workflows, and code in a clear and organized manner.
- Stay updated with the latest research and advancements in machine learning and AI.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- 2-5 years of hands-on experience in developing and deploying ML models in real-world projects.
- Strong programming skills in Python and proficiency in ML libraries like TensorFlow, scikit-learn, NumPy, Pandas.
- Solid understanding of supervised, unsupervised, and deep learning algorithms.
- Experience with data wrangling, feature engineering, and model evaluation techniques.
- Familiarity with version control tools (Git) and deployment tools is a plus.
- Good communication and problem-solving skills.

Preferred (Nice to Have):
- Experience with cloud platforms (AWS, GCP, Azure) and ML services.
- Exposure to NLP, computer vision, or reinforcement learning.
- Familiarity with Docker, Kubernetes, or CI/CD pipelines for ML projects.

(ref:hirist.tech)

Posted 1 week ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description:
- 1 year of experience developing and training machine learning models using structured and unstructured data.
- Experience with LLMs, transformers, and generative AI (e.g., GPT, Claude, etc.).
- Knowledge of AI/LLM stacks such as LangChain, LangGraph, and vector databases.
- Exposure to conversational interfaces, AI copilots, and agent-based automation.
- Experience with deploying models in production environments (e.g., MLflow).
- Strong programming skills in Python (with libraries TensorFlow, PyTorch, scikit-learn, NumPy, etc.).
- Solid understanding of machine learning fundamentals, deep learning architectures, natural language processing, and computer vision.
- Knowledge of data processing frameworks (e.g., Pandas, Spark) and databases (SQL, NoSQL).

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Karnataka

On-site

As an Operations Research Scientist at Tredence, your main responsibilities will involve performing data analysis and interpretation. This includes analyzing large datasets to extract meaningful insights and using Python to process, visualize, and interpret data in a clear and actionable manner. Additionally, you will be tasked with developing and implementing mathematical models to address complex business problems, as well as improving operational efficiency. You should have the ability to use commercial solvers such as CPLEX, Gurobi, and FICO Xpress, as well as free solvers such as PuLP and Pyomo. Applying optimization techniques and heuristic methods to devise effective solutions will be a key part of your role, along with designing and implementing algorithms to solve optimization problems. Testing and validating these algorithms to ensure accuracy and efficiency are also important tasks.

Collaboration and communication are essential skills in this role, as you will work closely with cross-functional teams, including data scientists, engineers, and business stakeholders, to understand requirements and deliver solutions. Presenting findings and recommendations to both technical and non-technical audiences will be part of your regular interactions.

To qualify for this position, you should have a Bachelor's or Master's degree in operations research, applied mathematics, computer science, engineering, or a related field, and a minimum of 3-8 years of professional experience in operations research, data science, or a related field. Proficiency in Python, including libraries such as NumPy, pandas, SciPy, and scikit-learn, as well as PuLP and PySpark, is necessary. A strong background in mathematical modeling and optimization techniques, experience with heuristic methods and algorithm development, and the ability to analyze complex datasets and derive actionable insights are also important technical skills. Furthermore, excellent communication skills, both written and verbal, and the ability to work effectively both independently and as part of a team are essential soft skills for this role.

Preferred qualifications include experience with additional programming languages or tools such as AMPL, LINGO, and AIMMS, as well as familiarity with machine learning techniques and their applications in operations research.

About Tredence: Tredence, founded in 2013, is dedicated to transforming data into actionable insights for over 50 Fortune 500 clients in various industries. With headquarters in San Jose and a presence in 5 countries, Tredence's mission is to be the world's most indispensable analytics partner. By blending deep domain expertise with advanced AI and data science, Tredence aims to drive unparalleled business value. Join Tredence on this innovative journey and contribute to the impactful work being done in the analytics field.
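To show what working with one of the free solvers named above can look like in practice, here is a minimal, hypothetical PuLP example; the variables, coefficients, and constraints form a toy product-mix linear program invented for illustration and are not taken from the listing.

```python
# Toy product-mix LP solved with PuLP's bundled open-source CBC solver.
# All numbers below are made-up illustration values.
import pulp

prob = pulp.LpProblem("toy_product_mix", pulp.LpMaximize)
x = pulp.LpVariable("units_a", lowBound=0)
y = pulp.LpVariable("units_b", lowBound=0)

prob += 3 * x + 5 * y                 # objective: total profit
prob += 2 * x + 4 * y <= 100          # machine-hours available
prob += x + y <= 40                   # labour hours available

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], x.value(), y.value())
```

The same model could be handed to a commercial back-end such as CPLEX or Gurobi by passing a different solver object to solve(), which is the kind of flexibility the role's solver experience refers to.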

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Python Programmer at IQVIA, you will be responsible for contributing as a product expert, analyzing and translating business needs into long-term solution data models. This includes evaluating existing data systems and collaborating with the development team to create conceptual data models and data flows. You will need to understand the distinctions between out-of-the-box features, configuration options, and customization possibilities of base software product platforms, while also considering performance, scalability, and usability factors.

Your key responsibilities will involve developing and maintaining data ingestion, data analysis, and reporting solutions specifically for Clinical data. You will work closely with cross-functional teams to identify and address complex data-related challenges. Additionally, you will be expected to create and manage documentation, processes, and workflows related to the Clinical data review process. Troubleshooting and debugging issues related to data ingestion, analysis, and reporting will also be part of your daily tasks.

In this role, you will serve as a subject matter expert and stay abreast of new technologies and industry trends to ensure that the solutions you develop are cutting-edge and effective. You should have experience and familiarity with data ingestion and processing capabilities, as well as the ability to apply design thinking principles to prototype and define architectures for extraction, transformation, and loading (ETL) processing. Expert-level programming skills in Python and PL/SQL or SAS, or the ability to read/interpret code and engage expert programmers, will be essential. Proficiency in frameworks/libraries like Pandas and NumPy and version control systems like GitHub is required.

Candidates with experience working in GCP and regulated data environments, excellent communication, analytical, and interpersonal skills, and knowledge of Data Science will be preferred. Domain expertise in clinical data management and a fair understanding of CDISC/SDTM mapping standards are expected.

Join IQVIA, a leading global provider of clinical research services, commercial insights, and healthcare intelligence, and be part of a team that creates intelligent connections to accelerate the development and commercialization of innovative medical treatments. Help improve patient outcomes and population health worldwide. Learn more about our work at [IQVIA Careers](https://jobs.iqvia.com).

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Jr. Software Engineer (AI/ML) at SynapseIndia, you will have the opportunity to work with cutting-edge technologies and contribute to the development of innovative solutions. With a focus on AI/ML, you will play a crucial role in designing, developing, and deploying machine learning models and AI algorithms using Python and relevant libraries.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field. Additionally, having 0-2 years of professional experience in Python programming with a specialization in AI/ML is essential. Your strong experience with Python ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, and XGBoost, along with a solid understanding of machine learning algorithms, neural networks, and deep learning, will be highly valuable. Furthermore, your proficiency in data manipulation libraries such as Pandas and NumPy and data visualization tools like Matplotlib and Seaborn, as well as experience with cloud platforms like AWS, GCP, and Azure and deploying ML models using Docker and Kubernetes, will be beneficial. Any familiarity with NLP, Computer Vision, or other AI domains will be considered a plus.

Your responsibilities will include collaborating with cross-functional teams to gather requirements and translate business problems into AI/ML solutions. You will be optimizing and scaling machine learning pipelines and systems for production, performing data preprocessing, feature engineering, and exploratory data analysis, and implementing and fine-tuning deep learning models using frameworks like TensorFlow, PyTorch, or similar. Additionally, you will be expected to conduct experiments, evaluate model performance using statistical methods, write clean, maintainable, and well-documented code, mentor junior developers, participate in code reviews, and stay up-to-date with the latest AI/ML research and technologies. Ensuring seamless model deployment and integration with existing infrastructure will also be part of your role.

If you are a proactive individual with strong problem-solving skills, the ability to work independently and collaboratively, excellent communication skills, and familiarity with REST APIs, microservices architecture, version control systems like Git, MLOps best practices and tools, distributed computing, and big data tools like Spark or Hadoop, we encourage you to apply for this exciting opportunity at SynapseIndia. Join us and be a part of our dynamic team where your contributions are recognized and rewarded, and where you can grow both personally and professionally in a structured, eco-friendly workplace that prioritizes the well-being and job security of its employees.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

We are looking for a passionate and self-motivated AI Intern to join our team at ARK Infosoft. This internship opportunity is perfect for fresh graduates or final-year students who are enthusiastic about applying their knowledge of Artificial Intelligence, Machine Learning, and Data Science to real-world projects. As an AI Intern, you will have the chance to collaborate closely with senior developers and data scientists on a variety of AI-driven applications and innovations.

Your responsibilities will include assisting in the design, development, and testing of machine learning models, as well as collecting, cleaning, and preprocessing datasets. Additionally, you will be involved in supporting the implementation of AI models into existing systems or proof-of-concepts, conducting literature reviews, and documenting technical research findings. Collaboration with cross-functional teams on various AI and software projects is also a key aspect of this role, along with staying up-to-date with the latest AI tools, libraries, and research.

The ideal candidate should have a basic understanding of AI/ML concepts, algorithms, and applications, along with familiarity with Python and libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch. Strong analytical and problem-solving skills, good communication abilities, and a willingness to learn and take initiative are essential for success in this position. Preferred qualifications that would be nice to have include knowledge of deep learning, NLP, or computer vision; experience with Git, Jupyter Notebooks, or cloud platforms (e.g., Google Colab, AWS); and participation in AI/ML hackathons, Kaggle competitions, or relevant academic projects.

Joining our team at ARK Infosoft, you can look forward to a supportive work environment with benefits such as a 5-day working week, paid leave, leave encashment, health insurance, festival celebrations, employee engagement activities, a great workspace, and an annual picnic.

Our mission at ARK Infosoft is to empower businesses to grow by providing personalized IT solutions, leveraging the latest technologies, prioritizing our customers, and continually seeking innovative ways to improve. We strive to be the most trusted partner in our clients' journey towards digital advancement, dedicated to excellence, integrity, and social responsibility, creating lasting value for our clients, employees, and communities. Our vision is to establish ARK Infosoft as a leading IT company known for delivering creative solutions that help businesses succeed in today's digital world. We aim to be the preferred partner for all IT needs, fostering growth and prosperity in every industry we serve through innovation, collaboration, and sustainability. Our goal is to drive positive global impact, enabling businesses to thrive in the digital age and shaping a brighter tomorrow for all.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

OneOrigin is an innovative company based in Scottsdale, AZ and Bangalore, India, dedicated to utilizing the power of AI to revolutionize the operations of Higher Education institutions. Their cutting-edge software solutions aim to streamline administrative processes, enhance student engagement, and facilitate data-driven decision-making for colleges and universities.

As an AI Developer at OneOrigin in Bangalore, you will be responsible for creating and implementing advanced AI-driven solutions to empower intelligent products and optimize business operations across various domains. Working closely with data scientists, software engineers, product teams, and stakeholders from different departments, you will design scalable AI systems using cutting-edge machine learning techniques to address complex challenges.

Your responsibilities will include designing, developing, and deploying sophisticated AI/ML models for real-world applications, leveraging AI frameworks like TensorFlow, PyTorch, Keras, and Hugging Face Transformers. You will also explore Generative AI applications, manipulate large datasets using tools such as Pandas, NumPy, SQL, and Apache Spark, and deploy scalable AI services using Docker, FastAPI/Flask, and Kubernetes. Collaboration with cross-functional teams to integrate AI solutions into products, drive experimentation, implement automation opportunities, and stay updated on AI advancements will be key aspects of your role.

You are required to have a Bachelor's degree in Computer Science, Engineering, or a related field, along with 5-8 years of hands-on experience in AI/ML development. Proficiency in Python, experience with Deep Learning, NLP, Computer Vision projects, and Cloud platforms (AWS, GCP, Azure) is essential. Your strong problem-solving skills, communication abilities, and expertise in API development will be crucial for successfully tackling complex challenges and collaborating effectively across diverse teams and departments. Join OneOrigin in their mission to make AI accessible and impactful for higher education institutions!

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Data Scientist AI/ML Developer, you will be responsible for various key tasks related to data collection, analysis, model development, and implementation of advanced technologies. You should have a strong understanding of data science methodologies and experience in working with machine learning models. This position is suitable for freshers as well as candidates with up to 5 years of experience.

Your primary responsibilities will include gathering, cleaning, and analyzing large datasets for diverse applications. You will also be involved in building, training, and evaluating machine learning and deep learning models for tasks related to computer vision, natural language processing (NLP), and generative AI. You will utilize OpenCV to create solutions for image classification, object detection, facial recognition, and other vision-based tasks. Additionally, you will develop and fine-tune Generative Adversarial Networks (GANs) for synthetic data generation and image manipulation.

Collaboration with cross-functional teams is essential in this role, as you will work closely with data engineers, software developers, and business analysts to integrate data-driven insights into production environments. Staying updated with the latest developments in AI, data science, and generative AI is crucial, and you should be able to apply best practices to projects effectively. Ensuring that machine learning models and data pipelines are scalable, efficient, and ready for production deployment is also a key part of your responsibilities.

In terms of required skills, you should have strong proficiency in computer vision techniques using OpenCV, hands-on experience with Generative Adversarial Networks (GANs), and familiarity with LangChain, GPT models, and other generative AI frameworks. Proficiency in Python and relevant ML libraries such as TensorFlow and PyTorch is essential. Expertise in data preprocessing, feature engineering, and working with large datasets is also required.

Preferred skills include experience in Natural Language Processing (NLP), model deployment using cloud services and containerization, familiarity with transformer architectures, and knowledge of big data technologies. Additionally, possessing soft skills such as strong problem-solving abilities, effective communication, and a continuous learning mindset will be advantageous in this role.

Posted 1 week ago

Apply