Home
Jobs

3673 PyTorch Jobs - Page 31

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

12.0 years

6 - 10 Lacs

Hyderābād

On-site


We are seeking an experienced Product Manager - Data Management to lead the development and adoption of our 3rd party data platforms, including D&B and other similar platforms. The successful candidate will be responsible for driving the integration and utilization of 3rd party data across marketing campaigns, improving data quality and accuracy, and expanding the use cases and applications for 3rd party data.

About the Role
In this role as a Product Manager - Data Management, you will:
- Develop and execute a comprehensive strategy for 3rd party data platform adoption and expansion across the organization, with a focus on driving business outcomes and improving marketing effectiveness.
- Collaborate with marketing teams to integrate 3rd party data into their campaigns and workflows, and provide training and support to ensure effective use of the data.
- Develop and showcase compelling use cases that demonstrate the value of 3rd party data in improving marketing effectiveness, and measure the success of these use cases through metrics such as adoption rate, data quality, and marketing ROI.
- Develop and maintain a roadmap for 3rd party data platform adoption and expansion across the organization, with a focus on expanding use cases and applications for 3rd party data and developing new data-driven products and services.
- Monitor and measure the effectiveness of 3rd party data in driving business outcomes, and adjust the adoption strategy accordingly.
- Work with cross-functional teams to ensure data quality and governance, and develop and maintain relationships with 3rd party data vendors to ensure seamless data integration and delivery.
- Drive the development of new data-driven products and services that leverage 3rd party data, and collaborate with stakeholders to prioritize and develop these products and services.

Shift Timings: 2 PM to 11 PM (IST). Work from office 2 days a week (mandatory).

About You
You're a fit for the role of Product Manager - Data Management if your background includes:
- 12+ years of experience in data management, product management, or a related field.
- Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field.
- Experience with data management tools such as data warehousing, ETL (Extract, Transform, Load), data governance, and data quality.
- Understanding of the Marketing domain and data platforms such as Treasure Data, Salesforce, Eloqua, 6Sense, Alteryx, Tableau, and Snowflake within a MarTech stack.
- Experience with machine learning and AI frameworks (e.g., TensorFlow, PyTorch).
- Expertise in SQL and Alteryx.
- Experience with data integration tools and technologies such as APIs, data pipelines, and data virtualization.
- Experience with data quality and validation tools and techniques such as data profiling, data cleansing, and data validation.
- Strong understanding of data modeling concepts, data architecture, and data governance.
- Excellent communication and collaboration skills.
- Ability to drive adoption and expansion of D&B data across the organization.
- Certifications in data management, data governance, or data science are nice to have.
- Experience with cloud-based data platforms (e.g., AWS, GCP, Azure) is nice to have.
- Knowledge of machine learning and AI concepts, including supervised and unsupervised learning, neural networks, and deep learning, is nice to have.

#LI-GS2

What's in it For You?
Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. 
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
- Expertise in handling large-scale structured and unstructured data, including efficiently handling large-scale generative AI datasets and outputs.
- Familiarity with Docker and Python environment tools (pipenv/conda/poetry).
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
- Strong grounding in deep learning theory and practice for NLP applications.
- Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy, and Pandas.
- Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others.
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction and noise reduction in text, segmenting noisy or unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
- Hands-on implementation of real-world BERT or other fine-tuned transformer models (sequence classification, NER, or QA), covering data preparation, model creation, inference, and deployment.
- Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI; good working knowledge of other open-source packages for benchmarking and deriving summaries.
- Experience using GPU/CPU resources on cloud and on-prem infrastructure.

Responsibilities
- Design NLP/LLM/GenAI applications and products following robust coding practices.
- Explore state-of-the-art models and techniques so they can be applied to automotive industry use cases.
- Conduct ML experiments to train and infer models; where needed, build models that meet memory and latency constraints.
- Deploy REST APIs or a minimal UI for NLP applications using Docker and Kubernetes.
- Showcase NLP/LLM/GenAI applications to users through web frameworks (Dash, Plotly, Streamlit, etc.).
- Converge multiple bots into super apps using LLMs with multimodal capabilities.
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph.
- Build modular AI/ML products that can be consumed at scale.

Qualifications
Education: Bachelor's in Engineering or Master's degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.
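For illustration only (not part of the posting): a minimal sketch of the transformer fine-tuning workflow this listing describes, fine-tuning a BERT-style model for sequence classification with Hugging Face Transformers. The model name, dataset, and hyperparameters are assumptions, not requirements from the role.

```python
# Minimal sketch: fine-tune a BERT-style model for sequence classification.
# Model name, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a domain-specific labeled corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-finetune",
    per_device_train_batch_size=16,
    num_train_epochs=2,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
trainer.save_model("bert-finetune/final")  # artifact that would later sit behind a REST API
```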

Posted 6 days ago

Apply

10.0 years

6 - 8 Lacs

Hyderābād

On-site


Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. The Applied Research team leads the way in pioneering AI and machine learning innovations, specializing in GenAI, Autonomous Agents, deep learning, NLP, large language models (LLMs/GPT), graph models, etc. Our research is dedicated to pushing the boundaries of technology, anticipating future needs, and seamlessly integrating AI innovations into our product suite. We aim to deliver groundbreaking solutions that not only address current demands but also establish the groundwork for transformative advancements in data and applied sciences. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities We are seeking an Applied Research Manager Responsibilities include: Team Leadership: Lead and mentor a team of applied researchers, fostering a collaborative and innovative work environment. Provide technical guidance and support to team members, helping them grow in their careers and achieve their goals. Drive the team's strategic vision and align it with the overall company objectives. Research and Development: Conduct cutting-edge research in Gen AI, autonomous agents, LLMs, data science, machine learning, and applied statistics to solve complex business problems. Collaborate with cross-functional teams to identify opportunities for applying advanced analytics and machine learning techniques. Stay abreast of the latest developments in the field and ensure the team remains at the forefront of technology. Project Management: Oversee the end-to-end lifecycle of data science projects, from problem definition to model deployment. Ensure projects are delivered on time and within scope, meeting both technical and business requirements. Collaborate with stakeholders to define project goals, success criteria, and deliverables. Technical Expertise: Provide technical leadership and contribute hands-on expertise in areas such as machine learning, statistical modeling, and data analysis. Review and guide the development of algorithms, models, and methodologies to solve complex problems. Collaboration and Communication: - Collaborate with cross-functional teams, including engineering, product management, and business stakeholders, to drive data science initiatives. 
Communicate complex technical concepts and findings to non-technical stakeholders in a clear and understandable manner. Qualifications Deep expertise of 10+ years in machine learning algorithms, model development, and deployment. Proven experience of 2+ years leading and mentoring a team of data scientists and machine learning engineers. Master's or Ph.D. in Computer Science, Machine Learning, or a related field. Proficiency in Python (including Jupyter) and frameworks such as TensorFlow or PyTorch. Experience with deploying machine learning models in production environments. Strong understanding of software engineering principles and best practices. Excellent problem-solving and analytical skills. Effective communication and collaboration skills. Additional experience in any of the areas like cyber security, fraud detection, regulations, business processes in enterprises, legal discovery, etc., would be beneficial, but not mandatory. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 6 days ago

Apply

3.0 years

2 - 7 Lacs

Hyderābād

On-site


General Information
Locations: Hyderabad, Telangana, India
Role ID: 209546
Worker Type: Regular Employee
Studio/Department: CTO - EA Digital Platform
Work Model: Hybrid

Description & Requirements
Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.

Software Engineer II - AI/ML Engineer
The EA Digital Platform (EADP) group is the core powering the global EA ecosystem. We provide the foundation for all of EA's incredible games and player experiences with high-level platforms like Cloud, Commerce, Data and AI, Gameplay Services, Identity and Social. By providing reusable capabilities that game teams can easily integrate into their work, we let them focus on making some of the best games in the world and creating meaningful relationships with our players. We're behind the curtain, making it all work together. Come power the future of play with us.

The Challenge Ahead:
We are looking for developers who want to work on a large-scale distributed data system that empowers EA Games to personalize player experience and engagement.

Responsibilities
- You will help with designing, implementing, and optimizing the infrastructure for the AI model training and deployment platform
- You will help with integrating AI capabilities into existing software systems and applications
- You will develop tools and systems to monitor the performance of the platform in real time, analyze key metrics, and proactively identify and address any issues or opportunities for improvement
- You will participate in code reviews to maintain code quality and ensure best practices
- You will help with feature and operation enhancements for the platform under senior guidance
- You will help with improving the stability and observability of the platform

Qualifications
- Bachelor's degree or foreign degree equivalent in Computer Science, Electrical Engineering, or a related field
- 3+ years of experience with software development and model development
- Experience with a programming language such as Go, Java, or Scala
- Experience with scripting languages such as bash, awk, Python
- Experience with Scikit-Learn, Pandas, Matplotlib
- 3+ years of experience with deep learning frameworks like PyTorch, TensorFlow, CUDA
- Hands-on experience with any ML platform (SageMaker, Azure ML, GCP Vertex AI)
- Experience with cloud services and modern data technologies
- Experience with data streaming and processing systems

About Electronic Arts
We're proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth. We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do. Electronic Arts is an equal opportunity employer.
All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law. We will also consider employment qualified applicants with criminal records in accordance with applicable law. EA also makes workplace accommodations for qualified individuals with disabilities as required by applicable law.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Overview: We are seeking a motivated and skilled AI Developer with 1–3 years of experience to join our growing tech team. The ideal candidate will have a strong understanding of machine learning algorithms, natural language processing (NLP), and hands-on experience in building and deploying AI models. You will work on innovative projects that leverage AI to solve real-world problems.

Key Responsibilities:
- Develop, train, and optimize machine learning and deep learning models.
- Work with large datasets; preprocess and clean data for analysis and modeling.
- Implement NLP pipelines, recommendation systems, or computer vision models as required.
- Collaborate with data engineers, product managers, and developers to integrate AI solutions into production.
- Continuously evaluate and improve model performance.
- Stay updated with the latest advancements in AI and ML technologies.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 1–3 years of hands-on experience in AI/ML development.
- Proficiency in Python and libraries such as TensorFlow, PyTorch, Scikit-learn, OpenCV, or Hugging Face.
- Strong understanding of machine learning concepts and statistical modeling.
- Familiarity with NLP, computer vision, or generative AI tools (e.g., LangChain, OpenAI API) is a plus.
- Experience with cloud platforms like AWS, GCP, or Azure is preferred.
- Good problem-solving skills and ability to work in a collaborative team environment.

Nice to Have:
- Experience in deploying models using Flask, FastAPI, or Streamlit.
- Exposure to MLOps tools and practices.
- Knowledge of data visualization tools and techniques.
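For illustration (not from the posting): a minimal sketch of serving a trained model behind a REST endpoint with FastAPI, one of the deployment options the listing names. The model file and feature schema are assumptions.

```python
# Minimal sketch: expose a scikit-learn model as a REST endpoint with FastAPI.
# The model file and feature layout are illustrative assumptions.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-service")
model = joblib.load("model.joblib")  # e.g. a fitted sklearn Pipeline saved earlier

class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with:  uvicorn app:app --reload
```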

Posted 6 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

Remote


About the Role: We are looking for a Generative AI Developer with 6+ years of experience in building AI-driven applications using deep learning and NLP techniques. You will be responsible for designing, fine-tuning, and deploying generative AI models for various use cases, including text generation, image synthesis, and AI-powered automation solutions.

Key Responsibilities:
- Develop and optimize Generative AI models (GPT, LLaMA, Stable Diffusion, DALL·E, MidJourney, etc.).
- Fine-tune LLMs and diffusion models to meet specific business needs.
- Implement retrieval-augmented generation (RAG) and integrate AI-powered applications into production.
- Work with prompt engineering, transfer learning, and custom model training.
- Develop and deploy AI models using cloud platforms (AWS, GCP, Azure) and MLOps best practices.
- Optimize model performance for scalability, efficiency, and cost-effectiveness.
- Work with vector databases (FAISS, Pinecone, Weaviate) to enhance AI applications.
- Stay updated with the latest trends in AI, deep learning, and NLP, and apply research to real-world use cases.

Required Skills & Qualifications:
- 4+ years of hands-on experience in AI/ML development with expertise in Generative AI.
- Proficiency in Python, TensorFlow, PyTorch, or JAX for deep learning model development.
- Strong experience with LLMs (GPT, BERT, T5, Claude, Gemini, etc.) and Transformer architectures.
- Knowledge of computer vision, NLP, and multimodal AI.
- Hands-on experience with Hugging Face, LangChain, OpenAI APIs, and fine-tuning techniques.
- Experience in deploying AI models using cloud platforms, Kubernetes, and Docker.
- Familiarity with MLOps, data pipelines, and vector databases.
- Strong problem-solving and analytical skills to tackle AI challenges.

Preferred Skills:
- Experience in AI-powered chatbots, speech synthesis, or creative AI applications.
- Knowledge of distributed computing frameworks (Ray, Spark, Dask).
- Understanding of Responsible AI practices, model bias mitigation, and explainability.

Why Join Us?
- Work on cutting-edge AI solutions with real-world impact.
- Collaborate with leading AI researchers and engineers.
- Competitive salary, remote work flexibility, and upskilling opportunities.
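For illustration only: a minimal sketch of the retrieval side of a RAG pipeline like the one this role describes, using FAISS and a sentence-embedding model. The documents, embedding model, and downstream LLM are all assumptions; the generation step is left as a comment.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) flow with FAISS:
# embed documents, index them, retrieve the nearest chunks for a query,
# then pass them to an LLM as context. Model names are assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium plans include 24/7 support.",
    "Invoices are emailed on the first of each month.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])   # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [docs[i] for i in ids[0]]

context = "\n".join(retrieve("How long do refunds take?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"
# `prompt` would then be sent to whichever LLM the team uses (OpenAI, Llama, etc.).
print(prompt)
```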

Posted 6 days ago

Apply

0 years

0 - 0 Lacs

Hyderābād

Remote


About Senda
Senda is an independent Agentic AI-based platform, backed by marquee global investors, transforming financial wellness through AI-driven insights. Our award-winning technology empowers individuals and organizations worldwide to make informed financial decisions. Recognized among the Top 35 Fintech Companies in Asia and Top 50 Emerging Startups by NASSCOM, we are scaling rapidly and looking for problem solvers who thrive in dynamic, fast-paced environments.

Role Overview
As an AI/ML Engineer Intern at Senda, you will:
- Design, develop, and optimize Retrieval-Augmented Generation (RAG) models.
- Implement vector databases and advanced indexing techniques to efficiently store and retrieve relevant information for conversational contexts.
- Fine-tune and optimize large language models to enhance performance and accuracy.
- Apply NLP, NLU, and NLG techniques such as sentiment analysis, entity recognition, and text generation to improve conversational AI capabilities.

What We're Looking For
- Educational Background: Pursuing or recently completed a degree in Computer Science, Data Science, AI/ML, or a related field.
- Technical Skills: Proficiency in Python and familiarity with machine learning libraries such as LangChain, PyTorch, Hugging Face Transformers, and TensorFlow.
- Understanding of NLP Fundamentals: Knowledge of NLU/NLG techniques and experience with vector databases.
- Problem-Solving Abilities: Strong analytical skills with a pragmatic approach to building scalable and robust machine learning systems.
- Cloud Experience: Familiarity with cloud platforms like OCI or AWS is a plus.

Why Join Us?
- Global Impact: Work on products that serve international markets, driving real-world impact at scale.
- Innovative Environment: Collaborate with a talented and passionate team that values innovation and continuous learning.
- Hands-On Experience: Gain practical experience in AI/ML development within the fintech industry.
- Flexible Work Arrangements: Enjoy a hybrid work model with the flexibility to work from the office and remotely.

If you're excited about solving global financial challenges through cutting-edge technology, we want to hear from you!

Job Type: Full-time
Pay: ₹5,000.00 - ₹7,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday, morning shift; weekend availability
Application Question(s): How soon can you join?
Work Location: Remote
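For illustration only: a minimal sketch of two of the NLU techniques this internship mentions, sentiment analysis and entity recognition, using off-the-shelf Hugging Face pipelines. The pipelines fall back to default models here; in practice the team would pick or fine-tune their own.

```python
# Minimal sketch: sentiment analysis and named-entity recognition with
# off-the-shelf Hugging Face pipelines (default models; choices are assumptions).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "I love the new budgeting feature Senda released in Mumbai last week."

print(sentiment(text))   # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
print(ner(text))         # e.g. [{'entity_group': 'ORG', 'word': 'Senda', ...}]
```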

Posted 6 days ago

Apply

3.0 - 8.0 years

15 - 17 Lacs

Bangalore Rural

Work from Office


AI Developer Req number: R5051 Employment type: Full time Worksite flexibility: Hybrid Who we are CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise. Job Summary We’re seeking a problem-solving AI Developer with 3–5 years of hands-on experience in building and deploying deep learning models, including RAG systems and LLM fine-tuning. You’ll work on cutting-edge projects, contribute to open-source initiatives, and solve complex AI challenges. This is a Full-time and Hybrid position. Job Description What You’ll Do Design and implement scalable deep learning models, including RAG architectures and LLM-based solutions. Fine-tune and optimize LLMs for specific use cases (e.g., text generation, summarization). Collaborate on open-source projects and research initiatives. Optimize model performance for latency, accuracy, and scalability. What You'll Need 3–5 years of professional experience in AI/ML development. Expertise in Python and frameworks like TensorFlow/PyTorch. Proven experience with neural networks (CNNs, RNNs, Transformers) and building RAG systems. Hands-on experience in fine-tuning LLMs (e.g., GPT, Llama, Mistral) for domain-specific tasks. Strong portfolio of open-source contributions (GitHub, Kaggle, etc.). Ability to debug, optimize, and deploy models in production environments. Critical thinker with a track record of research-driven problem-solving. Preferred Experience with diffusion models or reinforcement learning. Publications in top AI conferences (NeurIPS, ICML, CVPR). Familiarity with MLOps tools (MLflow, Kubeflow) and vector databases (Pinecone, FAISS). Knowledge of cloud platforms (AWS/GCP/Azure). Physical Demands This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor. Reasonable accommodation statement If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824 – 8111.
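For illustration only (not part of the posting): a sketch of one common way to fine-tune LLMs such as Llama or Mistral for domain-specific tasks, parameter-efficient LoRA adapters via the peft library. The base checkpoint, target modules, and ranks are assumptions.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters for parameter-efficient
# fine-tuning. Base model, target modules, and ranks are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "mistralai/Mistral-7B-v0.1"   # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typical for Llama/Mistral
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # only the small adapter matrices are trainable

# From here the wrapped model is trained with a standard Trainer / SFT loop
# on domain-specific text, and only the adapter weights are saved:
# model.save_pretrained("adapter/")
```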

Posted 6 days ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Pune

Hybrid


Greetings from Blue Altair! Job Overview: We are seeking an experienced and highly skilled Data Science and AI Engineer to join our dynamic team. The ideal candidate will have 5+ years of experience working on cutting-edge data science and AI technologies across various cloud platforms with a strong focus to work on LLMs and SLMs. The role demands a professional capable of performing in a client-facing environment, as well as mentoring and guiding junior team members. Title: Consultant/Sr. Consultant - Data Science Engineer Experience : 5-8 years Location: Pune/Bangalore (Hybrid) Roles and responsibilities: Develop, implement, and optimize machine learning models and AI algorithms to solve complex business problems. Design, build, and fine-tune AI models, particularly focusing on LLMs and SLMs, using state-of-the-art techniques and architectures. Apply advanced techniques in prompt engineering, model fine-tuning, and optimization to tailor models for specific business needs. Deploy and manage machine learning models and pipelines on cloud platforms (AWS, GCP, Azure, etc.). Work closely with clients to understand their data and AI needs and provide tailored solutions. Collaborate with cross-functional teams to integrate AI solutions into broader software architectures. Mentor junior team members and provide guidance in implementing best practices in data science and AI development. Stay up-to-date with the latest trends and advancements in data science, AI, and cloud technologies. Prepare technical documentation and present insights to both technical and non-technical stakeholders. Requirement: 5+ years of experience in data science, machine learning, and AI technologies. Proven experience working with cloud platforms such as Google Cloud, Microsoft Azure, or AWS. Expertise in programming languages such as Python, R, Julia, and AI frameworks like TensorFlow, PyTorch, Scikit-learn, Hugging face Transformers. Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Tableau) Solid understanding of data engineering concepts including ETL, data pipelines, and databases (SQL, NoSQL). Experience with MLOps practices and deployment of models in production environments. Familiarity with NLP (Natural Language Processing) tasks and working with large-scale datasets. Hands-on experience with generative AI models like GPT, Gemini, Claude, Mistral etc. Client-facing experience with strong communication skills to manage and engage stakeholders. Strong problem-solving skills and analytical mindset. Ability to work independently and as part of a team and mentor and provide technical leadership to junior team members.

Posted 6 days ago

Apply

3.0 - 8.0 years

2 - 4 Lacs

Gurgaon

On-site


ROLES & RESPONSIBILITIES
Job Description: Data Scientist
1. Expertise in Data Science & AI/ML: 3-8 years' experience designing, developing, and deploying scalable AI/ML solutions for Big Data, with proficiency in Python, SQL, TensorFlow, PyTorch, Scikit-learn, and Big Data ML libraries (e.g., Spark MLlib).
2. Cloud Proficiency: Proven experience with cloud-based Big Data services (GCP preferred, AWS/Azure a plus) for AI/ML model deployment and Big Data pipelines; understanding of data modeling, warehousing, and ETL in Big Data contexts.
3. Analytical & Communication Skills: Ability to extract actionable insights from large datasets, apply statistical methods, and effectively communicate complex findings to both technical and non-technical audiences (visualization skills a plus).
4. Educational Background: Bachelor's or Master's degree in a quantitative field (Computer Science, Data Science, Engineering, Statistics, Mathematics).

EXPERIENCE
3-4.5 Years

SKILLS
Primary Skill: Data Science
Sub Skill(s): Data Science
Additional Skill(s): Python, Data Science, SQL, TensorFlow, PyTorch
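For illustration only: a minimal sketch of the scikit-learn side of the stack listed above, a preprocessing-plus-model pipeline evaluated with cross-validation. The bundled toy dataset stands in for real Big Data extracts.

```python
# Minimal sketch: a scikit-learn preprocessing + model pipeline with cross-validation.
# The dataset here is a bundled toy set standing in for real extracts.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```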

Posted 6 days ago

Apply

8.0 years

0 Lacs

India

On-site


Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. About The Role Coursera is seeking a highly skilled and motivated Senior AI Specialist to join our team. This individual will play a pivotal role in developing and deploying advanced AI solutions that enhance our platform and transform the online learning experience. The ideal candidate has 5–8 years of experience , combining deep technical expertise with strong leadership and collaboration skills. This is a unique opportunity to work on cutting-edge projects in AI/ML, including recommendation systems, predictive analytics, and content optimization. We’re looking for someone who is not only a strong individual contributor but also capable of mentoring others and influencing technical direction across teams. Key Responsibilities Deploy and customize AI/ML solutions using platforms such as Google AI, AWS SageMaker, and other cloud-based tools. Design, implement, and optimize models for predictive analytics, semantic parsing, topic modeling, and information extraction. Enhance customer journey analytics to identify actionable insights and improve user experience across Coursera’s platform. Build and maintain AI pipelines for data ingestion, curation, training, evaluation, and model monitoring. Conduct advanced data preprocessing and cleaning to ensure high-quality model inputs. Analyze large-scale datasets (e.g., customer reviews, usage logs) to improve recommendation systems and platform features. Evaluate and improve the quality of video and audio content using AI-based techniques. Collaborate cross-functionally with product, engineering, and data teams to integrate AI solutions into user-facing applications. Support and mentor team members in AI/ML best practices and tools. 
Document workflows, architectures, and troubleshooting steps to support long-term scalability and knowledge sharing. Stay current with emerging AI/ML trends and technologies, advocating for their adoption where applicable. Qualifications Education Bachelor’s degree in Computer Science, Machine Learning, or a related technical field (required). Master’s or PhD preferred. Experience 5–8 years of experience in AI/ML development with a strong focus on building production-grade models and pipelines. Proven track record in deploying scalable AI solutions using platforms like Google Vertex AI, AWS SageMaker, Microsoft Azure, or Databricks. Strong experience with backend integration, API development, and cloud-native services. Technical Skills Programming: Advanced proficiency in Python (including libraries like TensorFlow, PyTorch, Scikit-learn). Familiarity with Java or similar languages is a plus. Data Engineering: Expertise in handling large datasets using PySpark, AWS Glue, Apache Airflow, and S3. Databases: Solid experience with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) systems. Cloud: Hands-on experience with cloud platforms (AWS, GCP) and tools like Vertex AI, SageMaker, BigQuery, Lambda, etc. Soft Skills & Leadership Attributes (Senior Engineer Level) Technical leadership: Ability to drive end-to-end ownership of AI/ML projects—from design through deployment and monitoring. Collaboration: Skilled at working cross-functionally with product managers, engineers, and stakeholders to align on priorities and deliver impactful solutions. Mentorship: Experience mentoring junior engineers and fostering a culture of learning and growth within the team. Communication: Clear communicator who can explain complex technical concepts to non-technical stakeholders. Problem-solving: Proactive in identifying challenges and proposing scalable, maintainable solutions. Adaptability: Comfortable working in a fast-paced, evolving environment with changing priorities and goals. Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
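For illustration only: a compact sketch of one classic approach behind the recommendation work this role mentions, matrix factorization with learned user/item embeddings in PyTorch. All data, sizes, and hyperparameters are synthetic assumptions.

```python
# Minimal sketch: matrix-factorization recommender with user/item embeddings in PyTorch.
# Interaction data, sizes, and hyperparameters are synthetic/illustrative.
import torch
import torch.nn as nn

n_users, n_items, dim = 1000, 500, 32

class MF(nn.Module):
    def __init__(self):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(dim=1)  # dot-product affinity score

model = MF()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Synthetic (user, item, rating) triples standing in for real engagement logs.
users = torch.randint(0, n_users, (4096,))
items = torch.randint(0, n_items, (4096,))
ratings = torch.rand(4096) * 5

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(users, items), ratings)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```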

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motors Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply chain, Logistics, and Warranty Analytics. About the Role: You will be part of the FCSD analytics team, playing a critical role in leveraging data science to drive significant business impact within Ford Customer Service Division. As a Data Scientist, you will translate complex business challenges into data-driven solutions. This involves partnering closely with stakeholders to understand problems, working with diverse data sources (including within GCP), developing and deploying scalable AI/ML models, and communicating actionable insights that deliver measurable results for Ford. Responsibilities Job Responsibilities: Build an in-depth understanding of the business domain and data sources, demonstrating strong business acumen. Extract, analyze, and transform data using SQL for insights. Apply statistical methods and develop ML models to solve business problems. Design and implement analytical solutions, contributing to their deployment, ideally leveraging Cloud environments. Work closely and collaboratively with Product Owners, Product Managers, Software Engineers, and Data Engineers within an agile development environment. Integrate and operationalize ML models for real-world impact. Monitor the performance and impact of deployed models, iterating as needed. Present findings and recommendations effectively to both technical and non-technical audiences to inform and drive business decisions. Qualifications Qualifications: At least 3 years of relevant professional experience applying data science techniques to solve business problems. This includes demonstrated hands-on proficiency with SQL and Python. Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Mathematics, Engineering, Economics). Hands-on experience in conducting statistical data analysis (EDA, forecasting, clustering, hypothesis testing, etc.) and applying machine learning techniques (Classification/Regression, NLP, time-series analysis, etc.). Technical Skills: Proficiency in SQL, including the ability to write and optimize queries for data extraction and analysis. 
Proficiency in Python for data manipulation (Pandas, NumPy), statistical analysis, and implementing Machine Learning models (Scikit-learn, TensorFlow, PyTorch, etc.). Working knowledge in a Cloud environment (GCP, AWS, or Azure) is preferred for developing and deploying models. Experience with version control systems, particularly Git. Nice to have: Exposure to Generative AI / Large Language Models (LLMs). Functional Skills: Proven ability to understand and formulate business problem statements. Ability to translate Business Problem statements into data science problems. Strong problem-solving ability, with the capacity to analyze complex issues and develop effective solutions. Excellent verbal and written communication skills, with a demonstrated ability to translate complex technical information and results into simple, understandable language for non-technical audiences. Strong business engagement skills, including the ability to build relationships, collaborate effectively with stakeholders, and contribute to data-driven decision-making.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The candidate should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Responsibilities
- Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making by the FCSD business team
- Design and implement data analysis and ML models, hypotheses, algorithms, and experiments to support data-driven decision-making
- Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends
- Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation
- Research, study, and stay updated in the domain of data science, machine learning, analytics tools and techniques, etc., and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness

Qualifications
- Master's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
- Proficient in querying and analyzing large datasets using BigQuery on GCP.
- Strong Python skills for data wrangling and automation.
- 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
- 2+ years of experience with both supervised and unsupervised machine learning techniques.
- 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or QlikSense.
- 1+ years of experience in the SQL programming language and relational databases.
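For illustration only: a minimal sketch of the BigQuery-to-Python workflow the qualifications describe, running a query on GCP and pulling the result into a pandas DataFrame. The project, dataset, table, and column names are placeholders.

```python
# Minimal sketch: query BigQuery and load the result into pandas for analysis.
# Project, dataset, and table names are placeholders; credentials come from the
# environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

sql = """
    SELECT vehicle_model, COUNT(*) AS n_claims
    FROM `my-gcp-project.warranty.claims`
    WHERE claim_date >= '2024-01-01'
    GROUP BY vehicle_model
    ORDER BY n_claims DESC
    LIMIT 20
"""

df = client.query(sql).to_dataframe()   # requires the BigQuery pandas extras
print(df.head())
```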

Posted 6 days ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Roles and Responsibilities: Lead the design, development, and implementation of AI/ML-based solutions across various product lines Collaborate with product managers, data engineers, and architects to translate business requirements into data science problems and solutions Take ownership of end-to-end AI/ML modules, from data processing to model development, testing, and deployment Provide technical leadership to a team of data scientists, ensuring high-quality outputs and adherence to best practices Conduct cutting-edge research and capability building across the latest Machine Learning, Deep Learning, and AI technologies Prepare technical documentation, including high-level and low-level design, requirement specifications, and white papers Evaluate and fine-tune models, ensuring they meet performance requirements and deliver insights that drive product improvements Production exposure to Large Language Models (LLM) and experience in implementing and optimizing LLM-based solutions Must-have Skills: 5-10 years of experience in Data Science and AI/ML product development, with a proven track record of leading technical teams Expertise in machine learning algorithms, Deep Learning models, Natural Language Processing, and Anomaly Detection Strong understanding of model lifecycle management, including model building, evaluation, and optimization Hands-on experience with Python and proficiency with frameworks like TensorFlow, Keras, PyTorch, etc Solid understanding of SQL, NoSQL databases, and data modeling with ElasticSearch experience Ability to manage multiple projects simultaneously in a fast-paced, agile environment Excellent problem-solving skills and communication abilities, particularly in documenting and presenting technical concepts Familiarity with Big Data frameworks such as Spark, Storm, Databricks, and Kafka Experience with container technologies like Docker and orchestration tools like Kubernetes, ECS, or EKS Optional (Good To Have) Skills: Experience with cloud-based machine learning platforms like AWS, Azure, or Google Cloud Experience with tools like MLFlow, KubeFlow, or similar for model tracking and orchestration Exposure to NoSQL databases such as MongoDB, Cassandra, Redis, and Cosmos DB, and familiarity with indexing mechanisms Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @ nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
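For illustration only: a minimal sketch of the anomaly-detection work named in the must-have skills, using scikit-learn's IsolationForest on synthetic metric data. All numbers are made up.

```python
# Minimal sketch: unsupervised anomaly detection with IsolationForest.
# The "audience metrics" here are synthetic; in practice they would come
# from real measurement pipelines.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100, scale=10, size=(990, 2))   # typical daily metrics
spikes = rng.normal(loc=180, scale=5, size=(10, 2))     # injected anomalies
X = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)            # -1 = anomaly, 1 = normal
print("flagged anomalies:", int((labels == -1).sum()))
```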

Posted 6 days ago

Apply

2.0 years

3 Lacs

Coimbatore

On-site


Technical Expertise (minimum 2 years' relevant experience):
● Solid understanding of Generative AI models and Natural Language Processing (NLP) techniques, including Retrieval-Augmented Generation (RAG) systems, text generation, and embedding models.
● Exposure to Agentic AI concepts, multi-agent systems, and agent development using open-source frameworks like LangGraph and LangChain.
● Hands-on experience with modality-specific encoder models (text, image, audio) for multi-modal AI applications.
● Proficient in model fine-tuning and prompt engineering, using both open-source and proprietary LLMs.
● Experience with model quantization, optimization, and conversion techniques (FP32 to INT8, ONNX, TorchScript) for efficient deployment, including edge devices.
● Deep understanding of inference pipelines, batch processing, and real-time AI deployment on both CPU and GPU.
● Strong MLOps knowledge with experience in version control, reproducible pipelines, continuous training, and model monitoring using tools like MLflow, DVC, and Kubeflow.
● Practical experience with scikit-learn, TensorFlow, and PyTorch for experimentation and production-ready AI solutions.
● Familiarity with data preprocessing, standardization, and knowledge graphs (nice to have).
● Strong analytical mindset with a passion for building robust, scalable AI solutions.
● Skilled in Python, writing clean, modular, and efficient code.
● Proficient in RESTful API development using Flask, FastAPI, etc., with integrated AI/ML inference logic.
● Experience with MySQL, MongoDB, and vector databases like FAISS, Pinecone, or Weaviate for semantic search.
● Exposure to Neo4j and graph databases for relationship-driven insights.
● Hands-on with Docker and containerization to build scalable, reproducible, and portable AI services.
● Up to date with the latest in GenAI, LLMs, Agentic AI, and deployment strategies.
● Strong communication and collaboration skills, able to contribute in cross-functional and fast-paced environments.

Bonus Skills
● Experience with cloud deployments on AWS, GCP, or Azure, including model deployment and model inferencing.
● Working knowledge of Computer Vision and real-time analytics using OpenCV, YOLO, and similar tools.

Job Type: Full-time
Pay: From ₹300,000.00 per year
Schedule: Day shift
Experience: AI Engineer: 1 year (Required)
Work Location: In person
Expected Start Date: 23/06/2025
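For illustration only: a minimal sketch of two of the optimization techniques listed above, post-training dynamic quantization of a PyTorch model to INT8 and export to ONNX. The tiny MLP stands in for a real trained network.

```python
# Minimal sketch: dynamic INT8 quantization of a PyTorch model and ONNX export.
# The tiny MLP is a stand-in for a real trained network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Post-training dynamic quantization: weights stored as INT8, Linear layers only.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

example = torch.randn(1, 128)
print(quantized(example).shape)

# Export the original FP32 model to ONNX for deployment on other runtimes / edge devices.
torch.onnx.export(
    model, example, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},
)
```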

Posted 6 days ago

Apply

0.0 - 2.0 years

0 Lacs

Mohali, Punjab

On-site


The Role- As an AI Engineer , you will be responsible for building and optimizing AI-first solutions that power BotPenguin’s conversational and Agentic capabilities. You will work on LLM integrations, NLP pipelines, and machine learning models, while collaborating with cross-functional teams to deliver intelligent experiences at scale. This is a high-impact role that combines engineering, research, and deployment skills to solve real-world problems using artificial intelligence. What you need for this role- Education: Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Experience: 2–5 years of experience working in AI/ML or related software engineering roles. Technical Skills: Strong proficiency in Python and libraries such as scikit-learn, PyTorch, TensorFlow, Transformers (Hugging Face). Hands-on experience with LLMs (OpenAI, Claude, LLaMA) and building AI agents using API integrations. Experience working with NLP tasks (intent classification, text generation, embeddings, summarization). Familiarity with Vector Databases like Pinecone, FAISS, Elastic Vector DB. Understanding of Prompt Engineering, RAG (Retrieval-Augmented Generation), and embedding generation. Proficiency in building and deploying ML models via Docker/Kubernetes or cloud services like AWS/GCP. Experience with version control systems (GitLab/GitHub) and working in Agile teams. Soft Skills: Strong analytical thinking and problem-solving capabilities. Passion for research, innovation, and applying AI to real-world use-cases. Excellent communication skills and the ability to collaborate across departments. Attention to detail with a focus on model accuracy, explainability, and performance. What you will be doing- Design, build, and optimize AI-powered chatbot features and virtual agents using state-of-the-art models. Collaborate with the Product, Backend, and UI teams to integrate intelligent workflows into the BotPenguin platform. Build, evaluate, and fine-tune language models and NLP components tailored to user use-cases. Implement context-aware chat solutions using embeddings, vector stores, and retrieval mechanisms. Create internal tools for prompt testing, versioning, and debugging AI responses. Monitor model performance metrics such as latency, hallucination rate, and user satisfaction. Explore research papers, open-source innovations, and contribute to rapid experimentation. Write clean, modular, and testable code along with clear documentation for future scalability. Any other development related tasks as required for BotPenguin. Guiding, reviewing the code written by junior members in the team. Top reasons to work with us- Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. Growth-oriented environment with ample learning opportunities. Exposure to top-tier global clients and projects with real-world impact. Flexible work hours and an emphasis on work-life balance. A culture that fosters creativity, ownership, and collaboration. Job Type: Full-time Pay: ₹300,000.00 - ₹800,000.00 per year Benefits: Flexible schedule Health insurance Provident Fund Schedule: Day shift Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Experience: AI: 2 years (Required) Work Location: In person

Posted 6 days ago

Apply

0 years

0 - 0 Lacs

Coimbatore

On-site


We are seeking a Cloud Data Engineer Intern who will be part of the Engineering team, collaborating with software development, quality assurance, and IT operations teams to build and maintain systems that collect, manage, and convert raw data into information that can be used by business analysts and data scientists. This role requires an engineer who is passionate about working with large amounts of data and analytics. We are a small team of highly skilled engineers looking forward to adding a new member who wishes to advance their career through continuous learning. Selected candidates will be an integral part of a team of passionate and enthusiastic IT professionals, and will have tremendous opportunities to contribute to the success of the products that we build.

What you will do
- Design, build, and maintain data solutions and workflows in the Cloud
- Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity
- Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization
- Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it
- Write unit/integration tests, contribute to the engineering wiki, and document work
- Perform data analysis required to troubleshoot data-related issues and assist in their resolution
- Work closely with a team of frontend and backend engineers, product managers, and analysts
- Engineer solutions using LLMs, Python, and SQL
- Resolve data problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques
- Collaborate with development, QA, and operations teams to design and implement data pipelines
- Define company data assets (data models) and jobs to populate data models
- Design data integrations and a data quality framework
- Design and evaluate open-source and vendor tools for data lineage
- Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture
- Promote knowledge-sharing activities within and across different product teams by creating and engaging in communities of practice and through documentation, training, and mentoring
- Keep skills up to date through ongoing self-directed training

What skills are required
- Ability to learn new technologies quickly
- Ability to work both independently and in collaborative teams to communicate design and build ideas effectively
- Problem-solving and critical-thinking skills, including the ability to organize, analyze, interpret, and disseminate information
- Excellent spoken and written communication skills
- Must be able to work as part of a diverse team, as well as independently
- Ability to follow departmental and organizational processes and meet established goals and deadlines
- Experience with LLMs, PyTorch, TensorFlow; prompt engineering will be a plus
- Working knowledge of Java, XML, JSON, SQL
- Knowledge of scripting and automation using Python, Bash, Perl to automate AWS tasks
- Bachelor's degree in Engineering or Master's degree in Computer Science

Note: Only candidates who graduated in 2023 or 2024 can apply for this internship.
This is an internship-to-hire position; candidates who complete the internship will be offered a full-time position based on performance.

Job Types: Full-time, Permanent, Fresher, Internship
Contract length: 6 months
Pay: ₹5,500.00 - ₹7,000.00 per month
Schedule: Day shift, Monday to Friday, morning shift
Expected Start Date: 01/07/2025

Posted 6 days ago

Apply

12.0 years

5 - 6 Lacs

Noida

On-site

GlassDoor logo

Description Job Title: Solution Architect Designation: Senior Company: Hitachi Rail GTS India Location: Noida, UP, India Salary: As per industry

Company Overview: Hitachi Rail is right at the forefront of the global mobility sector following the acquisition. The closing strengthens the company's strategic focus on helping current and potential Hitachi Rail and GTS customers through the sustainable mobility transition – the shift of people from private to sustainable public transport, driven by digitalization.

Position Overview: We are looking for a Solution Architect who will be responsible for translating business requirements into technical solutions, ensuring the architecture is scalable, secure, and aligned with enterprise standards. The Solution Architect will play a crucial role in defining the architecture and technical direction of the existing system. You will be responsible for the design, implementation, and deployment of solutions that integrate with transit infrastructure, ensuring seamless fare collection, real-time transaction processing, and enhanced user experiences. You will collaborate with development teams, stakeholders, and external partners to create scalable, secure, and highly available software solutions.

Job Roles & Responsibilities:
  • Architectural Design: Develop architectural documentation such as solution blueprints, high-level designs, and integration diagrams. Lead the design of the system's architecture, ensuring scalability, security, and high availability. Ensure the architecture aligns with the company's strategic goals and future vision for public transit technologies.
  • Technology Strategy: Select the appropriate technology stack and tools to meet both functional and non-functional requirements, considering performance, cost, and long-term sustainability.
  • System Integration: Work closely with teams to design and implement the integration of the AFC system with various third-party systems (e.g., payment gateways, backend services, cloud infrastructure).
  • API Design & Management: Define standards for APIs to ensure easy integration with external systems, such as mobile applications, ticketing systems, and payment providers.
  • Security & Compliance: Ensure that the AFC system meets the highest standards of data security, particularly for payment information, and complies with industry regulations (e.g., PCI-DSS, GDPR).
  • Stakeholder Collaboration: Act as the technical lead during project planning and discussions, ensuring the design meets customer and business needs.
  • Technical Leadership: Mentor and guide development teams through best practices in software development and architectural principles.
  • Performance Optimization: Monitor and optimize system performance to ensure the AFC system can handle high volumes of transactions without compromise.
  • Documentation & Quality Assurance: Maintain detailed architecture documentation, including design patterns, data flow, and integration points. Ensure the implementation follows best practices and quality standards.
  • Research & Innovation: Stay up to date with the latest advancements in technology and propose innovative solutions to enhance the AFC system.

Skills:
  1. Equipment Programming Languages: DotNet (C#), C/C++, Java, Python
  2. Web Development Frameworks: ASP.NET Core (C#), Angular
  3. Microservices & Architecture: Spring Cloud, Docker, Kubernetes, Istio, Apache Kafka, RabbitMQ, Consul, GraphQL
  4. Cloud Platforms: Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Kubernetes on cloud (e.g., AWS EKS, GCP GKE), Terraform (Infrastructure as Code)
  5. Databases: Relational databases (SQL), NoSQL databases, data warehousing
  6. API Technologies: SOAP/RESTful API design, GraphQL, gRPC, OpenAPI/Swagger (API documentation)
  7. Security Technologies: OAuth2/OpenID Connect (authentication & authorization), JWT (JSON Web Tokens), SSL/TLS encryption, OWASP Top 10 (security best practices), Vault (secret management), Keycloak (identity & access management)
  8. Design & Architecture Tools: UML (Unified Modeling Language), Lucidchart/Draw.io (diagramming), PlantUML (text-based UML generation), C4 Model (software architecture model), Enterprise Architect (modeling)
  9. Miscellaneous Tools & Frameworks: Apache Hadoop/Spark (big data), Elasticsearch (search engine), Apache Kafka (stream processing), TensorFlow/PyTorch (machine learning/AI), Redis (caching & pub/sub), DevOps & CI/CD tools

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Experience Required:
  • 12+ years of experience in solution architecture or software design.
  • Proven experience with enterprise architecture frameworks (e.g., TOGAF, Zachman).
  • Strong understanding of cloud platforms (AWS, Azure, or Google Cloud).
  • Experience in system integration, API design, microservices, and SOA.
  • Familiarity with data modeling and database technologies (SQL, NoSQL).
  • Strong communication and stakeholder management skills.

Preferred:
  • Certification in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
  • Experience with DevOps tools and CI/CD pipelines.
  • Knowledge of security frameworks and compliance standards (e.g., ISO 27001, GDPR).
  • Experience in Agile/Scrum environments.
  • Domain knowledge in [insert industry: e.g., finance, transportation, healthcare].

Soft Skills: Analytical and strategic thinking. Excellent problem-solving abilities. Ability to lead and mentor cross-functional teams. Strong verbal and written communication.

Posted 6 days ago

Apply

7.0 years

3 - 9 Lacs

Noida

On-site

GlassDoor logo

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
  • Work with large, diverse datasets to deliver predictive and prescriptive analytics
  • Develop innovative solutions using data modeling, machine learning, and statistical analysis
  • Design, build, and evaluate predictive and prescriptive models and algorithms
  • Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
  • Solve complex problems using data-driven approaches
  • Collaborate with cross-functional teams to align data science solutions with business goals
  • Lead AI/ML project execution to deliver measurable business value
  • Ensure data governance and maintain reusable platforms and tools
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications (Technical Skills):
  • Programming Languages: Python, R, SQL
  • Machine Learning Tools: TensorFlow, PyTorch, scikit-learn
  • Big Data Technologies: Hadoop, Spark
  • Visualization Tools: Tableau, Power BI
  • Cloud Platforms: AWS, Azure, Google Cloud
  • Data Engineering: Talend, Databricks, Snowflake, Data Factory
  • Statistical Software: R, Python libraries
  • Version Control: Git

Preferred Qualifications:
  • Master’s or PhD in Data Science, Computer Science, Statistics, or a related field
  • Certifications in data science or machine learning
  • 7+ years of experience in a senior data science role with enterprise-scale impact
  • Experience managing AI/ML projects end-to-end
  • Solid communication skills for technical and non-technical audiences
  • Demonstrated problem-solving and analytical thinking
  • Business acumen to align data science with strategic goals
  • Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #Nic

Posted 6 days ago

Apply

5.0 years

1 - 6 Lacs

Ghaziabad

Remote

GlassDoor logo

We're not your average tech company! Rightcrowd is a global leader in keeping people safe and organizations secure. We build smart solutions that manage who's on-site, what access they have, and ensure everything is compliant. Think big names – Fortune 50 and ASX 10 companies rely on us to tackle their toughest security challenges. We're a passionate, global team with offices in places like Australia, the USA, Belgium, India, and the Philippines, and we're on a mission to make the world a safer place, one clever line of code at a time.

What We Offer
  • Competitive salary and benefits.
  • A collaborative, respectful, and high-accountability team culture.
  • Opportunities for growth within a global finance team.
  • Flexibility in a remote-friendly and dynamic work environment.

Key Responsibilities:

1. Agentic AI Development & Code Generation:
  • Design, implement, and lead the development of an advanced agentic AI system that can interpret user stories and autonomously generate production-ready code.
  • Research and implement state-of-the-art techniques in LLM prompting, fine-tuning, and orchestration to achieve optimal code generation results.
  • Architect robust evaluation systems to test, validate, and improve AI-generated code.
  • Seamlessly integrate AI-generated code into existing CI/CD pipelines.
  • Collaborate with product and engineering teams to refine inputs and ensure alignment with development standards.

2. AI Recommendation Engine Architecture:
  • Architect a high-performance, scalable recommendation engine for the RightCrowd SmartAccess platform.
  • Design and implement AI-driven feature extraction pipelines.
  • Develop hybrid recommendation algorithms leveraging both collaborative filtering and content-based approaches.
  • Engineer systems for real-time inference and continuous model improvement.
  • Ensure the architecture integrates seamlessly with existing product infrastructure.

3. Natural Language Reporting System Development:
  • Lead the end-to-end development of an AI-powered reporting system that translates natural language queries to SQL (a minimal sketch of this translation step follows this listing).
  • Engineer robust NLP components to accurately understand and parse user requests.
  • Build a reliable translation layer between natural language and database queries.
  • Implement data retrieval, processing, and presentation mechanisms.
  • Design systems for generating insights and visualizations from retrieved data.
  • Ensure appropriate data security and access controls.

4. Technical Leadership & Innovation:
  • Provide architectural guidance across AI initiatives within the organization.
  • Design scalable, maintainable AI systems with consideration for cloud infrastructure and MLOps best practices.
  • Stay at the forefront of advancements in LLMs, agentic AI, and code generation technologies.
  • Mentor junior engineers and foster a culture of AI innovation.
  • Collaborate effectively with cross-functional teams.

Required Qualifications & Skills:
  • Master's or Ph.D. in Computer Science, AI, or a related field (or equivalent practical experience)
  • 5+ years of experience building production-level AI systems with a significant focus on LLMs and generative AI
  • Demonstrated expertise in Python and modern AI frameworks (PyTorch, TensorFlow, Hugging Face Transformers)
  • Extensive experience with LLM orchestration frameworks (LangChain, LangGraph, or similar)
  • Proven track record designing and implementing agentic AI systems
  • Strong proficiency in prompt engineering, fine-tuning, and optimization of LLMs
  • Expert-level SQL skills and experience with database systems
  • Experience with cloud platforms (AWS, Azure, GCP) and their AI/ML services
  • Solid understanding of software engineering principles and best practices
  • Experience with MLOps and CI/CD pipelines for AI systems

Preferred Qualifications:
  • Experience developing systems that generate production-ready code
  • Familiarity with retrieval-augmented generation (RAG) techniques
  • Experience with vector databases (Pinecone, Weaviate, etc.)
  • Knowledge of containerization technologies (Docker, Kubernetes)
  • Experience with model optimization, quantization, and efficient inference
  • Contributions to open-source AI projects or research publications
  • Familiarity with physical security or identity access management domains
  • Experience implementing AI systems with strong security and governance controls

This position offers the opportunity to work at the forefront of AI engineering, developing systems that not only generate code but also function autonomously to deliver real business value.
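
The translation step described in item 3 above can be sketched as a thin layer around an LLM call. The snippet below is a minimal illustration only: the call_llm() helper, the schema text, and the prompt wording are hypothetical placeholders for whatever LLM client and database schema the real system uses, and the only safeguard shown is a read-only check before the generated query reaches the database.

    # Minimal sketch of a natural-language-to-SQL translation layer (illustrative).
    import re

    SCHEMA = """
    Table access_events(id INTEGER, person_id INTEGER, door TEXT, ts TIMESTAMP)
    Table people(id INTEGER, name TEXT, department TEXT)
    """

    def call_llm(prompt: str) -> str:
        """Placeholder for the real LLM client (e.g., an OpenAI or Azure OpenAI call)."""
        raise NotImplementedError

    def nl_to_sql(question: str) -> str:
        prompt = (
            "You translate questions into a single read-only SQL query.\n"
            f"Schema:\n{SCHEMA}\n"
            f"Question: {question}\n"
            "Return only the SQL, with no explanation."
        )
        sql = call_llm(prompt).strip().rstrip(";")
        # Reject anything other than a single SELECT before it is executed.
        if not re.match(r"(?is)^\s*select\b", sql) or ";" in sql:
            raise ValueError(f"Refusing to run generated SQL: {sql!r}")
        return sql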

Posted 6 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview & Responsibilities
  • Develop and implement AI-driven reporting solutions to improve data analytics and business intelligence.
  • Collaborate with cross-functional teams to understand reporting requirements and translate them into technical specifications.
  • Design, develop, and maintain interactive dashboards and reports using tools like Power BI, Tableau, or similar.
  • Integrate AI models and algorithms into reporting solutions to provide predictive and prescriptive insights.
  • Optimize data models and reporting processes for performance and scalability.
  • Conduct data analysis to identify trends, patterns, and insights that can drive business decisions.
  • Ensure data accuracy, consistency, and integrity in all reporting solutions.
  • Stay updated with the latest advancements in AI and reporting technologies and apply them to improve existing solutions.
  • Provide training and support to end-users on how to use and interpret AI-driven reports.
  • Consult with the Data & Analytics team and Reporting Factory developers to build the data infrastructure needed to host and run reporting GenAI solutions.

Qualifications
  • Bachelor’s degree in Computer Science, Data Science, AI/ML, or a related field.
  • 7-9 years overall experience; 4+ years of professional experience working directly on the design, development, and rollout of AI/ML/GenAI solutions.
  • Proven experience in developing AI-driven reporting solutions.
  • Experience with AI and machine learning frameworks like TensorFlow, PyTorch, or similar.
  • Proficiency in programming languages such as Python, R, or SQL.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration skills.
  • Ability to work independently and as part of a team.
  • Experience with cloud platforms like Azure, AWS, or Google Cloud is a plus.
  • Experience or involvement in organization-wide digital transformations preferred.
  • Knowledge of natural language processing (NLP) and its application in reporting.
  • Experience with big data technologies like Hadoop, Spark, or similar.
  • Familiarity with data warehousing concepts and tools.
  • Understanding of business intelligence and data analytics best practices.

Posted 6 days ago

Apply

15.0 years

3 - 5 Lacs

Noida

On-site

GlassDoor logo

We are looking for a passionate and curious AI/ML Engineer (Fresher) to join our growing engineering team. This is a unique opportunity to work on real-world machine learning applications and contribute to building cutting-edge AI solutions.

Your Responsibilities:
  • Assist in designing, developing, and training machine learning models using structured and unstructured data
  • Collect, clean, and preprocess large datasets for model building
  • Perform exploratory data analysis and statistical modeling
  • Collaborate with senior data scientists and engineers to build scalable AI systems
  • Run experiments, tune hyperparameters, and evaluate model performance using industry-standard metrics
  • Document models, processes, and experiment results clearly and consistently
  • Support integrating AI/ML models into production environments
  • Stay updated with the latest trends and techniques in machine learning, deep learning, and AI
  • Participate in code reviews, sprint planning, and product discussions
  • Follow best practices in software development, version control, and model reproducibility

Skill Sets / Experience We Require:
  • Strong understanding of machine learning fundamentals (regression, classification, clustering, etc.)
  • Hands-on experience with Python and ML libraries such as scikit-learn, pandas, NumPy
  • Basic familiarity with deep learning frameworks like TensorFlow, PyTorch, or Keras
  • Knowledge of data preprocessing, feature engineering, and model validation techniques
  • Understanding of probability, statistics, and linear algebra
  • Familiarity with tools like Jupyter, Git, and cloud-based notebooks
  • Problem-solving mindset and eagerness to learn
  • Good communication skills and the ability to work in a team
  • Internship/project experience in AI/ML is a plus

Education:
  • B.Tech / M.Tech / M.Sc in Computer Science, Data Science, Artificial Intelligence, or a related field
  • Relevant certifications in AI/ML (Coursera, edX, etc.) are a plus

About Us: TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design thinking and bespoke development solutions. With over 15 years of proven expertise, we have partnered with Fortune 500 companies and leading global brands to drive digital innovation and deliver excellence. At TechAhead, we are committed to continuous learning, growth, and crafting tailored solutions that meet the unique needs of our clients. Join us to shape the future of digital innovation worldwide and drive impactful results with cutting-edge AI tools and strategies!

Posted 6 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview: The GenAI Technical Solutions Architect develops and implements GenAI architecture strategies, best practices, and standards to enhance AI/ML model deployment and monitoring efficiency, and develops the architecture roadmap and strategy for GenAI platforms and tech stacks. This role focuses on the technical development, deployment, code reuse, and optimization of GenAI solutions, ensuring alignment with business strategies and technological advancements.

Responsibilities:
  • Develop cutting-edge architectural strategies for GenAI components and platforms, leveraging advanced techniques such as chunking, Retrieval-Augmented Generation (RAG), AI agents, and embeddings (a minimal retrieval sketch follows this listing).
  • Balance build-versus-buy decisions, ensuring alignment with SaaS models and decision trees, particularly for the PepGenX platform. Emphasize low coupling and cohesive model development.
  • Lead working sessions for architecture alignment, pattern library development, and GenAI tools/data architect alignment; tag new components for reuse (component reuse strategy), identify patterns across use cases, and reuse components to save effort, time, and money.
  • Lead the implementation of LLM operations, focusing on optimizing model performance, scalability, and efficiency.
  • Design and implement LLM agentic processes to create autonomous AI systems capable of complex decision-making and task execution.
  • Work closely with data scientists and AI professionals to identify and pilot innovative use cases that drive digital transformation. Assess the feasibility of these use cases, aligning them with business objectives and ROI and leveraging advanced AI techniques.
  • Gather inputs from various stakeholders to align technical implementations with current and future requirements. Develop processes and products based on these inputs, incorporating state-of-the-art AI methodologies.
  • Define AI architecture and select suitable technologies, with a focus on integrating RAG systems, embedding models, and advanced LLM frameworks. Decide on optimal deployment models, ensuring seamless integration with existing data management and analytics tools.
  • Audit AI tools and practices, focusing on continuous improvement of LLM ops and agentic processes.
  • Collaborate with security and risk leaders to mitigate risks such as data poisoning and model theft, ensuring ethical AI implementation. Stay updated on AI regulations and map them to best practices in AI architecture and pipeline planning.
  • Develop expertise in ML and deep learning workflow architectures, with a focus on chunking strategies, embedding pipelines, and RAG system implementation.
  • Apply advanced software engineering and DevOps principles, utilizing tools like Git, Kubernetes, and CI/CD for efficient LLM ops.
  • Collaborate across teams to ensure AI platforms meet both business and technical requirements.
  • Spearhead the exploration and application of cutting-edge Large Language Models (LLMs) and Generative AI, including multi-modal capabilities and agentic processes.
  • Oversee MLOps, automating ML pipelines from training to deployment with a focus on RAG and embedding optimization.
  • Engage in sophisticated model development from ideation to deployment, leveraging advanced chunking and RAG techniques.
  • Effectively communicate complex analysis results to business partners and executives.
  • Proactively reduce biases in model predictions, focusing on fair and inclusive AI systems through advanced debiasing techniques in embeddings and LLM training.
  • Design efficient data pipelines to support large language model training and inference, with a focus on optimizing chunking strategies and embedding generation for RAG systems.

Experience:
  • Proven track record in shipping products and developing state-of-the-art GenAI product architecture.
  • 10+ years of experience with a strong balance of business acumen and technical expertise in AI.
  • 5+ years in building and releasing NLP/AI software, with specific experience in RAG, agent systems, and embedding models.
  • Demonstrated experience in delivering GenAI products, including multi-modal LLMs, foundation models, and agentic AI systems.
  • Deep familiarity with cloud technologies, especially Azure, and experience deploying models for large-scale inference using advanced LLM ops techniques.
  • Proficiency in PyTorch, TensorFlow, Kubernetes, Docker, LlamaIndex, LangChain, LLM, SLM, LAM, and cloud platforms, with a focus on implementing RAG and embedding pipelines.
  • Excellent communication and interpersonal skills, with a strong design capability and the ability to articulate complex AI concepts to diverse audiences.
  • Hands-on experience with chunking strategies, RAG implementation, and optimizing embedding models for various AI applications.

Qualifications:
  • Bachelor’s or master’s degree in Computer Science, Data Science, or a related technical field.
  • Demonstrated ability to translate complex technical concepts into actionable business strategies.
  • Experience in data-driven decision-making processes.
  • Strong communication skills, with the ability to collaborate effectively with both technical and non-technical stakeholders.
  • Proven track record in managing and delivering AI/ML projects, with a focus on GenAI solutions, in large-scale enterprise environments.
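
The chunking, embedding, and retrieval steps mentioned above can be illustrated with a small sketch. This is a minimal, assumption-laden example: it assumes the sentence-transformers package with the all-MiniLM-L6-v2 checkpoint and a simple fixed-size character chunker; a production RAG system would add a vector store, overlap tuning, re-ranking, and an LLM generation step over the retrieved context.

    # Minimal sketch of chunking + embedding retrieval for a RAG pipeline (illustrative).
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
        """Split a document into overlapping character windows."""
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
        chunks = [c for doc in documents for c in chunk(doc)]
        chunk_vecs = model.encode(chunks, normalize_embeddings=True)
        query_vec = model.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vecs @ query_vec          # cosine similarity (vectors are normalized)
        best = np.argsort(scores)[::-1][:top_k]
        return [chunks[i] for i in best]         # context to place in the LLM prompt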

Posted 6 days ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

At Mandrake Bioworks, we're building a new kind of biotech company - one where plants are programmable systems and AI is the design engine. Our mission is to reimagine the process of trait discovery and biological design using AI-first methods and multi-omics data. From climate-resilient crops to plants engineered for high-value compound production, our work sits at the intersection of deep-tech, biology, and planetary need. This is not just about applying machine learning to biology. It's about building novel infrastructure and models that don't exist yet, starting from first principles.

You'll be collaborating closely with a multidisciplinary team of:
  • Some of the best plant biotechnologists and geneticists in the country
  • Experienced operators in climate, biotech, and venture-backed deep-tech
  • AI/ML engineers and bioinformaticians who've built production-grade models and platforms at scale

This is a rare chance to work at the confluence of real-world biological complexity and cutting-edge AI research with a team deeply committed to impact and excellence. You'll work as a founding member of our AI team, helping architect systems that bring plant biology into the machine learning era.

Responsibilities include:
  • Build and optimize data pipelines that integrate large-scale, heterogeneous datasets from multiple sources
  • Develop custom LLM workflows and domain-specific foundation models to extract actionable insights
  • Fine-tune open-source language and graph transformer models for NER, entity resolution, and relationship extraction (a brief entity-extraction sketch follows this listing)
  • Design systems that blend prompt-engineering, retrieval augmentation, and probabilistic reasoning to manage high-noise data environments
  • Translate cutting-edge research into modular, production-ready AI tools using real-world datasets

We're looking for someone who's obsessed with systems-level thinking, loves working at the edge of what's known, and is unafraid of complexity.

Must-Haves:
  • Strong experience with Python, PyTorch/TensorFlow, and deep learning fundamentals
  • Familiarity with LLMs, Transformers, or graph-based models
  • Hands-on experience building robust data pipelines and handling multi-modal datasets
  • Passion for biology and willingness to dive deep into genomic/omics datasets
  • Perpetually curious, grounded in first-principles thinking, and unafraid to question how things are done
  • Comfortable with ambiguity, fast learning, and ownership - you care more about solving the problem than defending a method

Bonus Points:
  • Experience with scientific literature parsing, NER, or bio-NLP
  • Familiarity with omics standards (e.g., FASTA, VCF, GTF, KEGG, GO terms)
  • Exposure to working with large-scale biological or medical datasets
  • Interest in systems biology, synthetic biology, or computational genomics

What You'll Get:
  • Monthly stipend + pathway to a full-time offer
  • Ownership of core systems in a high-ambition deep-tech startup
  • Mentorship from leaders across AI, biology, and engineering
  • Opportunity to co-author papers, build open-source tools, and ship real infrastructure
  • A front-row seat to how a deep-tech moonshot is scaled toward global impact

Biology today is where the internet was in the early 2000s - fragmented, messy, and yet full of potential. At Mandrake, we're building the models, abstractions, and infrastructure to make biology programmable at scale. Your work won't sit in a research repo - it'll directly power biological discoveries, product pipelines, and planetary-scale impact.
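
The entity-extraction work described above can be approximated, before any fine-tuning, with an off-the-shelf token-classification model. The sketch below is illustrative only: it uses the Hugging Face transformers pipeline with the general-purpose dslim/bert-base-NER checkpoint as a stand-in; a real system would substitute a biomedical or plant-science model fine-tuned on domain annotations.

    # Minimal sketch of entity extraction from scientific text (illustrative checkpoint).
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="dslim/bert-base-NER",
        aggregation_strategy="simple",  # merge word pieces into whole entities
    )

    text = "Overexpression of DREB2A improves drought tolerance in Arabidopsis thaliana."
    for entity in ner(text):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))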

Posted 6 days ago

Apply

8.0 years

0 Lacs

India

Remote

Linkedin logo

Experience: 8+ years in building large-scale machine learning solutions and ML Ops practices; working with LLM APIs and serving LLMs in-house at scale.

Technical Expertise:
  • Kubernetes, RDBMS, and API-driven development
  • Model serving in low-latency, high-throughput use cases
  • Observability, data pipeline design, service scaling, and cost optimization

Code Quality: Strong emphasis on code hygiene, including review, documentation, testing, and CI/CD practices.

Programming Skills: Proficiency in Python and PyTorch; extensive experience with the scientific Python ecosystem.

Cloud Development: Proficiency in cloud-native application development.

Posted 6 days ago

Apply

Exploring Pytorch Jobs in India

Pytorch, a popular open-source machine learning library, has gained significant traction in the Indian job market. With the increasing adoption of machine learning and artificial intelligence technologies, there is a growing demand for professionals skilled in Pytorch. Job seekers looking to break into the field of machine learning can explore various opportunities in India.

Top Hiring Locations in India

  1. Bengaluru
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Delhi-NCR

These cities are known for their thriving tech ecosystems and have numerous companies actively hiring for Pytorch roles.

Average Salary Range

The salary range for Pytorch professionals in India varies by experience level. Entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with several years in the field can earn upwards of INR 15 lakhs per annum.

Career Path

In the field of Pytorch, a typical career path starts at Junior Developer, progresses to Senior Developer, and then to Tech Lead. With continuous learning and upskilling, professionals can further advance to roles such as Machine Learning Engineer, Data Scientist, or AI Researcher.

Related Skills

In addition to Pytorch expertise, professionals in this field are often expected to have knowledge or experience in the areas below (a short illustrative example follows the list):

  • Python programming
  • Deep learning concepts
  • Neural network architectures
  • Data preprocessing and manipulation
  • Model evaluation and optimization
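
The following is a minimal, self-contained PyTorch sketch that touches several of the skills above: tensor preprocessing, a small neural network, gradient-based optimization, and a quick evaluation. The tiny architecture and synthetic data are illustrative only, not a recommended setup.

    # Toy end-to-end example: preprocess data, define a model, train, evaluate.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(256, 4)                      # synthetic features
    y = (X.sum(dim=1) > 0).long()                # synthetic binary labels
    X = (X - X.mean(dim=0)) / X.std(dim=0)       # simple standardization

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):                      # short training loop
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()                          # autograd computes gradients
        optimizer.step()

    with torch.no_grad():                        # evaluation without gradient tracking
        accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
    print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2%}")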

Interview Questions

  • What is Pytorch and how does it differ from other deep learning frameworks? (basic)
  • Explain the process of backpropagation in Pytorch. (medium)
  • How would you handle overfitting in a Pytorch model? (medium)
  • What is the difference between a tensor and a variable in Pytorch? (basic)
  • Can you explain the concept of autograd in Pytorch? (medium)
  • What are the advantages of using Pytorch for deep learning projects? (basic)
  • How do you save and load a Pytorch model? (medium) (see the sketch after this list)
  • Explain the concept of transfer learning and how it is implemented in Pytorch. (advanced)
  • What is the purpose of the torch.nn module in Pytorch? (basic)
  • How would you debug a Pytorch model that is not converging during training? (medium)
  • What is the significance of torch.optim in Pytorch? (basic)
  • Explain the difference between nn.Module and nn.Sequential in Pytorch. (medium)
  • How can you deploy a Pytorch model for production use? (advanced)
  • What is the role of torch.utils.data.Dataset in Pytorch? (basic)
  • How do you handle missing values in input data when training a Pytorch model? (medium)
  • Explain the concept of a computational graph in Pytorch. (medium)
  • What is the purpose of the torch.cuda module in Pytorch? (basic)
  • How can you visualize the training process and results in Pytorch? (medium)
  • What are the different activation functions available in Pytorch and when would you use each? (advanced)
  • Can you explain the concept of batch normalization and how it improves model training in Pytorch? (medium)
  • How would you fine-tune a pre-trained model in Pytorch for a new dataset? (advanced)
  • What are some common pitfalls to avoid when working with Pytorch? (medium)
  • How do you choose the right loss function for a Pytorch model based on the problem at hand? (medium)
  • Can you walk us through a recent project where you used Pytorch and highlight any challenges you faced? (advanced)
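
As a concrete reference for the save/load question above, here is a minimal sketch of the idiomatic approach: saving the model's state_dict and restoring it into a freshly constructed module. The TinyNet class and file name are illustrative placeholders.

    # Save and load a PyTorch model via its state_dict (illustrative).
    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(8, 2)

        def forward(self, x):
            return self.fc(x)

    model = TinyNet()
    torch.save(model.state_dict(), "tinynet.pt")          # save only the parameters

    restored = TinyNet()                                   # re-create the architecture
    restored.load_state_dict(torch.load("tinynet.pt"))     # then load the weights
    restored.eval()                                        # switch to inference mode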

Closing Remark

As you prepare for Pytorch job interviews in India, make sure to brush up on your technical knowledge, showcase your practical experience, and demonstrate your problem-solving skills. With dedication and continuous learning, you can excel in the field of Pytorch and secure exciting career opportunities in the Indian job market. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies