3.0 - 5.0 years
6 - 8 Lacs
Thiruvananthapuram
On-site
Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment.
Key Responsibilities
Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction. Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams. Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations. System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads. Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution. Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards. Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value. Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems. Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions. Proficiency in using AI-copilots for Coding: Adaptation to emerging tools and knowledge of prompt engineering to effectively use AI for day-to-day coding needs.
Technical Skills
Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras. Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints. Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines. Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL). Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management. Development and optimization of data storage solutions, using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached. Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads.
Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards. Exposure to big data tools like Apache Spark and Kafka for processing datasets. Familiarity with ETL workflows and cloud data warehouses, using tools such as Airflow, dbt, BigQuery, or Snowflake. Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure). Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects. Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker. Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts.
Personal Attributes
Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework, while communicating regularly with stakeholders. Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams. Responsibility: Owns code quality and reliability, especially in production systems. Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning.
Key Skills: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs
Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus.
Package: 6-11 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift, Monday to Friday
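For readers unfamiliar with the design patterns this posting references, here is a minimal, hypothetical Python sketch of the Factory pattern applied to swapping model backends; the class and function names are invented for illustration and are not taken from the employer's codebase.

```python
from abc import ABC, abstractmethod


class Summarizer(ABC):
    """Common interface that every backend implements."""

    @abstractmethod
    def summarize(self, text: str) -> str: ...


class ExtractiveSummarizer(Summarizer):
    def summarize(self, text: str) -> str:
        # Naive placeholder: return the first sentence only.
        return text.split(".")[0] + "."


class LLMSummarizer(Summarizer):
    def summarize(self, text: str) -> str:
        # A real system would call a managed or self-hosted LLM here.
        return f"[LLM summary of {len(text)} characters]"


def summarizer_factory(kind: str) -> Summarizer:
    """Factory: callers depend on the interface, not on concrete classes."""
    backends = {"extractive": ExtractiveSummarizer, "llm": LLMSummarizer}
    if kind not in backends:
        raise ValueError(f"Unknown summarizer kind: {kind}")
    return backends[kind]()


if __name__ == "__main__":
    s = summarizer_factory("extractive")
    print(s.summarize("Factories decouple object creation from use. They also simplify testing."))
```

The same shape extends naturally to Observer or Singleton: the point the posting makes is that creation and wiring decisions stay in one place, which keeps the rest of the code reusable and easier to test.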
Posted 3 weeks ago
0 years
6 - 9 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Consultant, Senior Data Scientist! In this role, you will need a strong background in Gen AI implementations, data engineering, developing ETL processes, and utilizing machine learning tools to extract insights and drive business decisions. The Data Scientist will be responsible for analysing large datasets, developing predictive models, and communicating findings to various stakeholders.
Responsibilities
Develop and maintain machine learning models to identify patterns and trends in large datasets. Utilize Gen AI and various LLMs to design and develop production-ready use cases. Collaborate with cross-functional teams to identify business problems and develop data-driven solutions. Communicate complex data findings and insights to non-technical stakeholders in a clear and concise manner. Continuously monitor and improve the performance of existing models and processes. Stay up to date with industry trends and advancements in data science and machine learning. Design and implement data models and ETL processes to extract, transform, and load data from various sources. Good hands-on experience with AWS Bedrock models, SageMaker, Lambda, etc. Data Exploration & Preparation – Conduct exploratory data analysis and clean large datasets for modeling. Business Strategy & Decision Making – Translate data insights into actionable business strategies. Mentor Junior Data Scientists – Provide guidance and expertise to junior team members. Collaborate with Cross-Functional Teams – Work with engineers, product managers, and stakeholders to align data solutions with business goals.
Qualifications we seek in you!
Minimum Qualifications: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. Relevant years of experience in a data science or analytics role. Strong proficiency in SQL and experience with data warehousing and ETL processes. Experience with programming languages such as Python or R is a must (either one). Familiarity with machine learning tools and libraries such as Pandas, scikit-learn, and other AI libraries. Excellent knowledge of Gen AI, RAG, and LLM models, with a strong understanding of prompt engineering. Proficiency in Azure OpenAI and AWS SageMaker implementation. Good understanding of statistical techniques and advanced machine learning. Experience with data warehousing and ETL processes.
Proficiency in SQL and database management. Familiarity with cloud-based data platforms such as AWS, Azure, or Google Cloud. Experience with Azure ML Studio is desirable. Knowledge of different machine learning algorithms and their applications. Familiarity with data preprocessing and feature engineering techniques.
Preferred Qualifications/Skills: Experience with model evaluation and performance metrics. Understanding of deep learning and neural networks is a plus. Certification in AWS Machine Learning or AWS infrastructure engineering is a plus.
Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation. Make an impact – Drive change for global enterprises and solve business challenges that matter. Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 7, 2025, 7:46:19 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
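As a generic illustration of the model development and evaluation duties listed in this posting (not Genpact-specific code), the following scikit-learn sketch trains a classifier on a bundled toy dataset and reports standard metrics.

```python
# Minimal sketch of the model-development loop the role describes:
# train a classifier, evaluate it, and report standard metrics.
# The dataset here is a bundled toy dataset used purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, pred))
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
```

In practice the toy dataset would be replaced by data produced by the ETL processes the role describes, and the evaluation step would feed the monitoring loop mentioned in the responsibilities.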
Posted 3 weeks ago
6.0 - 9.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation – Senior Full Stack Developer Job Description: We are seeking a highly skilled and experienced Senior Full Stack AI Developer to lead the design, development, and implementation of innovative solutions across both Conversational AI and Generative AI domains. The ideal candidate will possess extensive expertise in full-stack development, including Python, data engineering, JavaScript, and strong proficiency in either ReactJS or Angular, coupled with Node.js. A proven track record in leveraging Generative AI for cutting-edge applications and developing sophisticated chatbot solutions is essential. Key Responsibilities: Lead the end-to-end design, development, and deployment of scalable and robust AI-powered applications, from frontend interfaces to backend services. Architect and implement Generative AI solutions, including Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and agentic frameworks. Develop and integrate complex conversational AI systems and chatbots, focusing on natural language understanding and generation. Design and implement robust backend APIs and data pipelines using Python and Node.js, ensuring high performance and scalability for AI workloads. Develop interactive, intuitive, and user-friendly front-end interfaces using either ReactJS or Angular, ensuring seamless user experiences with AI capabilities. Collaborate with AI Researchers, Data Scientists, and other stakeholders to translate complex AI models and research into deployable, production-ready applications. Write clean, maintainable, and high-quality code, conducting thorough testing, debugging, and refinement to ensure optimal performance and reliability. Stay abreast of the latest advancements and trends in Generative AI, Conversational AI, and full-stack technologies, applying them to enhance our product offerings. Provide technical leadership, mentorship, and conduct comprehensive code reviews for junior developers, fostering a culture of technical excellence. Qualifications: Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field. Minimum 6-9 years of progressive experience in full-stack software development, with at least 3-4 years specifically on AI-driven projects (Conversational AI and Generative AI). Demonstrable experience building and deploying Generative AI solutions (e.g., RAG pipelines, fine-tuning LLMs, prompt engineering, using frameworks like LangChain or LlamaIndex). Strong practical experience with conversational AI platforms and developing chatbots. Advanced proficiency in Python for AI/ML development and backend services. Advanced proficiency in JavaScript/TypeScript, with strong, demonstrable experience with either ReactJS or Angular, and server-side programming with Node.js. Proven experience with cloud platforms, preferably Azure (e.g., Azure OpenAI, Azure ML, Azure AI Search, Azure Functions, Azure Web Apps). Solid understanding and experience with database technologies, including both relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cosmos DB, Vector Databases). 
Experience with API design and development (RESTful APIs). Deep understanding of NLP, ML algorithms, and deep learning techniques relevant to AI applications. Familiarity with data preprocessing, feature engineering, and MLOps practices. Strong problem-solving, analytical, and architectural design skills. Excellent communication, collaboration, and presentation skills for effective interaction with technical and non-technical teams. Experience with Agile development methodologies and proficiency with version control systems (e.g., Git). A strong portfolio demonstrating successful full-stack projects involving Conversational AI and/or Generative AI is highly desirable. Skills: Programming Languages: Python (advanced), JavaScript (advanced), TypeScript (required for Angular). Frontend Frameworks: Strong proficiency in either ReactJS or Angular. Backend Technologies: Node.js (strong proficiency), FastAPI/Flask/Django (Python frameworks). Cloud Platforms: Azure (preferred: Azure OpenAI, Azure ML, Azure Functions, Azure App Services, Azure Blob Storage, Cosmos DB). AI/ML Concepts & Frameworks: Generative AI: LLMs, Transformers, Prompt Engineering, Retrieval-Augmented Generation (RAG). Conversational AI: NLU, NLG, Intent Recognition, Entity Extraction. Libraries/Frameworks: LangChain, LlamaIndex, Hugging Face, TensorFlow, PyTorch. Databases: Relational (SQL Server, PostgreSQL), NoSQL (MongoDB, Cosmos DB), Vector Databases (e.g., Pinecone, Azure AI Search/Cognitive Search for vector capabilities). API Development: RESTful APIs, Web Services. Data Engineering: Data pipelines, ETL/ELT concepts for AI data. DevOps & MLOps: CI/CD pipelines, Git, Docker, Kubernetes (familiarity with deployment of AI models). Soft Skills: Leadership, problem-solving, analytical thinking, communication, collaboration, adaptability. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
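Since Retrieval-Augmented Generation appears throughout this posting, here is a deliberately framework-agnostic Python sketch of the retrieve-then-generate flow; TF-IDF stands in for an embedding model and the generation step is stubbed, whereas a real build would use LangChain or LlamaIndex with a vector store and an LLM such as Azure OpenAI.

```python
# Framework-agnostic sketch of the RAG idea: embed documents, retrieve the most
# relevant ones for a query, and pass them to a generator. TF-IDF stands in for
# an embedding model and the LLM call is stubbed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Azure OpenAI provides hosted access to GPT-family models.",
    "Retrieval-Augmented Generation grounds model answers in retrieved context.",
    "Vector databases store embeddings for fast similarity search.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_matrix = vectorizer.transform(documents)


def retrieve(query: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]


def generate(query: str, context: list[str]) -> str:
    # Placeholder for an LLM call (e.g., via LangChain or an Azure OpenAI client).
    return f"Answer to '{query}' grounded in {len(context)} retrieved passages."


query = "How does RAG reduce hallucinations?"
print(generate(query, retrieve(query)))
```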
Posted 3 weeks ago
0 years
0 Lacs
Chennai
Remote
Job Summary
This internship duration is only 6 months on real-time projects. No compensation/salary/stipend. Candidates willing to work from our Chennai office only, please apply.
Strong data analytics knowledge; a data analytics course/certification is a must. You are a self-starter who takes complete ownership of your projects. Should have a strong problem-solving attitude. Ability to work well independently with minimal direction and to lead a team. Good written and verbal English communication skills. Technical expertise regarding data models, database design and development, data mining, and segmentation techniques. Must have strong programming skills, particularly in Python. Knowledge of LLM frameworks such as LangChain, LlamaIndex, OpenAGI, CrewAI, and AutoGen. Knowledge of data engineering practices: skills in data preprocessing, cleaning, transformation, etc. Should be familiar with agents, RAG (retrieval-augmented generation), CoT (chain-of-thought) prompting techniques, etc. Strong analytical skills to troubleshoot and solve complex problems that arise in AI application development.
Responsibilities and Duties
Implement and customize LLMs to build reliable LLM-based applications. Build reliable GenAI or LLM-powered applications. Manage data ingestion and maintain database integrity. Knowing how to build an agent from scratch and understanding multi-agent architecture is a must. Design and develop GenAI use cases across financial services, healthcare, and education. Implement Retrieval-Augmented Generation (RAG) techniques for enhanced LLM responses. Ensure best practices of prompt engineering, performance monitoring, evaluation of LLM responses, etc. Work with cross-functional teams for cohesive GenAI solutions. Keep abreast of the latest advancements in LLMs and generative AI.
Important Notes/Instructions
WFH is not available. Candidates willing to work from our Chennai office only, please apply. This internship duration is only 5 months on real-time projects. No salary/stipend will be provided.
Key Skills: Python, LLMs, RAG, AI Agents
Job Type: Full-time
Pay: ₹1,000.00 - ₹5,000.00 per month
Schedule: Monday to Friday
Application Question(s): Willing to work without any pay as an intern? Willing to work from the Chennai office for 5 months?
Location: Chennai, Tamil Nadu (Required)
Work Location: In person
Posted 3 weeks ago
4.0 years
8 - 12 Lacs
India
On-site
Job Description
We are seeking a skilled and passionate Machine Learning Engineer or AI Model Developer with a minimum of 4 years of hands-on experience in building, training, and deploying custom machine learning models. The ideal candidate is someone who thrives on solving real-world problems using custom-built AI models, rather than relying solely on pre-built solutions or third-party APIs.
Natural Abilities
Smart, self-motivated, responsible, out-of-the-box thinker. Detail-oriented with strong analytical skills. Great writing and communication skills.
Requirements:
• 4+ years of experience designing, developing, and deploying custom machine learning models (not just integrating APIs). Strong proficiency in Python and ML libraries such as NumPy, pandas, scikit-learn, etc. Expertise in ML frameworks like TensorFlow, PyTorch, Keras, or equivalent. Solid understanding of ML algorithms, model evaluation techniques, and feature engineering. Experience in data preprocessing, model optimization, and hyperparameter tuning. Hands-on experience with real-world dataset training and fine-tuning. Experience in using Amazon SageMaker for model development, training, deployment, and monitoring. Familiarity with other cloud-based ML platforms (AWS, GCP, or Azure) is a plus.
Responsibilities:
• Design, develop, and deploy custom machine learning models tailored to business use cases. Train, validate, and optimize models using real-world datasets and advanced techniques. Build scalable, production-ready ML pipelines from data ingestion to deployment. Leverage AWS SageMaker to streamline model training, testing, and deployment workflows. Work closely with product and engineering teams to integrate models into applications. Evaluate models using appropriate metrics and continuously improve performance. Maintain proper documentation of experiments, workflows, and outcomes. Stay up to date with the latest ML research, tools, and best practices.
Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Monday to Friday
Experience: ML/DS: 5 years (Required)
Location: Adajan, Surat, Gujarat (Preferred)
Work Location: In person
Expected Start Date: 20/07/2025
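To illustrate the hyperparameter tuning and model optimization this role emphasizes, here is a small, generic scikit-learn sketch using grid search on a bundled toy dataset; it is illustrative only and not tied to the employer's stack or SageMaker workflows.

```python
# Illustrative sketch of hyperparameter tuning with scikit-learn's GridSearchCV.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]},
    cv=5,        # 5-fold cross-validation on the training split
    n_jobs=-1,   # use all available cores
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out accuracy:", round(search.score(X_test, y_test), 3))
```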
Posted 3 weeks ago
6.0 - 9.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation – Senior Full Stack Developer Job Description: We are seeking a highly skilled and experienced Senior Full Stack AI Developer to lead the design, development, and implementation of innovative solutions across both Conversational AI and Generative AI domains. The ideal candidate will possess extensive expertise in full-stack development, including Python, data engineering, JavaScript, and strong proficiency in either ReactJS or Angular, coupled with Node.js. A proven track record in leveraging Generative AI for cutting-edge applications and developing sophisticated chatbot solutions is essential. Key Responsibilities: Lead the end-to-end design, development, and deployment of scalable and robust AI-powered applications, from frontend interfaces to backend services. Architect and implement Generative AI solutions, including Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and agentic frameworks. Develop and integrate complex conversational AI systems and chatbots, focusing on natural language understanding and generation. Design and implement robust backend APIs and data pipelines using Python and Node.js, ensuring high performance and scalability for AI workloads. Develop interactive, intuitive, and user-friendly front-end interfaces using either ReactJS or Angular, ensuring seamless user experiences with AI capabilities. Collaborate with AI Researchers, Data Scientists, and other stakeholders to translate complex AI models and research into deployable, production-ready applications. Write clean, maintainable, and high-quality code, conducting thorough testing, debugging, and refinement to ensure optimal performance and reliability. Stay abreast of the latest advancements and trends in Generative AI, Conversational AI, and full-stack technologies, applying them to enhance our product offerings. Provide technical leadership, mentorship, and conduct comprehensive code reviews for junior developers, fostering a culture of technical excellence. Qualifications: Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field. Minimum 6-9 years of progressive experience in full-stack software development, with at least 3-4 years specifically on AI-driven projects (Conversational AI and Generative AI). Demonstrable experience building and deploying Generative AI solutions (e.g., RAG pipelines, fine-tuning LLMs, prompt engineering, using frameworks like LangChain or LlamaIndex). Strong practical experience with conversational AI platforms and developing chatbots. Advanced proficiency in Python for AI/ML development and backend services. Advanced proficiency in JavaScript/TypeScript, with strong, demonstrable experience with either ReactJS or Angular, and server-side programming with Node.js. Proven experience with cloud platforms, preferably Azure (e.g., Azure OpenAI, Azure ML, Azure AI Search, Azure Functions, Azure Web Apps). Solid understanding and experience with database technologies, including both relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cosmos DB, Vector Databases). 
Experience with API design and development (RESTful APIs). Deep understanding of NLP, ML algorithms, and deep learning techniques relevant to AI applications. Familiarity with data preprocessing, feature engineering, and MLOps practices. Strong problem-solving, analytical, and architectural design skills. Excellent communication, collaboration, and presentation skills for effective interaction with technical and non-technical teams. Experience with Agile development methodologies and proficiency with version control systems (e.g., Git). A strong portfolio demonstrating successful full-stack projects involving Conversational AI and/or Generative AI is highly desirable. Skills: Programming Languages: Python (advanced), JavaScript (advanced), TypeScript (required for Angular). Frontend Frameworks: Strong proficiency in either ReactJS or Angular. Backend Technologies: Node.js (strong proficiency), FastAPI/Flask/Django (Python frameworks). Cloud Platforms: Azure (preferred: Azure OpenAI, Azure ML, Azure Functions, Azure App Services, Azure Blob Storage, Cosmos DB). AI/ML Concepts & Frameworks: Generative AI: LLMs, Transformers, Prompt Engineering, Retrieval-Augmented Generation (RAG). Conversational AI: NLU, NLG, Intent Recognition, Entity Extraction. Libraries/Frameworks: LangChain, LlamaIndex, Hugging Face, TensorFlow, PyTorch. Databases: Relational (SQL Server, PostgreSQL), NoSQL (MongoDB, Cosmos DB), Vector Databases (e.g., Pinecone, Azure AI Search/Cognitive Search for vector capabilities). API Development: RESTful APIs, Web Services. Data Engineering: Data pipelines, ETL/ELT concepts for AI data. DevOps & MLOps: CI/CD pipelines, Git, Docker, Kubernetes (familiarity with deployment of AI models). Soft Skills: Leadership, problem-solving, analytical thinking, communication, collaboration, adaptability. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
6.0 - 9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation – Senior Full Stack Developer Job Description: We are seeking a highly skilled and experienced Senior Full Stack AI Developer to lead the design, development, and implementation of innovative solutions across both Conversational AI and Generative AI domains. The ideal candidate will possess extensive expertise in full-stack development, including Python, data engineering, JavaScript, and strong proficiency in either ReactJS or Angular, coupled with Node.js. A proven track record in leveraging Generative AI for cutting-edge applications and developing sophisticated chatbot solutions is essential. Key Responsibilities: Lead the end-to-end design, development, and deployment of scalable and robust AI-powered applications, from frontend interfaces to backend services. Architect and implement Generative AI solutions, including Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and agentic frameworks. Develop and integrate complex conversational AI systems and chatbots, focusing on natural language understanding and generation. Design and implement robust backend APIs and data pipelines using Python and Node.js, ensuring high performance and scalability for AI workloads. Develop interactive, intuitive, and user-friendly front-end interfaces using either ReactJS or Angular, ensuring seamless user experiences with AI capabilities. Collaborate with AI Researchers, Data Scientists, and other stakeholders to translate complex AI models and research into deployable, production-ready applications. Write clean, maintainable, and high-quality code, conducting thorough testing, debugging, and refinement to ensure optimal performance and reliability. Stay abreast of the latest advancements and trends in Generative AI, Conversational AI, and full-stack technologies, applying them to enhance our product offerings. Provide technical leadership, mentorship, and conduct comprehensive code reviews for junior developers, fostering a culture of technical excellence. Qualifications: Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field. Minimum 6-9 years of progressive experience in full-stack software development, with at least 3-4 years specifically on AI-driven projects (Conversational AI and Generative AI). Demonstrable experience building and deploying Generative AI solutions (e.g., RAG pipelines, fine-tuning LLMs, prompt engineering, using frameworks like LangChain or LlamaIndex). Strong practical experience with conversational AI platforms and developing chatbots. Advanced proficiency in Python for AI/ML development and backend services. Advanced proficiency in JavaScript/TypeScript, with strong, demonstrable experience with either ReactJS or Angular, and server-side programming with Node.js. Proven experience with cloud platforms, preferably Azure (e.g., Azure OpenAI, Azure ML, Azure AI Search, Azure Functions, Azure Web Apps). Solid understanding and experience with database technologies, including both relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cosmos DB, Vector Databases). 
Experience with API design and development (RESTful APIs). Deep understanding of NLP, ML algorithms, and deep learning techniques relevant to AI applications. Familiarity with data preprocessing, feature engineering, and MLOps practices. Strong problem-solving, analytical, and architectural design skills. Excellent communication, collaboration, and presentation skills for effective interaction with technical and non-technical teams. Experience with Agile development methodologies and proficiency with version control systems (e.g., Git). A strong portfolio demonstrating successful full-stack projects involving Conversational AI and/or Generative AI is highly desirable. Skills: Programming Languages: Python (advanced), JavaScript (advanced), TypeScript (required for Angular). Frontend Frameworks: Strong proficiency in either ReactJS or Angular. Backend Technologies: Node.js (strong proficiency), FastAPI/Flask/Django (Python frameworks). Cloud Platforms: Azure (preferred: Azure OpenAI, Azure ML, Azure Functions, Azure App Services, Azure Blob Storage, Cosmos DB). AI/ML Concepts & Frameworks: Generative AI: LLMs, Transformers, Prompt Engineering, Retrieval-Augmented Generation (RAG). Conversational AI: NLU, NLG, Intent Recognition, Entity Extraction. Libraries/Frameworks: LangChain, LlamaIndex, Hugging Face, TensorFlow, PyTorch. Databases: Relational (SQL Server, PostgreSQL), NoSQL (MongoDB, Cosmos DB), Vector Databases (e.g., Pinecone, Azure AI Search/Cognitive Search for vector capabilities). API Development: RESTful APIs, Web Services. Data Engineering: Data pipelines, ETL/ELT concepts for AI data. DevOps & MLOps: CI/CD pipelines, Git, Docker, Kubernetes (familiarity with deployment of AI models). Soft Skills: Leadership, problem-solving, analytical thinking, communication, collaboration, adaptability. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Technical skills and competencies
Must-have technical skills / Requirements
• Overall 8+ years of experience, of which 4+ years are in AI, ML, Gen AI, and related technologies.
• Proven track record of leading and scaling AI/ML teams and initiatives.
• Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications.
• Expertise in ML frameworks such as PyTorch and/or TensorFlow.
• Experience with ONNX Runtime, model optimization, and hyperparameter tuning.
• Solid experience with DevOps, SDLC, CI/CD, and MLOps practices - DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack.
• Experience in production-level deployment of AI models at enterprise scale.
• Proficiency in data preprocessing, feature engineering, and large-scale data handling.
• Expertise in image and video processing, object detection, image segmentation, and related CV tasks.
• Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications.
• Experience with speech recognition, audio classification, and general signal processing techniques.
• Experience with RAG, vector databases, graph databases, and knowledge graphs.
• Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments. Proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI).
Soft Skills
• Strong leadership and team management skills.
• Excellent verbal and written communication skills.
• Strategic thinking and problem-solving abilities.
• Adaptability to the rapidly evolving AI/ML landscape.
• Strong collaboration and interpersonal skills.
• Ability to translate market needs into technological solutions.
• Strong understanding of industry dynamics and ability to translate market needs into technological solutions.
• Demonstrated ability to foster a culture of innovation and creative problem-solving.
Candidate Profile and Competencies
• Lead and manage the AI/ML Center of Excellence (CoE), setting strategic direction and goals.
• 100% hands-on experience is a must in the development and deployment of production-level AI models at enterprise scale (build-vs-buy decision maker).
• Drive innovation in AI/ML applications across various business domains and modalities (vision, language, audio).
• Hire, train, and manage a team of AI experts to run the CoE effectively.
• Collaborate with sales teams to identify opportunities and drive revenue growth through AI/ML solutions.
• Develop and deliver training programs to upskill teams across the organization in AI/ML technologies.
• Oversee the implementation and maintenance of best practices in MLOps, DevOps, and CI/CD for AI/ML projects.
• Continuously identify and evaluate emerging tools and technologies in the AI/ML space for potential adoption.
• Provide thought leadership through whitepapers, conference presentations, and industry engagements to position the organization as an AI/ML leader.
Experience: 8+ yrs
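One of the requirements above is ONNX Runtime and model-optimization experience; the sketch below, offered as a hedged illustration rather than a prescribed workflow, exports a tiny PyTorch model to ONNX and runs it with onnxruntime (the model shape and file name are invented for the example).

```python
# Export a small PyTorch model to ONNX, then run it with onnxruntime.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
dummy = torch.randn(1, 16)

torch.onnx.export(
    model, dummy, "tiny_classifier.onnx",
    input_names=["features"], output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}},  # allow variable batch size
)

session = ort.InferenceSession("tiny_classifier.onnx")
logits = session.run(None, {"features": np.random.randn(4, 16).astype(np.float32)})[0]
print(logits.shape)  # (4, 2)
```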
Posted 3 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Designation: ML/MLOps Engineer
Location: Noida (Sector 132)
Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.
Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
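As a minimal sketch of the end-to-end pipeline work described above (column names and data are invented, and a real deployment would sit behind Azure ML or a container on AKS rather than a local file), a preprocessing-plus-model pipeline can be packaged as a single artifact like this:

```python
# Preprocessing + model in one scikit-learn Pipeline, persisted as one artifact.
import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 40, 31, 58],
    "plan": ["basic", "pro", "basic", "enterprise"],
    "churned": [0, 1, 0, 1],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
])

pipeline = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
pipeline.fit(df[["age", "plan"]], df["churned"])

# Persist a single artifact so the serving container only needs one file.
joblib.dump(pipeline, "churn_pipeline.joblib")
print(pipeline.predict(pd.DataFrame({"age": [45], "plan": ["pro"]})))
```

Bundling preprocessing with the model keeps training and inference consistent, which is the main point of the lifecycle-automation responsibilities listed above.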
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must have skills: Large Language Models
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an AI/ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve collaborating with cross-functional teams to design and implement production-ready solutions, ensuring that the applications meet high-quality standards. You will also explore innovative approaches to integrate generative AI models into various projects, contributing to advancements in deep learning, neural networks, and other AI technologies. Your role will require a balance of technical expertise and creative problem-solving to address complex challenges in the field of artificial intelligence.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with team members to design and implement AI-driven applications.
- Analyze project requirements and develop effective strategies for AI model integration.
Professional & Technical Skills:
- Must-have skills: Proficiency in Large Language Models.
- Strong understanding of deep learning frameworks such as TensorFlow or PyTorch.
- Experience with cloud platforms and services for AI deployment.
- Familiarity with data preprocessing and feature engineering techniques.
- Knowledge of natural language processing and its applications.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Large Language Models.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 3 weeks ago
10.0 - 13.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Senior Technical Lead - Generative AI
Location: Noida
Job Type: Full-Time
Band: E3.2
Experience: 10-13 years (with at least 1 year in Generative AI)
Key Responsibilities:
Lead the design, development, and deployment of Generative AI models and solutions on cloud platforms such as Microsoft Azure, AWS, or GCP. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Drive the end-to-end lifecycle of AI/ML projects, from data collection and preprocessing to model training, evaluation, and deployment. Mentor and guide junior team members, fostering a culture of continuous learning and innovation. Stay updated with the latest advancements in Generative AI and AI/ML technologies, and integrate them into ongoing projects. Optimize and fine-tune AI models for performance, scalability, and reliability. Develop and maintain comprehensive documentation for all AI/ML projects and processes.
Qualifications:
10-13 years of overall IT experience, of which 3-4 years are hands-on AI/ML experience, with at least 1 year in Generative AI. Strong proficiency in Python programming and experience with AI/ML frameworks, models, and algorithms. Strong experience with leading LLMs, RAG, agentic automation, and agent frameworks like LangChain, LangGraph, AutoGen, and AutoGPT for Generative AI use cases. Hands-on experience in Gen AI observability and monitoring via tools like LangSmith. Experience with deep learning, CNNs, NLP, text-to-speech, speech-to-text, OCR, and image processing techniques. Experience in infrastructure-related use cases leveraging autonomous agents and/or agentic platforms. Should have hands-on experience building autonomous agents for Gen AI-related use cases. Extensive experience with cloud platforms (Microsoft Azure/AWS/GCP/IBM watsonx) and their AI/ML services. Extensive experience in prompt engineering and the ability to design and build prompt libraries for various scenarios applicable to infrastructure. Thorough understanding of Responsible AI best practices and standards, and experience implementing them in Gen AI-related projects. Proven track record of leading and delivering complex AI/ML projects. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Strong communication and leadership skills, with the ability to articulate technical concepts to non-technical stakeholders.
Preferred Skills:
Experience with natural language processing (NLP) and computer vision. Familiarity with MLOps practices and tools. Knowledge of data engineering and big data technologies.
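The prompt-library requirement above can be illustrated with a small, hypothetical Python sketch: versioned, parameterized templates kept in code so they can be reviewed and reused across Generative AI use cases. The names and prompt text are invented for the example and do not describe the employer's actual library.

```python
# Hypothetical prompt library: versioned, parameterized templates kept in code.
from string import Template

PROMPT_LIBRARY = {
    ("incident_summary", "v1"): Template(
        "You are an infrastructure assistant.\n"
        "Summarise the following incident log in 3 bullet points:\n$log"
    ),
    ("runbook_step", "v1"): Template(
        "Given the alert '$alert', propose the next remediation step "
        "and cite the runbook section you used."
    ),
}


def render(name: str, version: str, **kwargs: str) -> str:
    """Look up a template by (name, version) and fill in its parameters."""
    return PROMPT_LIBRARY[(name, version)].substitute(**kwargs)


if __name__ == "__main__":
    print(render("incident_summary", "v1", log="disk usage at 95% on node-7"))
```

Keeping prompts versioned like this makes them reviewable in the same way as code, which supports the Responsible AI and observability practices the posting mentions.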
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities: Perform high-quality mesh generation for structural analysis using Hypermesh and Patran. Develop and refine GFEM and DFEM models for various components and assemblies, ensuring mesh quality and compliance with CAE standards. Prepare models for static, dynamic, and thermal FEA simulations. Interpret and apply engineering drawings and CAD data to build accurate finite element models. Ensure mesh quality and convergence to meet analysis requirements. Ensure adherence to customer-specific FEM guidelines and industry best practices. Collaborate closely with design, simulation, and test teams to validate and iterate FE models. Document meshing processes, model assumptions, and preprocessing methodologies. Troubleshoot and resolve issues related to meshing and model setup. Familiarity with optimization techniques for FE modeling. Required Skills & Qualifications: Bachelor's or Master's degree in Mechanical, Aerospace, or related engineering field. 3 to 7 years of relevant CAE experience, with a focus on meshing and preprocessing. Strong command of Hypermesh and Patran for shell and solid meshing. Hands-on experience in GFEM and DFEM meshing techniques and standards. Solid understanding of FEA fundamentals, material properties, and boundary conditions. Familiarity with CAD software (e.g., CATIA, NX) for model extraction and cleanup. Excellent problem-solving and communication skills. Ability to manage multiple tasks and meet tight deadlines. Proactive attitude towards continuous improvement and learning. Preferred: Exposure to solver environments like OPTISTRUCT, NASTRAN, ABAQUS for FE model set-up Experience with scripting or automation (e.g., TCL, Python) in preprocessing. Knowledge of aerospace or automotive domain-specific CAE workflows.
Posted 3 weeks ago
0 years
0 Lacs
New Delhi, Delhi, India
On-site
Role Overview: With a focus on AI, you will be responsible for developing scalable AI solutions, optimizing algorithms, and collaborating with cross-functional teams to enhance our product offerings. You will leverage your expertise in Python and machine learning frameworks to build and deploy AI models that drive business outcomes.
Skills required: Flask, FastAPI, AWS Lambda, AI/ML, LLMs, CNN, YOLO, OpenCV.
Key Responsibilities:
Design, develop, and implement AI models and algorithms using Python. Collaborate with data scientists and other engineers to integrate AI capabilities into our products. Optimize and maintain existing AI systems to improve performance and accuracy. Conduct data preprocessing and feature engineering to prepare datasets for model training. Stay current with the latest advancements in AI technologies and methodologies. Write clean, maintainable, and efficient code while following best practices in software development. Participate in code reviews and contribute to team knowledge sharing.
About Us: Webmobril is one of the top-notch IT companies based in Delhi NCR, India, and also established in the US, offering exclusive and affordable web, mobile, and game app development, cyber security assessment, and digital marketing services globally. Recently we started staffing services and travel & tourism services. We are a team of experienced, dedicated, enthusiastic, innovative, and creative professionals serving a range of business goals with our advanced tools and technologies. For more detail, you can go through our company website: https://www.webmobril.com/
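Given the Flask/FastAPI and computer-vision skills listed above, here is a hedged FastAPI sketch of a model-serving endpoint; the detector call is stubbed and all names are hypothetical rather than part of the company's actual API.

```python
# Illustrative FastAPI model-serving endpoint; the model call is a placeholder.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="vision-inference-demo")


class PredictRequest(BaseModel):
    image_url: str


class PredictResponse(BaseModel):
    label: str
    confidence: float


def run_model(image_url: str) -> tuple[str, float]:
    # Placeholder for a real detector (e.g., a YOLO model loaded via OpenCV
    # or PyTorch); here we simply return a canned answer.
    return "person", 0.87


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    label, confidence = run_model(req.image_url)
    return PredictResponse(label=label, confidence=confidence)

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```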
Posted 3 weeks ago
0.0 years
0 - 0 Lacs
Chennai, Tamil Nadu
Remote
Job Summary
This internship duration is only 6 months on real-time projects. No compensation/salary/stipend. Candidates willing to work from our Chennai office only, please apply.
Strong data analytics knowledge; a data analytics course/certification is a must. You are a self-starter who takes complete ownership of your projects. Should have a strong problem-solving attitude. Ability to work well independently with minimal direction and to lead a team. Good written and verbal English communication skills. Technical expertise regarding data models, database design and development, data mining, and segmentation techniques. Must have strong programming skills, particularly in Python. Knowledge of LLM frameworks such as LangChain, LlamaIndex, OpenAGI, CrewAI, and AutoGen. Knowledge of data engineering practices: skills in data preprocessing, cleaning, transformation, etc. Should be familiar with agents, RAG (retrieval-augmented generation), CoT (chain-of-thought) prompting techniques, etc. Strong analytical skills to troubleshoot and solve complex problems that arise in AI application development.
Responsibilities and Duties
Implement and customize LLMs to build reliable LLM-based applications. Build reliable GenAI or LLM-powered applications. Manage data ingestion and maintain database integrity. Knowing how to build an agent from scratch and understanding multi-agent architecture is a must. Design and develop GenAI use cases across financial services, healthcare, and education. Implement Retrieval-Augmented Generation (RAG) techniques for enhanced LLM responses. Ensure best practices of prompt engineering, performance monitoring, evaluation of LLM responses, etc. Work with cross-functional teams for cohesive GenAI solutions. Keep abreast of the latest advancements in LLMs and generative AI.
Important Notes/Instructions
WFH is not available. Candidates willing to work from our Chennai office only, please apply. This internship duration is only 5 months on real-time projects. No salary/stipend will be provided.
Key Skills: Python, LLMs, RAG, AI Agents
Job Type: Full-time
Pay: ₹1,000.00 - ₹5,000.00 per month
Schedule: Monday to Friday
Application Question(s): Willing to work without any pay as an intern? Willing to work from the Chennai office for 5 months?
Location: Chennai, Tamil Nadu (Required)
Work Location: In person
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: NLP Engineer
Location: Hybrid (Gurgaon)
JOB DESCRIPTION
Mandatory skills: Minimum 2 years of NLP experience. Familiarity with any of the agentic frameworks (LangChain, Semantic Kernel, AutoGen, CrewAI, or similar). Comfortable integrating with REST APIs, SQL/NoSQL databases, and basic backend logic.
Good to Have: Proficiency in programming languages such as .NET or Python, with experience in NLP libraries.
Job Description:
Bachelor’s or Master’s degree in Computer Science, Linguistics, Data Science, AI, or a related field.
· 2+ years of hands-on experience working with NLP or LLMs in a production or applied research setting.
· Solid understanding of NLP concepts — embeddings, tokenization, entity recognition, summarization, etc.
· Experience with modern LLMs and foundational models (OpenAI, Hugging Face, etc.).
· Familiarity with agentic frameworks (LangChain, Semantic Kernel, AutoGen, CrewAI, or similar).
· Comfortable integrating with REST APIs, SQL/NoSQL databases, and basic backend logic.
· Experience with prompt engineering and chaining LLM calls into workflows.
· Strong problem-solving skills and the ability to architect solutions without excessive boilerplate code.
· Bonus: Experience with vector databases (e.g., Pinecone, Weaviate), RAG pipelines, or fine-tuning models.
Nice to Have:
· Knowledge of Python and working with AI SDKs.
Qualifications:
· Bachelor’s or Master’s degree in Computer Science, Linguistics, Data Science, or a related field.
· 2/3+ years of experience in natural language processing or a related field.
· Proficiency in programming languages such as .NET or Python, with experience in NLP libraries.
· Experience with text preprocessing techniques and tools.
· Familiarity with cloud platforms and services for deploying/configuring the Gen AI bot.
· Excellent problem-solving skills and attention to detail.
· Strong communication skills and the ability to work collaboratively in a team environment.
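To ground the NLP fundamentals this role lists (tokenization and entity recognition), here is a short Hugging Face sketch; it downloads public models on first run and is an illustration only, not the team's production setup.

```python
# Tokenization and named-entity recognition with Hugging Face transformers.
from transformers import AutoTokenizer, pipeline

text = "Acme Corp opened a new office in Gurgaon in March."

# Subword tokenization with a standard pretrained tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize(text))

# NER pipeline; without an explicit model it falls back to a public default.
ner = pipeline("ner", aggregation_strategy="simple")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```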
Posted 3 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description Summary
Job Description
We are the makers of possible. BD is one of the largest global medical technology companies in the world. Advancing the world of health™ is our Purpose, and it’s no small feat. It takes the imagination and passion of all of us—from design and engineering to the manufacturing and marketing of our billions of MedTech products per year—to look at the impossible and find transformative solutions that turn dreams into possibilities.
Why Join Us? A career at BD means learning and working alongside inspirational leaders and colleagues who are equally passionate and committed to fostering an inclusive, growth-centered, and rewarding culture. You will have the opportunity to help shape the trajectory of BD while leaving a legacy at the same time. To find purpose in the possibilities, we need people who can see the bigger picture, who understand the human story that underpins everything we do. We welcome people with the imagination and drive to help us reinvent the future of health. At BD, you’ll discover a culture in which you can learn, grow and thrive. And find satisfaction in doing your part to make the world a better place. Become a maker of possible with us!
Responsibilities
Reporting to the Director of People Analytics, you will work with the team to develop and maintain data visualization tools that provide insightful and actionable people and business metrics. Design and develop aesthetically pleasing, effective, and compelling dashboards using techniques for advanced analytics, interactive dashboard design, and visual best practices to convey the story within the data and generate actionable insight. Provide ongoing operational talent reports to the business with data-driven narratives. Engage with stakeholders across the People team and business to understand and define requirements and provide support for scheduled and ad hoc talent reporting needs in a timely fashion. Develop, implement, and maintain dashboards, reports, and data visualizations for primary stakeholder groups (Executive Leadership Team, Human Resources Leaders & Business Partners, etc.). Display an innovative mindset to expand the team’s capabilities by automating and streamlining current processes. Responsible for data cleansing, validation and testing, and performing analysis on large volumes of data. Demonstrate exceptional judgment and discretion when dealing with highly sensitive human capital data and high-profile business strategies. Works with project teams and associates to ensure proper identification of all impacted data elements based on change(s) being proposed.
Candidate Profile
Bachelor's degree in Engineering, Computer Science, Mathematics, or a related technical discipline. 3+ years' experience in an analytics position working with large amounts of data. Self-motivated, curious, autonomous, and willing to learn. Experience in analytics data modelling, storytelling, and visualization (Python/R, Power Query, Power BI, SQL, Visual Basic) is a must. Experience in Workday reporting, Visier, and HR data is highly desired. Carrying out preprocessing of structured and unstructured data. Ability to work with a virtual team across different time zones. Advanced skills in MS Excel, PowerPivot, Power BI, Power Platform, and DAX. Proficient in SQL Server and MS Azure. Broad industry knowledge of HR processes and data (e.g., workforce metrics, recruitment practices, compensation, performance/talent management, etc.).
People management experience would be a plus. Knowledge of ERP, business systems, HR systems, and Business Intelligence tools. The candidate should be willing to cope with some stress and able to prioritize with the help of management. Click on apply if this sounds like you! Becton, Dickinson and Company is an Equal Opportunity/Affirmative Action Employer. We do not unlawfully discriminate on the basis of race, color, religion, age, sex, creed, national origin, ancestry, citizenship status, marital or domestic or civil union status, familial status, affectional or sexual orientation, gender identity or expression, genetics, disability, military eligibility or veteran status, or any other protected status. To learn more about BD visit: https://bd.com/careers Primary Work Location: IND Gurgaon - Signature Towers B
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Skill required: Insight Engine - Analytics Insights
Designation: Analytics and Modeling Analyst
Qualifications: Any Graduation
Years of Experience: 3 to 5 years
About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com
What would you do? We are seeking a Machine Learning Data Scientist with 2 to 3 years of hands-on experience to join our growing Insights and Analytics team. The ideal candidate will be proficient in building ML models using supervised and unsupervised learning techniques, comfortable working with Python-based ML libraries, and experienced in data preparation workflows. Familiarity with Microsoft's AI/ML ecosystem is a strong advantage.
What are we looking for? Preferred (Strong Advantage): Hands-on experience with Azure Machine Learning Studio and Automated ML. Familiarity with Azure Cognitive Services for vision, language, and decision tasks. Experience working with Microsoft Fabric and Synapse ML integration.
Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 2-3 years of experience in applied machine learning and data science projects. Solid understanding of model training, validation, and deployment workflows. Experience working with version control (e.g., Git) and collaborative development environments.
Nice to Have: Familiarity with tools that help automate and manage the process of building, testing, and deploying machine learning models (MLOps and CI/CD pipelines). Basic understanding of cloud-based architecture and APIs.
Roles and Responsibilities: Develop and implement machine learning models using supervised and unsupervised techniques. Apply appropriate machine learning algorithms for prediction tasks (e.g., linear regression, decision trees, neural networks) and data segmentation (e.g., k-means clustering, hierarchical clustering) based on project needs. Perform data preprocessing, feature engineering, and model evaluation. Use Python libraries such as scikit-learn, TensorFlow, and others to build and test models. Analyse and manipulate structured datasets using notebooks (e.g., Jupyter). Collaborate with data engineers, analysts, and product teams to translate business problems into ML solutions. Document model performance, data flows, and process pipelines.
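As a hedged illustration of the data-segmentation work this role describes (not part of the posting itself), a minimal k-means sketch with scikit-learn might look like the following; the synthetic data and the feature interpretation are placeholders.

```python
# Minimal k-means segmentation sketch using scikit-learn on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))            # e.g., spend, frequency, tenure (illustrative)
X_scaled = StandardScaler().fit_transform(X)  # scale features before clustering

model = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = model.fit_predict(X_scaled)

print("Cluster sizes:", np.bincount(labels))
```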
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About KnowBe4
KnowBe4, the provider of the world's largest security awareness training and simulated phishing platform, is used by tens of thousands of organizations around the globe. KnowBe4 enables organizations to manage the ongoing problem of social engineering by helping them train employees to make smarter security decisions, every day. Fortune has ranked us as a best place to work for women, for millennials, and in technology for four years in a row! We have been certified as a "Great Place To Work" in 8 countries, plus we've earned numerous other prestigious awards, including Glassdoor's Best Places To Work. Our team values radical transparency, extreme ownership, and continuous professional development in a welcoming workplace that encourages all employees to be themselves. Whether working remotely or in-person, we strive to make every day fun and engaging; from team lunches to trivia competitions to local outings, there is always something exciting happening at KnowBe4. Please submit your resume in English.
Are you a forward-thinking data scientist, poised to lead with innovation? At KnowBe4, you'll be able to shape a career as distinctive as your expertise, supported by our global reach, inclusive ethos, and cutting-edge technology. As a Data Scientist, you'll be at the forefront of crafting impactful, data-driven solutions, collaborating with talented teams in a dynamic, fast-paced environment. Join us in creating an extraordinary path for your professional growth and making a meaningful impact in the working world. Data scientists design data modeling processes and create algorithms and predictive models to be used by software engineers for developing new and exciting products for KnowBe4's customers, alongside other engineers in a fast-paced, agile development environment.
Responsibilities
Research, design, and implement Machine Learning and Deep Learning algorithms to solve complex problems. Communicate complex concepts and statistical models to non-technical audiences through data visualizations. Perform statistical analysis and use the results to improve models. Identify opportunities and formulate data science / machine learning projects to optimize business impact. Serve as a subject matter expert in data science and analytics research, and adopt new tooling and methodologies at KnowBe4. Manage the release, maintenance, and enhancement of machine learning solutions in a production environment via multiple deployment options such as APIs, embedded software, or stand-alone applications. Advise various teams on machine learning practices and ensure the highest quality and compliance standards for ML deployments. Design and develop cyber security awareness products and features using Generative AI, machine learning, deep learning, and other data ecosystem technologies. Collaborate with cross-functional teams to identify data-related requirements, design appropriate NLP experiments, and conduct in-depth analyses to derive actionable insights from unstructured data sources. Stay updated with the latest advancements in machine learning, deep learning, and generative AI through self-learning and professional development.
Requirements
BS or equivalent plus 10 years' experience; MS or equivalent plus 5 years' experience; or Ph.D. or equivalent plus 4 years' experience. Expert-level working experience with programming languages like Python, R, and SQL. Solid understanding of statistics, probability, and machine learning. 10+ years of relevant experience in designing ML/DL/GenAI systems. Expertise in rolling out Generative AI SaaS products and features. Expertise in the AWS ecosystem. Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, classification, regression, clustering, and dimensionality reduction. Strong understanding and practical experience with deep learning frameworks such as TensorFlow or PyTorch. Ability to design, train, and optimize deep neural networks for various tasks like image recognition, natural language processing, and recommendation systems. Knowledge and experience in generative models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Ability to create and use generative models for tasks such as image generation, text generation, and data synthesis. Exposure to LLMs, Transformers, and technologies like LangChain, LlamaIndex, Pinecone, SageMaker JumpStart, ChatGPT, AWS Bedrock, and Vertex AI. Strong data manipulation skills, including data cleaning, preprocessing, and feature engineering. Experience with data manipulation libraries like Pandas. Ability to create compelling data visualizations using tools like Matplotlib or Seaborn to communicate insights effectively. Proficiency in NLP techniques for text analysis, sentiment analysis, entity recognition, and topic modeling. Strong understanding of data classification, sensitivity, PII, and personal data modeling techniques. Experience in model evaluation and validation techniques, including cross-validation, hyperparameter tuning, and performance metrics selection. Proficiency in version control systems like Git for tracking and managing code changes. Strong communication skills to convey complex findings and insights to both technical and non-technical stakeholders. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills to identify business challenges and devise data-driven solutions.
Nice To Have
Experience in designing data pipelines and products for real-world applications. Experience with modern/emerging scalable computing platforms and languages (e.g., Spark). Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks for handling large datasets.
Our Fantastic Benefits We offer company-wide bonuses based on monthly sales targets, employee referral bonuses, adoption assistance, tuition reimbursement, certification reimbursement, certification completion bonuses, and a relaxed dress code - all in a modern, high-tech, and fun work environment. For more details about our benefits in each office location, please visit www.knowbe4.com/careers/benefits. Note: An applicant assessment and background check may be part of your hiring procedure. Individuals seeking employment at KnowBe4 are considered without prejudice to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation or any other characteristic protected under applicable federal, state, or local law. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please visit www.knowbe4.com/careers/request-accommodation. No recruitment agencies, please.
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Scientist
Are you ready to embark on an exciting journey of discovery, insights and innovation in the data science space? You'll be working at the intersection of analytics and engineering to uncover valuable insights hidden within our data. With an abundance of data at your fingertips along with cutting-edge capabilities to test and learn, you can expect to develop your technical expertise and make a meaningful impact through your work. If you have the desire and tenacity to embark on a steep learning curve in an exciting and rapidly evolving field, apply now. We're offering this role at associate vice president level.
What you'll do
As a Data Scientist, you'll combine human insight and perspective with modern data science tools to gain an unparalleled understanding of our customers, and far more powerful predictive targeting and modelling. Day to day, you'll be:
Working closely with your stakeholders to understand their needs, define pain points, and identify opportunities to solve them through advanced analytics
Developing hypotheses and articulating what data needs to be collected, which analyses to run, and the approach that will deliver the most value
Using predictive analytics and artificial intelligence (AI) to extract insights from big data, including machine learning (ML) models, natural language processing (NLP), and deep learning (DL)
Building, training and monitoring ML models and writing programs that automate data collection, data processing, model building and model deployment
Bringing solutions and opportunities to life through clearly conveying the meaning of results with impactful data visualisation and storytelling, and providing actionable insights to decision-makers and stakeholders at every level of technical understanding
The skills you'll need
If you have an aptitude for advanced mathematics and statistics and you're curious about the evolving role of data in shaping the next generation of financial services, this could be the job for you. You'll demonstrate:
Strong programming skills in Python and SQL, with a solid foundation in statistics, machine learning, data preprocessing, and data visualization using tools like Matplotlib, Seaborn, and Plotly
Hands-on experience with large language models like OpenAI, Hugging Face, prompt engineering, and familiarity with agentic frameworks such as LangChain or AutoGen
Skill in deploying models and working with cloud platforms like AWS, GCP, or Azure, with strong analytical thinking and the ability to communicate insights effectively
Knowledge of statistical modelling and ML techniques
Proficiency in Tableau or another data visualisation tool
Posted 3 weeks ago
2.0 years
0 Lacs
Delhi, India
On-site
This role is for one of Weekday's clients Min Experience: 2 years Location: gurugram, NCR, Delhi, NOIDA, Uttar Pradesh JobType: full-time Requirements About the Role: We are seeking a passionate and skilled AI/ML Engineer with 2+ years of experience to join our growing technology team. The ideal candidate will have a strong foundation in machine learning, data processing, and model deployment. You will work on developing and implementing cutting-edge AI and ML models to solve real-world problems and contribute to the development of intelligent systems across our product suite. You'll collaborate with cross-functional teams including data scientists, software engineers, and product managers to build scalable and robust ML-powered applications. Key Responsibilities: 🔹 Model Development & Deployment Design, build, and train machine learning models to support core product features. Experiment with supervised, unsupervised, and deep learning algorithms to solve business challenges. Deploy models into production environments and monitor their performance. 🔹 Data Handling & Feature Engineering Collect, clean, preprocess, and analyze large volumes of structured and unstructured data. Engineer features that enhance model performance and align with product requirements. Ensure data quality and consistency across the pipeline. 🔹 Model Optimization Evaluate model performance using relevant metrics (precision, recall, F1-score, ROC-AUC, etc.). Optimize models for speed and accuracy using hyperparameter tuning, ensemble methods, or transfer learning. 🔹 Collaboration & Integration Collaborate with backend and frontend teams to integrate AI/ML capabilities into products. Build APIs and services that expose ML models for consumption across applications. Work closely with data engineers and DevOps to deploy and scale ML pipelines. 🔹 Research & Innovation Stay updated on the latest trends, tools, and frameworks in AI/ML. Prototype and test innovative AI solutions, including NLP, computer vision, recommendation systems, or time series forecasting. Document methodologies, findings, and technical processes clearly. Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or related field. 2+ years of hands-on experience in AI/ML model development and deployment. Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, or similar. Experience with data manipulation tools like Pandas, NumPy, and visualization libraries such as Matplotlib or Seaborn. Understanding of ML lifecycle, including data preprocessing, model building, evaluation, and deployment. Familiarity with cloud platforms (AWS, GCP, Azure) and container technologies like Docker is a plus. Knowledge of REST APIs and integration of ML models with applications. Excellent problem-solving and analytical skills. Strong communication skills and ability to explain complex ML concepts to non-technical stakeholders. Preferred (Nice to Have): Experience with Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning. Exposure to MLOps, model monitoring, or CI/CD pipelines. Familiarity with data versioning tools like DVC, MLflow, or Kubeflow
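The posting above lists model evaluation with metrics such as precision, recall, F1-score, and ROC-AUC. For illustration only, that step can be sketched with scikit-learn as below; the labels and scores are made-up placeholder values, not results from any real model.

```python
# Hedged sketch of a binary-classification evaluation step with scikit-learn.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                         # ground-truth labels (illustrative)
y_prob = [0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55]        # model scores (illustrative)
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]           # threshold at 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("roc_auc:  ", roc_auc_score(y_true, y_prob))        # AUC uses raw scores, not labels
```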
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Location: Remote Type: Full-Time Experience: 3–5 Years Compensation: Equity + Fixed About Us We are an early-stage startup on a mission to revolutionize the way Architects think and curate plans using cutting-edge AI. Backed by alums from top IITs/IIMs, we're building a world-class product that leverages image processing, computer vision, and generative AI to solve real-world problems at scale. We're looking for a Founding Tech Engineer to join us at ground zero—someone who’s excited to build, iterate, and grow with us. What You'll Do Own end-to-end development of core AI/ML pipelines focused on image understanding and generation. Architect and implement scalable systems for image data ingestion, preprocessing, and model inference. Develop and fine-tune computer vision and generative models (e.g., GANs, diffusion models). Collaborate closely with founders to shape product direction and technical roadmap. Drive key technical decisions related to infrastructure, ML model deployment, and data pipelines. Rapidly prototype and test new ideas in the generative AI space. What We’re Looking For Must-Haves 3–5 years of experience in AI/ML, with strong focus on computer vision and image-based applications. Hands-on experience with generative AI techniques (GANs, VAEs, Diffusion Models, etc.). Proficient in Python and ML frameworks like PyTorch or TensorFlow. Strong grasp of image processing techniques (segmentation, detection, enhancement, etc.). Experience in training and deploying models in production. Ability to work independently and take ownership in a fast-paced startup environment. Nice-to-Haves Familiarity with multi-modal models or foundational models (e.g., CLIP, DALL·E, Stable Diffusion). Exposure to cloud infrastructure (AWS/GCP) and MLOps practices. Experience with edge deployment or mobile computer vision pipelines. Contributions to open-source AI projects or publications in relevant fields. What You’ll Get Founding-level equity and real ownership in a high-potential startup. Direct influence over product, tech stack, and team culture. Opportunity to build novel AI-driven solutions from the ground up. A collaborative, high-learning environment.
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This role is for one of Weekday's clients Min Experience: 2 years Location: gurugram, NCR, Delhi, NOIDA, Uttar Pradesh JobType: full-time Requirements About the Role: We are seeking a passionate and skilled AI/ML Engineer with 2+ years of experience to join our growing technology team. The ideal candidate will have a strong foundation in machine learning, data processing, and model deployment. You will work on developing and implementing cutting-edge AI and ML models to solve real-world problems and contribute to the development of intelligent systems across our product suite. You'll collaborate with cross-functional teams including data scientists, software engineers, and product managers to build scalable and robust ML-powered applications. Key Responsibilities: 🔹 Model Development & Deployment Design, build, and train machine learning models to support core product features. Experiment with supervised, unsupervised, and deep learning algorithms to solve business challenges. Deploy models into production environments and monitor their performance. 🔹 Data Handling & Feature Engineering Collect, clean, preprocess, and analyze large volumes of structured and unstructured data. Engineer features that enhance model performance and align with product requirements. Ensure data quality and consistency across the pipeline. 🔹 Model Optimization Evaluate model performance using relevant metrics (precision, recall, F1-score, ROC-AUC, etc.). Optimize models for speed and accuracy using hyperparameter tuning, ensemble methods, or transfer learning. 🔹 Collaboration & Integration Collaborate with backend and frontend teams to integrate AI/ML capabilities into products. Build APIs and services that expose ML models for consumption across applications. Work closely with data engineers and DevOps to deploy and scale ML pipelines. 🔹 Research & Innovation Stay updated on the latest trends, tools, and frameworks in AI/ML. Prototype and test innovative AI solutions, including NLP, computer vision, recommendation systems, or time series forecasting. Document methodologies, findings, and technical processes clearly. Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or related field. 2+ years of hands-on experience in AI/ML model development and deployment. Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, or similar. Experience with data manipulation tools like Pandas, NumPy, and visualization libraries such as Matplotlib or Seaborn. Understanding of ML lifecycle, including data preprocessing, model building, evaluation, and deployment. Familiarity with cloud platforms (AWS, GCP, Azure) and container technologies like Docker is a plus. Knowledge of REST APIs and integration of ML models with applications. Excellent problem-solving and analytical skills. Strong communication skills and ability to explain complex ML concepts to non-technical stakeholders. Preferred (Nice to Have): Experience with Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning. Exposure to MLOps, model monitoring, or CI/CD pipelines. Familiarity with data versioning tools like DVC, MLflow, or Kubeflow
Posted 3 weeks ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
This role is for one of Weekday's clients Min Experience: 2 years Location: gurugram, NCR, Delhi, NOIDA, Uttar Pradesh JobType: full-time Requirements About the Role: We are seeking a passionate and skilled AI/ML Engineer with 2+ years of experience to join our growing technology team. The ideal candidate will have a strong foundation in machine learning, data processing, and model deployment. You will work on developing and implementing cutting-edge AI and ML models to solve real-world problems and contribute to the development of intelligent systems across our product suite. You'll collaborate with cross-functional teams including data scientists, software engineers, and product managers to build scalable and robust ML-powered applications. Key Responsibilities: 🔹 Model Development & Deployment Design, build, and train machine learning models to support core product features. Experiment with supervised, unsupervised, and deep learning algorithms to solve business challenges. Deploy models into production environments and monitor their performance. 🔹 Data Handling & Feature Engineering Collect, clean, preprocess, and analyze large volumes of structured and unstructured data. Engineer features that enhance model performance and align with product requirements. Ensure data quality and consistency across the pipeline. 🔹 Model Optimization Evaluate model performance using relevant metrics (precision, recall, F1-score, ROC-AUC, etc.). Optimize models for speed and accuracy using hyperparameter tuning, ensemble methods, or transfer learning. 🔹 Collaboration & Integration Collaborate with backend and frontend teams to integrate AI/ML capabilities into products. Build APIs and services that expose ML models for consumption across applications. Work closely with data engineers and DevOps to deploy and scale ML pipelines. 🔹 Research & Innovation Stay updated on the latest trends, tools, and frameworks in AI/ML. Prototype and test innovative AI solutions, including NLP, computer vision, recommendation systems, or time series forecasting. Document methodologies, findings, and technical processes clearly. Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or related field. 2+ years of hands-on experience in AI/ML model development and deployment. Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, or similar. Experience with data manipulation tools like Pandas, NumPy, and visualization libraries such as Matplotlib or Seaborn. Understanding of ML lifecycle, including data preprocessing, model building, evaluation, and deployment. Familiarity with cloud platforms (AWS, GCP, Azure) and container technologies like Docker is a plus. Knowledge of REST APIs and integration of ML models with applications. Excellent problem-solving and analytical skills. Strong communication skills and ability to explain complex ML concepts to non-technical stakeholders. Preferred (Nice to Have): Experience with Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning. Exposure to MLOps, model monitoring, or CI/CD pipelines. Familiarity with data versioning tools like DVC, MLflow, or Kubeflow
Posted 3 weeks ago
0 years
8 - 16 Lacs
Saket
On-site
We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI, building scalable models and intelligent systems that power our next-generation sports tech platform.
Requirements: Strong command of Python and familiarity with C/C++. Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e.g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems. An undergraduate degree (Master's or PhD preferred) in Computer Science, Artificial Intelligence, or a related discipline is preferred. A strong academic background is a plus.
Responsibilities: Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding. Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4, etc.) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base.
Job Type: Full-time Pay: ₹800,000.00 - ₹1,600,000.00 per year Schedule: Day shift Ability to commute/relocate: Saket, Delhi, Delhi: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Work Location: In person
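As an illustrative sketch of the video-ingestion and inference loop this role describes, the snippet below reads frames with OpenCV, preprocesses them, and hands them to a detector. The RTSP URL and `run_detector` are hypothetical placeholders standing in for a real stream and a YOLO-style model, not a specific production pipeline.

```python
# Hedged sketch of a frame-ingestion loop: capture, resize, normalize, detect.
import cv2
import numpy as np

def run_detector(frame: np.ndarray) -> list:
    """Hypothetical stand-in for a real object-detection model call."""
    return []  # replace with model inference (e.g., a YOLO forward pass)

cap = cv2.VideoCapture("rtsp://example.local/stream")  # placeholder stream URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (640, 640))            # detector input size
    normalized = resized.astype(np.float32) / 255.0     # scale pixels to [0, 1]
    detections = run_detector(normalized)
    # ... hand detections off to tracking / analytics here
cap.release()
```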
Posted 3 weeks ago
3.0 - 4.0 years
7 - 10 Lacs
India
On-site
Job Description: We are looking for a passionate and skilled AI Developer with 3–4 years of hands-on experience to join our dynamic team. The ideal candidate must have a strong foundation in Python and a proven track record of developing and deploying AI/ML solutions. You will be responsible for designing intelligent systems, training models, and collaborating with cross-functional teams to implement AI-driven features in our products. Key Responsibilities: Design, develop, and deploy machine learning and deep learning models. Collaborate with data scientists and software engineers to build AI-powered applications. Perform data wrangling, preprocessing, and feature engineering on large datasets. Evaluate model performance using appropriate metrics and optimize accordingly. Integrate AI models into production using APIs or ML frameworks. Research and implement the latest AI technologies and best practices. Maintain and improve existing AI systems for performance and scalability. Document solutions and write clean, maintainable code. Required Skills & Qualifications: 3–4 years of experience in AI/ML development. Strong proficiency in Python and its AI/ML libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch). Solid understanding of machine learning algorithms, data structures, and OOP principles. Experience with NLP, computer vision, or generative AI is a plus. Familiarity with model deployment frameworks (e.g., Flask, FastAPI, Docker). Experience with version control systems like Git. Good knowledge of databases (SQL/NoSQL) and data pipelines. Excellent problem-solving skills and attention to detail. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related field. Experience with cloud platforms like AWS, Azure, or GCP. Understanding of MLOps concepts and tools (e.g., MLflow, Kubeflow). Exposure to agile development environments. Job Type: Full-time Pay: ₹65,000.00 - ₹85,000.00 per month Benefits: Flexible schedule Location Type: In-person Schedule: Day shift Work Location: In person
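The posting above mentions integrating AI models into production with deployment frameworks such as Flask, FastAPI, or Docker. A minimal, hedged FastAPI sketch of that integration step is shown below; `predict_fn` and the endpoint shape are hypothetical stand-ins for a real trained model and API contract.

```python
# Minimal FastAPI sketch for serving a model behind a /predict endpoint.
# Run with: uvicorn app:app --reload  (assuming this file is app.py)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

def predict_fn(features: list[float]) -> float:
    """Hypothetical stand-in for a trained model's predict call."""
    return sum(features) / max(len(features), 1)

@app.post("/predict")
def predict(req: PredictRequest):
    return {"prediction": predict_fn(req.features)}
```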
Posted 3 weeks ago