
1992 Preprocessing Jobs - Page 40

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

ABOUT US: The vision from the start has been to create a state-of-the-art workplace infrastructure, with all the tools for employees and clients in place; it is this that makes Bytes Technolab a growth hacker. This has helped the dev team adapt to existing and upcoming technologies and platforms to create top-notch software solutions for businesses, startups, and enterprises. Our core value lies in 100% integrity in communication, workflow, methodology, and flexible collaboration. With a client-first approach, we offer flexible engagement models that help our clients in the best way possible. Bytes Technolab is confident that this approach helps us develop user-centric, applicable, advanced, secure, and scalable software solutions. Our team is fully committed to adding value at every stage of your journey with us, from initial engagement to delivery and beyond.

Role Description:
- 3+ years of professional experience in Machine Learning and Artificial Intelligence.
- Strong proficiency in Python programming and its libraries for ML and AI (NumPy, Pandas, scikit-learn, etc.).
- Hands-on experience with ML/AI frameworks like PyTorch, TensorFlow, Keras, FaceNet, OpenCV, and other relevant libraries.
- Proven ability to work with GPU acceleration for deep learning model development and optimization (using CUDA, cuDNN).
- Strong understanding of neural networks, computer vision, and other AI technologies.
- Solid experience working with Large Language Models (LLMs) such as GPT, BERT, and LLaMA, including fine-tuning, prompt engineering, and embedding-based retrieval (RAG).
- Working knowledge of agentic architectures, including designing and implementing LLM-powered agents with planning, memory, and tool-use capabilities.
- Familiarity with frameworks like LangChain, AutoGPT, BabyAGI, and custom agent orchestration pipelines.
- Solid problem-solving skills and the ability to translate business requirements into ML/AI/LLM solutions.
- Experience in deploying ML/AI models on cloud platforms (AWS SageMaker, Azure ML, Google AI Platform).
- Proficiency in building and managing ETL pipelines, data preprocessing, and feature engineering.
- Experience with MLOps tools and frameworks such as MLflow, Kubeflow, or TensorFlow Extended (TFX).
- Expertise in optimizing ML/AI models for performance and scalability across diverse hardware architectures.
- Experience with Natural Language Processing (NLP) and foundational knowledge of Reinforcement Learning.
- Familiarity with data versioning tools like DVC or Delta Lake.
- Skilled in containerization and orchestration tools such as Docker and Kubernetes for scalable deployments.
- Proficient in model evaluation, A/B testing, and establishing continuous training pipelines.
- Experience working in Agile/Scrum environments with cross-functional teams.
- Strong understanding of ethical AI principles, model fairness, and bias mitigation techniques.
- Familiarity with CI/CD pipelines for machine learning workflows.
- Ability to communicate complex ML, AI, and LLM/agentic concepts to both technical and non-technical stakeholders.

We are hiring professionals with 3+ years of experience in IT Services. Kindly share your updated CV at freny.darji@bytestechnolab.com
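The requirements above mention embedding-based retrieval (RAG). For orientation only, here is a minimal sketch of the retrieval step: documents are ranked by cosine similarity between precomputed embedding vectors. The document texts, dimensions, and random vectors are placeholders for real embeddings from whatever provider a pipeline actually uses; this is not any specific employer's implementation.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of document vectors."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return d @ q

def retrieve_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, docs: list, k: int = 3) -> list:
    """Return the k documents whose embeddings are closest to the query embedding."""
    scores = cosine_similarity(query_vec, doc_vecs)
    top_idx = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top_idx]

# Toy example: random vectors stand in for real sentence embeddings (e.g. 384-dimensional).
rng = np.random.default_rng(0)
documents = ["refund policy", "shipping times", "warranty terms"]
doc_embeddings = rng.normal(size=(len(documents), 384))
query_embedding = doc_embeddings[1] + 0.05 * rng.normal(size=384)  # a query "near" document 1

print(retrieve_top_k(query_embedding, doc_embeddings, documents, k=2))
# The retrieved snippets would then be passed to the LLM as grounding context.
```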

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Position: We are looking for a dynamic and experienced Data Scientist to join our growing AI & Data Science team. In this role, you will lead projects and collaborate with cross-functional teams to build and deploy cutting-edge machine learning solutions that drive business value.

Technical and Professional Requirements:
- Minimum 5+ years of experience in data science, with at least 2 years in a leadership or team management role.
- Proficiency in Natural Language Processing (NLP) techniques such as LDA, embeddings, and Retrieval-Augmented Generation (RAG).
- Experience in time series forecasting, statistical modeling, and predictive analytics.
- Hands-on experience with Databricks, the Azure ML stack, Python, and Django.
- Solid understanding of end-to-end data science workflows, from data engineering to model deployment.
- Ability to collaborate across diverse teams and communicate technical concepts clearly to stakeholders.

Job Responsibilities:
- Build and deploy end-to-end ML pipelines, including data preprocessing, model training, and production deployment.
- Implement supervised, unsupervised, and deep learning models using frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Collaborate with data scientists and engineers to integrate ML models into production systems.
- Use MLOps tools like MLflow or Kubeflow for CI/CD and model lifecycle management.
- Monitor and optimize deployed models for performance and scalability.
- Stay updated on ML advancements and contribute to innovation within the team.

Educational Requirements: Minimum 60% in any two of the following: Secondary, Higher Secondary, Graduation.
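The responsibilities above describe end-to-end ML pipelines that bundle preprocessing with model training. As a small illustration of that pattern (not this employer's code), here is a scikit-learn Pipeline sketch on a synthetic dataset; the column names and target are made up for the example.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic data; in practice this would come from a warehouse or feature store.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, 500),
    "monthly_spend": rng.normal(50, 15, 500),
    "plan": rng.choice(["basic", "pro", "enterprise"], 500),
    "churned": rng.integers(0, 2, 500),
})

numeric = ["tenure_months", "monthly_spend"]
categorical = ["plan"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),                          # scale numeric features
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),  # encode categoricals
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["churned"], test_size=0.2, random_state=0
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Because preprocessing lives inside the pipeline object, the same artifact can be serialized and served without retraining the transformers separately.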

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bengaluru
Experience: 2 - 4 yrs
Technologies / Skills: Python, machine learning, NumPy, Pandas, Scikit-Learn, PySpark, TensorFlow or PyTorch

About the Role: We are looking for an enthusiastic Data Scientist to join our team based in Bangalore. You will be pivotal in developing, deploying, and optimizing recommendation models that significantly enhance user experience and engagement. Your work will directly influence how customers interact with products, driving personalization and conversion.

Responsibilities:
- Model Development: Design, build, and fine-tune machine learning models focused on personalized recommendations to boost user engagement and retention.
- Data Analysis: Perform comprehensive analysis of user behavior, interactions, and purchasing patterns to generate actionable insights.
- Algorithm Optimization: Continuously improve recommendation algorithms by experimenting with new techniques and leveraging state-of-the-art methodologies.
- Deployment & Monitoring: Deploy machine learning models into production environments, and develop tools for continuous performance monitoring and optimization.

Education level: Bachelor's degree (B.E. / B.Tech) in Computer Science or equivalent from a reputed institute.

Technical Expertise:
- Strong foundation in Statistics, Probability, and core Machine Learning concepts.
- Hands-on experience developing recommendation algorithms, including collaborative filtering, content-based filtering, matrix factorization, or deep learning approaches.
- Proficiency in Python and associated libraries (NumPy, Pandas, Scikit-Learn, PySpark).
- Experience with TensorFlow or PyTorch frameworks and familiarity with recommendation system libraries (e.g., torch-rec).
- Solid understanding of Big Data technologies and tools (Hadoop, Spark, SQL).
- Familiarity with the full Data Science lifecycle, from data collection and preprocessing to model deployment.

About Oneture Technologies: Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of digital technologies and data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results, from ideas to reality, for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions, from ideation, project inception, and planning through deployment to ongoing support and maintenance. Our core competencies and technical expertise include cloud-powered product engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners and our "startup-like agility with enterprise-like maturity" philosophy have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.
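The posting above lists matrix factorization among the expected recommendation techniques. Purely as an illustration of that idea, the sketch below factorizes a toy user-item matrix with TruncatedSVD and scores unseen items for one user; the ratings are invented and a production system would use a dedicated library and implicit-feedback handling.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Toy user-item interaction matrix (rows = users, columns = items); 0 means no interaction.
ratings = np.array([
    [5, 4, 0, 0, 1],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 0],
    [0, 0, 4, 5, 3],
], dtype=float)

# Factorize into low-rank user and item representations.
svd = TruncatedSVD(n_components=2, random_state=0)
user_factors = svd.fit_transform(ratings)   # shape (n_users, 2)
item_factors = svd.components_.T            # shape (n_items, 2)

# Reconstructed scores approximate preferences, including for items the user never saw.
scores = user_factors @ item_factors.T
user_id = 0
unseen = np.where(ratings[user_id] == 0)[0]
recommended = unseen[np.argsort(scores[user_id, unseen])[::-1]]
print("Recommend items (best first):", recommended.tolist())
```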

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Our Team as an AI Engineer! Are you a tech enthusiast with a passion for innovation? Do you excel at designing and developing state-of-the-art AI digital solutions within the Microsoft ecosystem including Dynamics 365 ERP, Azure, and Copilot AI ? If so, we want you to join us in our mission to elevate our digital solutions to the next level. About Us At STAEDEAN , we are motivated by a simple yet impactful mission: to empower our customers by solving complex business challenges with seamless digital solutions. Trusted by over 2,000 customers worldwide, we are an enthusiastic and tech-savvy team dedicated to driving innovation at every step. We do not just offer jobs, we offer opportunities to gain experience, make a meaningful impact, and be part of something extraordinary. Join us and help shape the future of digital solutions while taking your architecture skills to new heights. Why Work for Us? Join a team where innovation thrives and every voice counts. At STAEDEAN , we foster a dynamic environment that prioritizes well-being, collaboration, and career growth. With a hybrid workplace, mental health support, and diverse international teams, you will find the perfect balance of creativity and support. Your Role As an AI/ML Engineer, you will be a core contributor to the implementation of intelligent, production-ready solutions that integrate seamlessly with Microsoft platforms. Working closely with our AI Architect, Data Scientist, and Product Owners, you will bring AI concepts to life—building robust pipelines, interfaces, and integrations for ERP and business applications powered by large language models (LLMs), Azure AI, and Copilot. Your responsibilities will include: Implement intelligent AI-driven solutions, including LLM-powered agents, chat interfaces, and decision-support tools. Integrate AI capabilities with Microsoft platforms including Azure AI, Azure ML, Power Platform, and Dataverse. Enhance Microsoft Dynamics 365 ERP (Finance & Supply Chain) with embedded AI features and Copilot experiences. Build scalable, modular data pipelines on Azure using e.g. Data Factory, Synapse Analytics, and other Microsoft integration tools. Design and maintain reusable AI components (e.g., prompt templates, embeddings, RAG pipelines) Automate data collection, preprocessing, evaluation, and retraining workflows. Assist with monitoring, evaluation, and optimization of AI models in production environments Write clean, maintainable code and contribute to shared AI engineering infrastructure. 
Collaborate cross-functionally to deliver AI functionality as part of larger product solutions What You Need To Succeed Proven experience in AI/ML applications, ideally in enterprise or ERP settings Hands-on experience with Azure AI services, Copilot, and ERP systems, preferably Microsoft Dynamics 365 or similar platforms Familiarity with Power Platform, Power BI, and Dataverse Strong Python skills for backend logic, data processing, and model orchestration Experience building modular pipelines, APIs, and workflows in cloud environments Understanding of prompt engineering, RAG (Retrieval-Augmented Generation) and fine-tuning, and LLM evaluation best practices Ability to work independently and take ownership of projects while meeting deadlines Strong collaboration and communication skills—you can align with architects, developers, and business stakeholders Bonus: Experience with MLOps, DevOps, CI/CD, and monitoring tools Why You Should Apply Be Part of a Dynamic Community: Our supportive and vibrant environment ensures your contributions truly matter. You'll work with passionate professionals who are dedicated to making a difference. Drive Innovation and Excellence: As a STAEDEAN, you’ll be at the forefront of innovation, developing solutions that transform industries and drive sustainable impact. Grow and Thrive: We are committed to fostering a culture of continuous improvement and shared success. Whether you're an experienced professional or just starting your career, you'll find ample opportunities to develop your skills, take on new challenges, and grow. Make a Meaningful Impact: Your work at STAEDEAN will have a real impact on our customers, partners, and the world. Together, we strive to achieve extraordinary things, pushing the boundaries to create a better future. If you are ready to take on exciting challenges in a fast-paced, innovative environment, STAEDEAN is the place for you. Together, we will shape the future of technology and revolutionize business transformation. Join us, make an impact, and become part of a forward-thinking team.
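The role above mentions building reusable AI components such as prompt templates that wrap retrieved context. As a rough sketch of what such a component can look like in plain Python (the template text, context snippets, and class name are all hypothetical, and the rendered string would be sent to whichever LLM endpoint the stack uses):

```python
from dataclasses import dataclass, field
from string import Template

@dataclass
class PromptTemplate:
    """A reusable prompt component: a template plus the context snippets that ground it."""
    template: str
    context: list = field(default_factory=list)

    def render(self, question: str) -> str:
        joined = "\n- ".join(self.context) if self.context else "(no context retrieved)"
        return Template(self.template).substitute(context=joined, question=question)

grounded_answer = PromptTemplate(
    template=(
        "Answer the user's question using only the context below.\n"
        "Context:\n- $context\n\nQuestion: $question\nAnswer:"
    ),
    context=[
        "Purchase orders over 10k EUR require approval.",   # illustrative facts only
        "Approvals are logged in the ERP system.",
    ],
)

prompt = grounded_answer.render("Who has to approve a 12k EUR purchase order?")
print(prompt)
```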

Posted 1 month ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: AI/ML Engineer (Entry-Level)
Experience: 0-1 yrs (in Data Science/AI/ML)
Location: Chennai / Hyderabad
Skills required: Python, AI/ML models, algorithms
Mode of work: WFO

Responsibilities:
- Design, develop, and implement AI/ML models and algorithms.
- Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
- Write clean, efficient, and well-documented code.
- Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
- Work closely with senior team members to understand project requirements and contribute to technical solutions.
- Troubleshoot and debug AI/ML models and applications.
- Stay up to date with the latest advancements in AI/ML.
- Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
- Develop and deploy AI solutions on Google Cloud Platform (GCP).
- Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.

Qualifications:
- Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
- 1+ years of experience in developing and implementing AI/ML models.
- Strong programming skills in Python.
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Good understanding of machine learning concepts and techniques.
- Ability to work independently and as part of a team.
- Strong problem-solving skills.
- Good communication skills.
- Experience with Google Cloud Platform (GCP) is preferred.
- Familiarity with Vertex AI is a plus.
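The responsibilities above include data preprocessing and feature engineering with Pandas and NumPy. As a small, self-contained sketch of what that routinely involves (the dataset and column names are invented for illustration):

```python
import numpy as np
import pandas as pd

# Small illustrative dataset; columns are made up for the example.
df = pd.DataFrame({
    "age": [25, 32, None, 41, 29],
    "city": ["Chennai", "Hyderabad", "Chennai", None, "Hyderabad"],
    "signup_date": pd.to_datetime(
        ["2023-01-05", "2023-02-10", "2023-02-28", "2023-03-15", "2023-04-01"]
    ),
    "purchases": [2, 0, 5, 3, 1],
})

# 1. Handle missing values.
df["age"] = df["age"].fillna(df["age"].median())
df["city"] = df["city"].fillna("unknown")

# 2. Encode categorical variables.
df = pd.get_dummies(df, columns=["city"], prefix="city")

# 3. Derive new features.
df["account_age_days"] = (pd.Timestamp("2023-05-01") - df["signup_date"]).dt.days
df["log_purchases"] = np.log1p(df["purchases"])

print(df.drop(columns=["signup_date"]).head())
```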

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

As a Generative AI Engineer, you will be responsible for designing, developing, and deploying generative AI models to solve complex business problems. You will work closely with data scientists, software engineers, and domain experts to build AI-driven applications that leverage LLMs, diffusion models, and other state-of-the-art generative AI techniques. Key Responsibilities Research, design, and develop generative AI models, including Large Language Models (LLMs), GANs, VAEs, and diffusion models. Fine-tune and optimize pre-trained models to meet business requirements. Collaborate with cross-functional teams to integrate AI models into production-ready applications. Develop scalable and efficient pipelines for data preprocessing, model training, and inference. Stay updated with the latest advancements in AI/ML and contribute to research initiatives. Ensure AI solutions are explainable, ethical, and comply with regulatory standards. Optimize model performance in terms of accuracy, latency, and cost-effectiveness. Implement and maintain MLOps best practices for continuous deployment and monitoring. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 3+ years of experience in AI/ML development, with at least 1+ year of hands-on experience in generative AI. Strong proficiency in Python and AI/ML frameworks such as TensorFlow, PyTorch, and Hugging Face. Exposure working with transformer-based architectures like GPT, BERT, and diffusion models. Solid understanding of deep learning, NLP, and computer vision techniques. Knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Familiarity with model deployment, APIs, and MLOps tools for CI/CD automation. Strong problem-solving skills and ability to work in an agile, collaborative environment. Preferred Qualifications Experience in fine-tuning and deploying LLMs such as GPT-4, LLaMA, or PaLM. Hands-on experience with reinforcement learning and prompt engineering. Understanding of Responsible AI principles and bias mitigation techniques.
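The posting above asks for fine-tuning of pre-trained models with frameworks such as Hugging Face. Shown below is a compact, hedged sketch of that workflow using the Transformers Trainer on a public dataset; checkpoint, dataset, and hyperparameters are illustrative, and argument names can vary slightly across library versions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# A small public dataset and a compact pre-trained checkpoint, chosen only for illustration.
dataset = load_dataset("imdb")
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # subsample to keep it quick
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

For generative checkpoints the same pattern applies with a causal-LM head and, typically, parameter-efficient methods such as LoRA layered on top.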

Posted 1 month ago

Apply

0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Job Requirements: Build, train, and implement Python machine learning models for diverse applications by utilizing Python's vast libraries and frameworks, including TensorFlow, PyTorch, and scikit-learn, to develop strong and effective AI solutions.

Work Experience:
- Proficiency in Python and various ML libraries like TensorFlow, PyTorch, NumPy, Pandas, etc.
- Knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles.
- Good understanding of ML and deep learning techniques and algorithms.
- Knowledge of MLOps, DevOps, and cloud platforms like AWS SageMaker would be good to have.
- Experience with Elasticsearch for efficient data indexing, search, and retrieval.
- Data handling techniques: cleaning and preprocessing; knowledge of databases and DB integration is good to have.
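Since the requirements above center on building and training models with PyTorch, here is a minimal, generic training-loop sketch on synthetic data; the architecture and hyperparameters are placeholders, not a prescription.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic binary-classification data.
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)   # forward pass and loss
        loss.backward()                 # backpropagation
        optimizer.step()                # parameter update
        total += loss.item()
    print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
```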

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: AI Engineer (WFO - Chennai)
Experience: 3 years
Job Type: Full Time, Permanent
Job Location: Chennai (5 days - work from office)
Notice Period: Immediate to 15 days

We are looking for a passionate AI Engineer with 1–3 years of hands-on experience in developing and deploying AI/ML models. You will contribute to designing scalable AI systems, training models, and supporting real-world deployment for autonomous vehicles and other AI-centric applications. If you're enthusiastic about applying your AI skills in impactful projects, join our innovative team.

Roles & Responsibilities:
- Build and optimize AI/ML models for real-time autonomous driving systems.
- Work closely with senior architects and product teams to translate business problems into AI solutions.
- Implement and test deep learning models (e.g., CNNs, transformers) for tasks such as object detection, lane recognition, or natural language understanding.
- Integrate AI models into production environments and contribute to continuous model improvement.
- Collaborate with MLOps teams to ensure reliable deployment through CI/CD pipelines.
- Write clean, well-documented, and efficient code.
- Stay up-to-date with the latest research and best practices in AI and Machine Learning.

Required Technical and Professional Expertise:
- 1–3 years of experience in AI/ML development and deployment.
- Proficiency in Python and common ML frameworks (TensorFlow, PyTorch).
- Experience with data preprocessing, model training, evaluation, and optimization.
- Solid foundation in machine learning and deep learning algorithms.
- Experience working with APIs, model versioning, and deployment workflows.
- Familiarity with computer vision, NLP, or reinforcement learning.
- Hands-on experience with Docker, Git, and cloud services (AWS, GCP, or Azure).
- Strong problem-solving and analytical skills.
- Good communication skills and ability to work in a collaborative environment.

Preferred Skills:
- Experience with real-time inference and edge deployment.
- Familiarity with autonomous systems or robotics applications.
- Exposure to LLMs (e.g., GPT, BERT) or multimodal AI models.
- Experience with RESTful APIs, microservices, and distributed systems.

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Benefits:
- Innovative Projects: Work on cutting-edge AI technologies shaping the future of mobility.
- Collaborative Culture: Join a passionate team pushing boundaries in AI and autonomy.
- Career Growth: Be part of a fast-growing startup with plenty of growth opportunities.
- Competitive salary, health insurance (up to ₹20 lakhs), wellness programs, learning & development, mentorship, and performance-based increments.
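The role above involves deep learning models for tasks such as object detection. As a rough illustration of running inference with an off-the-shelf detector (assuming a recent torchvision where pretrained weights are selected with `weights="DEFAULT"`; the input frame here is a random tensor standing in for a camera image):

```python
import torch
from torchvision.models import detection

# Load a COCO-pretrained Faster R-CNN detector.
model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy RGB frame (3 x H x W, values in [0, 1]) standing in for a real camera image.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    predictions = model([frame])[0]   # the model accepts a list of image tensors

# Keep only confident detections.
keep = predictions["scores"] > 0.8
print("boxes:", predictions["boxes"][keep])
print("labels:", predictions["labels"][keep])   # COCO class indices (e.g. 3 = car)
```

Real-time or edge deployment would typically follow with export (ONNX/TensorRT) and quantization rather than running the eager PyTorch model directly.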

Posted 1 month ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Location: Bangalore - Karnataka, India - EOIZ Industrial Area Job Family: Engineering Worker Type Reference: Regular - Permanent Pay Rate Type: Salary Career Level: T3(B) Job ID: R-44637-2025 Description & Requirements Job Description Introduction: A Career at HARMAN Digital Transformation Solutions (DTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs Java Microservices Java Developer with experience in microservices deployment, automation, and system lifecycle management(security, and infrastructure management) Required Skills: Java, hibernate, SAML/OpenSAML REST APIs Docker PostgreSQL (PSQL) Familiar with git hub workflow. Good to Have: Go (for automation and bootstrapping) RAFT Consensus Algorithm HashiCorp Vault Key Responsibilities: Service Configuration & Automation: Configure and bootstrap services using the Go CLI. Develop and maintain Go workflow templates for automating Java-based microservices. Deployment & Upgrade Management: Manage service upgrade workflows and apply Docker-based patches. Implement and manage OS-level patches as part of the system lifecycle. Enable controlled deployments and rollbacks to minimize downtime. Network & Security Configuration: Configure and update FQDN, proxy settings, and SSL/TLS certificates. Set up and manage syslog servers for logging and monitoring. Manage appliance users, including root and SSH users, ensuring security compliance. Scalability & Performance Optimization: Implement scale-up and scale-down mechanisms for resource optimization. Ensure high availability and performance through efficient resource management. Lifecycle & Workflow Automation: Develop automated workflows to support service deployment, patching, and rollback. Ensure end-to-end lifecycle management of services and infrastructure. What You Will Do To perform in-depth analysis of data and machine learning models to identify insights and areas of improvement. Develop and implement models using both classical machine learning techniques and modern deep learning approaches. Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection. Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements. Optimize models for performance and latency, including the implementation of caching strategies where appropriate. Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions. What You Need to Be Successful Utilized various statistical techniques to derive important insights and trends. Proven experience in machine learning model development and analysis using classical and neural networks based approaches. Strong understanding of LLM architecture, usage, and fine-tuning techniques. Solid understanding of statistics, data preprocessing, and feature engineering. Proficient in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.). Strong debugging and optimization skills for both training and inference pipelines. Familiarity with data formats and processing tools (Pandas, Spark, Dask). Experience working with transformer-based models (e.g., BERT, GPT) and Hugging Face ecosystem. 
Bonus Points if You Have Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar). Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics). Familiarity with cloud platforms (Sagemaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes). Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection. Exposure to distributed training and model parallelism techniques. Prior experience in AB testing ML models in production. What Makes You Eligible Bachelor’s or master’s degree in computer science, Artificial Intelligence, or a related field. 5-10 years relevant and Proven experience in developing and deploying generative AI models and agents in a professional setting. What We Offer Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.) Extensive training opportunities through our own HARMAN University Competitive wellness benefits Tuition reimbursement “Be Brilliant” employee recognition and rewards program An inclusive and diverse work environment that fosters and encourages professional and personal development You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today ! HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. HARMAN offers a great work environment, challeng Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. 
If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
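The HARMAN role above calls for MLOps practices with tools such as MLflow for experiment tracking and model logging. A minimal, hedged sketch of that tracking pattern follows (synthetic data, an arbitrary model, and MLflow 2.x-style calls; run names and parameters are illustrative):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)                                       # hyperparameters
    mlflow.log_metric("test_accuracy",
                      accuracy_score(y_test, model.predict(X_test)))  # evaluation metric
    mlflow.sklearn.log_model(model, "model")                        # model artifact

# Runs can then be compared in the MLflow UI and promoted through a model registry,
# which is also where drift monitoring jobs would pick up the production version.
```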

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Role Overview As a Senior Data Solutions Architect in the Business Analytics, Automation & AI team, you will be responsible for architecting and delivering comprehensive, end-to-end data solutions across cloud and on-premises platforms in Business Intelligence and Artificial Intelligence domains. Your focus will include leading strategic data migration automation initiatives that optimize and automate the transfer of ERP, CRM, and other enterprise data to modern data platforms, ensuring data cleansing and high-quality, reliable datasets. This hands-on role also involves establishing and managing a small, high-performing team of data engineers and analysts that thrives on streamlined processes and rapid innovation. Leveraging an IT consulting mindset, experience with global enterprises and complex data ecosystems, you will inspire and nurture technical talent, driving a culture of continuous learning and development. As a leader, you will foster ambition and accountability through goal-oriented frameworks and actively contribute to transformative organizational initiatives that push beyond business as usual, pioneering digitization and data-driven transformation within the company. Key Responsibilities Architect and deliver end-to-end data solutions across cloud and on-premises platforms, including AWS, Azure, Informatica, etc. Lead strategic data migration automation initiatives, optimizing and automating the movement of ERP, CRM, and other enterprise data to modern data platforms. Drive business intelligence transformation, ensuring robust data models, efficient ETL pipelines, and scalable analytics architectures for Enterprise BI needs. Build and manage AI data architectures that support AI workflows, including handling unstructured and semi-structured data, real-time data streams, and large-scale datasets for model training and inference. Implement advanced data preprocessing steps such as data cleaning, normalization, encoding categorical variables, feature engineering, and data enrichment to prepare data optimally for AI models. Manage and mentor a team of 10 data engineers and analysts, fostering skill development in BI and AI data technologies. Collaborate with business/function stakeholders to align data architecture with business goals, ensuring solutions meet both technical and operational requirements. Establish and enforce data governance, data quality, and data security frameworks, using tools like Collibra or similar. Participate in strategic project engagements, leveraging consulting expertise to define and propose best-fit solutions. Ensure compliance with regulatory and security standards, implementing access controls, encryption, and audit mechanisms. Required Skills & Qualifications Technical Expertise: Deep hands-on experience with Informatica, AWS ( including S3, Redshift )/Azure, Databricks, and Big Data platforms. Strong proficiency in Python, SQL, and NoSQL for building scalable ETL/data pipelines and managing structured/unstructured data. Experience with data governance tools (e.g., Collibra), data modeling, and data warehouse design. Knowledge of Tableau/PowerBI/Alteryx is a must. Knowledge of ERP, CRM data structures, and integration patterns. Familiarity with AI/ML frameworks like TensorFlow, PyTorch, and LLM orchestration tools (e.g., LangChain, LlamaIndex ) to support AI model workflows. Proven skills in building modular, scalable, and automated ETL/AI pipelines with robust data quality and security controls. 
Certifications: Certified Solutions Architect from AWS/Microsoft (Azure)/Google Cloud. Additional certifications in Databricks, or Informatica are a plus. Consulting Experience: Proven track record in an IT consulting environment, engaging with large enterprises and MNCs in strategic data solutioning projects. Strong stakeholder management, business needs assessment, and change management skills. Leadership & Soft Skills: Experience managing and mentoring small teams, developing technical skills in BI and AI data domains. Ability to influence and align cross-functional teams and stakeholders. Excellent communication, documentation, and presentation skills. Strong problem-solving, analytical thinking, and strategic vision. Preferred Experience Leading large-scale data migration and transformation programs for ERP/CRM systems. Implementing data governance and security policies across multi-cloud environments. Working with global clients in regulated industries. Driving adoption of modern data platforms and BI/AI/automation solutions in enterprise settings. Certifications AWS Certified Solutions Architect – Professional/ Microsoft Certified: Azure Solutions Architect Expert AWS Certified Data Engineer – Professional/Databricks Certified Data Engineer Professional Educational Qualifications: Master’s/bachelor’s degree in engineering or Master of Computer Applications is required. A Masters in Business Administration (MBA) is a plus. Primary Location : IN-Karnataka-Bangalore Schedule : Full-time Unposting Date : Ongoing

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Overview: We're looking for a Python-based AI/ML Developer who brings solid hands-on experience in building machine learning models and deploying them into scalable, production-ready APIs using FastAPI or Django. The ideal candidate is both analytical and implementation-savvy, capable of transforming models into live services and integrating them with real-world systems.

Key Responsibilities:
- Design, train, and evaluate machine learning models (classification, regression, clustering, etc.)
- Build and deploy scalable REST APIs for model serving using FastAPI or Django
- Collaborate with data scientists, backend developers, and DevOps to integrate models into production systems
- Develop clean, modular, and optimized Python code using best practices
- Perform data preprocessing, feature engineering, and data visualization using Pandas, NumPy, Matplotlib, and Seaborn
- Implement model serialization techniques (Pickle, Joblib, ONNX) and deploy models using containers (Docker)
- Manage API security with JWT and OAuth mechanisms
- Participate in Agile development with code reviews, Git workflows, and CI/CD pipelines

Must-Have Skills:
- Python & Development: Proficient in Python 3.x, OOP, and clean code principles; experience with Git, Docker, debugging, unit testing
- AI/ML: Good grasp of supervised/unsupervised learning, model evaluation, and data wrangling; hands-on with Scikit-learn, XGBoost, LightGBM
- Web Frameworks: FastAPI (API routes, async programming, Pydantic, JWT); Django (REST Framework, ORM, Admin panel, Middleware)
- DevOps & Cloud: Experience with containerized deployment using Docker; exposure to cloud platforms (AWS, Azure, or GCP); CI/CD with GitHub Actions, Jenkins, or GitLab CI
- Databases: SQL (PostgreSQL, MySQL); NoSQL (MongoDB, Redis); ORM (Django ORM)

Skills:
- Model tracking/versioning tools (MLflow, DVC)
- Knowledge of LLMs, transformers, vector DBs (Pinecone, Faiss)
- Airflow, Prefect, or other workflow automation tools
- Basic frontend skills (HTML, JavaScript, React)

Requirements:
- Education: B.E./B.Tech or M.E./M.Tech in Computer Science, Data Science, or related fields
- Experience: 3-6 years of industry experience in ML development and backend API integration
- Strong communication skills and ability to work with cross-functional teams

(ref:hirist.tech)
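Since the role above centers on serving serialized models behind a FastAPI endpoint, here is a minimal sketch of that pattern; the artifact path "model.joblib", the feature names, and the endpoint are placeholders, and the loaded object is assumed to be a fitted scikit-learn pipeline with a `predict_proba` method.

```python
# serve.py - a minimal model-serving API sketch.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-model")
model = joblib.load("model.joblib")   # placeholder path to a fitted scikit-learn pipeline

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float

@app.post("/predict")
def predict(payload: Features) -> dict:
    X = np.array([[payload.tenure_months, payload.monthly_spend]])
    proba = float(model.predict_proba(X)[0, 1])
    return {"churn_probability": proba}

# Run with:  uvicorn serve:app --reload
# Then:      curl -X POST localhost:8000/predict \
#                 -H "Content-Type: application/json" \
#                 -d '{"tenure_months": 12, "monthly_spend": 42.5}'
```

In production this would typically be wrapped in a Docker image and fronted by JWT/OAuth middleware, as the responsibilities list notes.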

Posted 1 month ago

Apply

3.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote

Job Title: Backend Developer - Python & AI Image Detection
Location: Remote
Experience Required: 3+ Years
Employment Type: Full-Time (Payroll)

About The Company: We are an innovative tech-driven organization focused on developing intelligent AI-powered solutions. Our mission is to build robust systems that automate complex processes through cutting-edge technologies. Join our passionate team and contribute to real-world solutions at the intersection of AI and backend development.

About The Role: We are seeking a skilled and motivated Backend Developer with expertise in Python and AI-based image detection. The ideal candidate will have a strong background in backend development and a passion for integrating AI models into scalable services. You will be instrumental in designing and maintaining systems that process and validate images using machine learning and computer vision technologies.

Key Responsibilities:
- Design, develop, and maintain scalable backend APIs using Python (preferably FastAPI or Flask).
- Integrate AI image detection models for automated scanning, analysis, and validation.
- Manage image upload, preprocessing, and result delivery workflows using RESTful services.
- Collaborate with data science and frontend teams to integrate and deploy ML models.
- Optimize backend performance for high-volume image processing tasks.
- Write clean, well-documented, and testable code following best practices.
- Ensure security and data validation protocols for image handling and storage.

Must-Have Skills:
- 3+ years of experience in backend development using Python.
- Proficiency in image processing and AI/ML libraries such as OpenCV, TensorFlow, or PyTorch.
- Hands-on experience with object detection techniques (e.g., YOLO, Haar cascades).
- Strong understanding of RESTful API design and development.
- Familiarity with Docker, Git, and cloud platforms (AWS/GCP/Azure).
- Experience with databases like MySQL, PostgreSQL, or MongoDB.
- Strong debugging, testing, and performance optimization skills.

Nice-to-Have Skills:
- Experience with FastAPI or Django REST Framework.
- Exposure to OCR and real-time image analysis.
- Familiarity with CI/CD pipelines and model serving tools like TorchServe or TensorFlow Serving.
- Prior experience developing AI-based automation or validation tools.

Skill Overview (skill - experience - proficiency):
- Python - 2+ Years - Advanced
- Image Processing - 2+ Years - Intermediate
- OpenCV - 2+ Years - Intermediate
- RESTful APIs - 2+ Years - Intermediate
- Docker - 2+ Years - Intermediate
- Git - 2+ Years - Intermediate
- MySQL - 2+ Years - Advanced
- Test Automation - Intermediate
- Object Detection - Intermediate
- PostgreSQL - Intermediate

(ref:hirist.tech)
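The skills above include Haar cascades with OpenCV for detection. As a small, hedged illustration of that technique only (the image path is a placeholder for an uploaded file; detection thresholds are typical defaults, not tuned values):

```python
import cv2

# OpenCV ships Haar cascade XML files; cv2.data.haarcascades points at their install directory.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) of detected faces in an image file."""
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError(f"could not read {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30)
    )
    return list(boxes)

if __name__ == "__main__":
    print(detect_faces("upload.jpg"))   # "upload.jpg" is a placeholder for an uploaded image
```

A YOLO-based detector would replace the cascade with a trained network but plug into the same upload-preprocess-detect-respond workflow described in the responsibilities.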

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

About KPMG in India: KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Jaipur, Hyderabad, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Role and Responsibilities:
- Support model validation for various supervised and unsupervised AI/ML models pertaining to financial crime compliance.
- Validate data quality, feature engineering and preprocessing steps.
- Conduct robustness, sensitivity and stability testing.
- Evaluate model explainability using tools such as SHAP and LIME.
- Review model documentation, development code, and model risk assessments.
- Assist in developing and testing statistical and machine learning models for risk, fraud and business analytics.

Key Skills and Tools:
- Programming: Python (must-have), R, SQL, ML libraries
- Tools: Jupyter, Git, MLflow, Excel, Tableau/Power BI (for visualization), Dataiku
- Good technical writing and stakeholder communication
- Understanding of model risk governance, MRM policies, and ethical AI principles

Equal employment opportunity information: KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

Qualifications: Bachelor's/Master's in Computer Science, Data Science, Statistics, Applied Math, or a related quantitative discipline. 1–3 years of experience in AI/ML model validation, development, or risk analytics.
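The validation responsibilities above include evaluating explainability with tools such as SHAP. A minimal sketch of that check follows, using a synthetic dataset and an arbitrary tree model; in a real validation the model and data would come from the developers under review.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a model and dataset under validation.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Per-feature mean absolute contribution: a quick global-importance check
# that a validator can compare against the documented feature rationale.
print(np.abs(shap_values).mean(axis=0))
```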

Posted 1 month ago

Apply

3.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

We’re reinventing the market research industry. Let’s reinvent it together. At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it. The Data Methods Data Science Lead will be expected to independently define and lead all analyses and develop solutions to support key innovation initiatives and methodology development, The individual will have a relatively advanced understanding of our E2E processes, and be able to relatively independently work on identifying areas for improvement, assessing what enhancements should be made and working with other teams to implement these changes. There will be a strong focus on using panel data across our services working via our dedicated Data Processing Platform on Databricks. Main Duties And Responsibilities Data Methods / Data Science Primary Role: Able to drive independently key initiatives and projects related to Data Methods Project Plans: Outline project scope, objectives, timelines and milestones. Data Collection / Preparation: Management of data (e.g. cleaning, preprocessing). Exploratory Data Analysis: Conduct feasibility studies potential solutions / lead prototyping. Model Development: Building and validating models using various data science algorithms. Model Deployment: Designing requirements / testing / deploying models in production. Reporting and Documentation: Methods used, findings and recommendations. Training and Support: Training and providing deployment support. Project Review and Evaluation: Conducting reviews of project outcomes Able to form views on how new processes ought to be constructed. Co-ordinate work with On-Shore Methods Manager / Leads Able to contribute both to BAU enhancements and to work under the umbrella of a project. Understanding the wider business Develops a basic understanding of the Operations functions Develop an understanding of the Commercial usage of our data Develop a broader understanding of our direct competitors Training & Development Take ownership for self-development and where available participate in structured training. Gains proficiency in all relevant databases, data interrogation and reporting tools (for example Databricks, SQL, Python , Excel, etc.) Communication & Collaboration Ability to collaborate across key stakeholders in operations, technology and product teams Ability to collaborate and communicate with stakeholders upwards and with team members Be able to communicate in an appropriate manner (e.g. verbally, presenting or creating a PowerPoint, Word document, email) Adhering to deadlines and escalating where there is a risk of delays Demonstrate and role model best practise and techniques including positive communication style. Displays a proactive attitude when working both within and outside of the team. Demonstrates clear, direct and to the point communication at Data Methods team meetings Issue Management and Best Practice Proactive identification and root cause analysis of Data Methods issues and development of best practice solutions to improve the underlying methodology and processes. Support regular methodology review meetings with On-Shore Manager and Leads to establish priorities and future requirements. 
Knowledge sharing through the team, in either team meeting or day-to-day with the wider Data Methods team Able to think through complex processes and to how to construct and improve them, considering in detail the positive and negative implications of different approaches and how best to test and assess them. Resource management Organising workload efficiently Adhering to schedules Escalating any risks to deadlines and capacity challenges What You'll Bring to Numerator Education & Experience Bachelors, Masters, Doctorate Degree 3+ years experience Knowledge Domain expertise in 3-5 of: Data collection methods Fraud detection methods Data cleaning methods Demographics methods Sampling methods Bias methods Eligibility methods Weighting methods Data aggregation methods Outlier methods Statistical modelling Metrics / KPI methods Tools Python [or R] (advanced - ability to independently script end-to-end functioning programmes) Databricks (intermediate) SQL (intermediate) Azure Dev Ops (basic) Git (basic) Excel (advanced) Power BI (intermediate) Passion and Drive Passionate about data quality, integrity and best practices Passionate about delivering high quality data to clients on schedule Communication Ability to communicate complex technical concepts to stakeholders in simple language Good English communication, presentation, interpersonal and writing skills Good listening skills Good online, virtual, and in-person collaboration skills Comfortable presenting panel and data methods to external audiences (internal)
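Among the methods listed above are outlier methods for panel data. As a tiny illustration of one standard approach (the interquartile-range rule on a toy spend series; the values are invented and real pipelines would apply this per category, retailer, or panelist segment):

```python
import pandas as pd

def iqr_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

# Toy panel spend data with one implausible record.
spend = pd.Series([12.5, 14.0, 11.8, 13.2, 250.0, 12.9, 13.7], name="basket_value")
flags = iqr_outliers(spend)
print(spend[flags])   # records an analyst would review, cap, or down-weight
```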

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

This role is for one of Weekday's clients Min Experience: 3 years Location: Gurugram, NCR, Delhi, Haryana, Punjab, Uttar Pradesh, Kanpur JobType: full-time Requirements About the Role: We are seeking a skilled and passionate AI Engineer with a focus on Large Language Models (LLMs) to join our advanced AI and machine learning team. As an AI Engineer, you will play a vital role in designing, developing, and deploying intelligent systems that leverage the power of LLMs to solve real-world business challenges. You will work on cutting-edge projects that utilize the latest advancements in generative AI, natural language processing, and model fine-tuning techniques. This is an exciting opportunity for professionals who thrive in a fast-paced, innovative environment and are driven by curiosity, creativity, and a desire to work on meaningful AI applications. Key Responsibilities: Design, build, and optimize AI models with a strong focus on Large Language Models (e.g., GPT, BERT, T5, LLaMA). Develop and implement RAG (Retrieval-Augmented Generation) pipelines and vector search solutions to enhance model accuracy and contextual understanding. Fine-tune and evaluate pre-trained LLMs on domain-specific datasets to meet performance and accuracy benchmarks. Build robust APIs and tools for integrating AI-powered solutions into production systems and customer-facing applications. Collaborate with data scientists, product managers, and software engineers to align AI functionality with business needs. Conduct research and stay up-to-date with the latest developments in LLMs, NLP, and generative AI. Apply techniques such as prompt engineering, few-shot learning, and instruction tuning to improve model interaction quality. Ensure data integrity, performance optimization, and model explainability in all deployed solutions. Participate in code reviews, documentation, and knowledge sharing to maintain engineering best practices. Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 3 years of hands-on experience working with AI/ML models, with at least 1-2 years focused on LLMs and NLP applications. Proficiency in Python and popular AI frameworks/libraries such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, or OpenAI APIs. Experience with vector databases like FAISS, Pinecone, or Weaviate for retrieval tasks. Strong knowledge of LLM fine-tuning, embeddings, tokenization, and transformer-based architectures. Solid understanding of data preprocessing, text generation, and evaluation metrics in NLP tasks. Familiarity with deployment tools and cloud platforms such as AWS, Azure, or GCP. Ability to work in a collaborative, agile environment and contribute to cross-functional AI initiatives. Preferred Skills: Experience working with RAG pipelines, multi-turn conversations, or enterprise search. Background in knowledge graphs, ontologies, or semantic search. Exposure to MLOps practices and automated model deployment pipelines. Publications or contributions in the field of AI/NLP are a plus.
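The requirements above list vector databases such as FAISS for retrieval. For reference, a minimal FAISS sketch is shown below: random vectors stand in for document embeddings, and a flat L2 index is used for exact search (approximate indexes such as IVF or HNSW would be used at larger scale).

```python
import faiss
import numpy as np

d = 384                                   # embedding dimensionality (model-dependent)
rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(1000, d)).astype("float32")   # stand-ins for document embeddings

index = faiss.IndexFlatL2(d)              # exact L2 nearest-neighbor search
index.add(doc_vectors)                    # add all document vectors to the index

query = doc_vectors[42:43] + 0.01         # a query vector close to document 42
distances, ids = index.search(query, k=5)
print(ids[0])                             # nearest document ids, to be fed into the LLM prompt
```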

Posted 1 month ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

This role is for one of Weekday's clients Min Experience: 3 years Location: Gurugram, NCR, Delhi, Haryana, Punjab, Uttar Pradesh, Kanpur JobType: full-time Requirements About the Role: We are seeking a skilled and passionate AI Engineer with a focus on Large Language Models (LLMs) to join our advanced AI and machine learning team. As an AI Engineer, you will play a vital role in designing, developing, and deploying intelligent systems that leverage the power of LLMs to solve real-world business challenges. You will work on cutting-edge projects that utilize the latest advancements in generative AI, natural language processing, and model fine-tuning techniques. This is an exciting opportunity for professionals who thrive in a fast-paced, innovative environment and are driven by curiosity, creativity, and a desire to work on meaningful AI applications. Key Responsibilities: Design, build, and optimize AI models with a strong focus on Large Language Models (e.g., GPT, BERT, T5, LLaMA). Develop and implement RAG (Retrieval-Augmented Generation) pipelines and vector search solutions to enhance model accuracy and contextual understanding. Fine-tune and evaluate pre-trained LLMs on domain-specific datasets to meet performance and accuracy benchmarks. Build robust APIs and tools for integrating AI-powered solutions into production systems and customer-facing applications. Collaborate with data scientists, product managers, and software engineers to align AI functionality with business needs. Conduct research and stay up-to-date with the latest developments in LLMs, NLP, and generative AI. Apply techniques such as prompt engineering, few-shot learning, and instruction tuning to improve model interaction quality. Ensure data integrity, performance optimization, and model explainability in all deployed solutions. Participate in code reviews, documentation, and knowledge sharing to maintain engineering best practices. Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 3 years of hands-on experience working with AI/ML models, with at least 1-2 years focused on LLMs and NLP applications. Proficiency in Python and popular AI frameworks/libraries such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, or OpenAI APIs. Experience with vector databases like FAISS, Pinecone, or Weaviate for retrieval tasks. Strong knowledge of LLM fine-tuning, embeddings, tokenization, and transformer-based architectures. Solid understanding of data preprocessing, text generation, and evaluation metrics in NLP tasks. Familiarity with deployment tools and cloud platforms such as AWS, Azure, or GCP. Ability to work in a collaborative, agile environment and contribute to cross-functional AI initiatives. Preferred Skills: Experience working with RAG pipelines, multi-turn conversations, or enterprise search. Background in knowledge graphs, ontologies, or semantic search. Exposure to MLOps practices and automated model deployment pipelines. Publications or contributions in the field of AI/NLP are a plus.

Posted 1 month ago

Apply

5.0 years

3 - 37 Lacs

Mumbai Metropolitan Region

On-site

Job Summary We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems. Key Responsibilities Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn. Perform data preprocessing, feature engineering, and exploratory data analysis. Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI. Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions. Optimize model performance and ensure robustness in real-time environments. Maintain clear documentation of code, models, and processes. Required Skills Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch). Strong understanding of ML algorithms (classification, regression, clustering, deep learning). Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP). Familiarity with containerization (Docker, Kubernetes) and CI/CD practices. Solid grasp of RESTful API development and integration. Preferred Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, or related field. 2–5 years of experience in Python development with a focus on AI/ML. Exposure to MLOps practices and model monitoring tools. Skills:- Python and AIML
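The required skills above include data pipeline tools such as Airflow. As a rough sketch of how a retraining pipeline can be expressed as a DAG (assuming Airflow 2.4+ where the parameter is `schedule`; the DAG id and task bodies are hypothetical placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the warehouse")       # placeholder task body

def preprocess():
    print("clean data and engineer features")       # placeholder task body

def train():
    print("fit, evaluate, and register the model")  # placeholder task body

with DAG(
    dag_id="weekly_model_training",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",               # older Airflow versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_prep = PythonOperator(task_id="preprocess", python_callable=preprocess)
    t_train = PythonOperator(task_id="train", python_callable=train)

    t_extract >> t_prep >> t_train    # linear dependency chain
```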

Posted 1 month ago

Apply

3.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site

This role is for one of Weekday's clients Min Experience: 3 years Location: Gurugram, NCR, Delhi, Haryana, Punjab, Uttar Pradesh, Kanpur JobType: full-time Requirements About the Role: We are seeking a skilled and passionate AI Engineer with a focus on Large Language Models (LLMs) to join our advanced AI and machine learning team. As an AI Engineer, you will play a vital role in designing, developing, and deploying intelligent systems that leverage the power of LLMs to solve real-world business challenges. You will work on cutting-edge projects that utilize the latest advancements in generative AI, natural language processing, and model fine-tuning techniques. This is an exciting opportunity for professionals who thrive in a fast-paced, innovative environment and are driven by curiosity, creativity, and a desire to work on meaningful AI applications. Key Responsibilities: Design, build, and optimize AI models with a strong focus on Large Language Models (e.g., GPT, BERT, T5, LLaMA). Develop and implement RAG (Retrieval-Augmented Generation) pipelines and vector search solutions to enhance model accuracy and contextual understanding. Fine-tune and evaluate pre-trained LLMs on domain-specific datasets to meet performance and accuracy benchmarks. Build robust APIs and tools for integrating AI-powered solutions into production systems and customer-facing applications. Collaborate with data scientists, product managers, and software engineers to align AI functionality with business needs. Conduct research and stay up-to-date with the latest developments in LLMs, NLP, and generative AI. Apply techniques such as prompt engineering, few-shot learning, and instruction tuning to improve model interaction quality. Ensure data integrity, performance optimization, and model explainability in all deployed solutions. Participate in code reviews, documentation, and knowledge sharing to maintain engineering best practices. Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 3 years of hands-on experience working with AI/ML models, with at least 1-2 years focused on LLMs and NLP applications. Proficiency in Python and popular AI frameworks/libraries such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, or OpenAI APIs. Experience with vector databases like FAISS, Pinecone, or Weaviate for retrieval tasks. Strong knowledge of LLM fine-tuning, embeddings, tokenization, and transformer-based architectures. Solid understanding of data preprocessing, text generation, and evaluation metrics in NLP tasks. Familiarity with deployment tools and cloud platforms such as AWS, Azure, or GCP. Ability to work in a collaborative, agile environment and contribute to cross-functional AI initiatives. Preferred Skills: Experience working with RAG pipelines, multi-turn conversations, or enterprise search. Background in knowledge graphs, ontologies, or semantic search. Exposure to MLOps practices and automated model deployment pipelines. Publications or contributions in the field of AI/NLP are a plus.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role Summary:
The AI/ML Engineers will be responsible for developing, testing, and deploying machine learning models and AI algorithms that can be integrated into the product suite. The ideal candidate will have expertise in integrating large language models (LLMs) with external knowledge bases, fine-tuning model architectures for specific tasks, and optimizing retrieval strategies to improve the accuracy and relevance of generated content.

Key Responsibilities:
- Implement and optimize AI/ML algorithms for product integration.
- Collaborate with data scientists to deploy models and integrate them with software products.
- Design scalable solutions for AI/ML applications.
- Develop automated pipelines for continuous model training and updates.
- Work with engineering teams to ensure smooth integration and operation of AI models in production.
- Monitor and troubleshoot AI/ML models to ensure reliability and performance.

Required Experience:
- 4-6 years of hands-on experience in machine learning model development and deployment.
- Hands-on experience in developing and fine-tuning Generative AI models, specifically focusing on Retrieval-Augmented Generation (RAG) systems.
- Experience with frameworks such as Hugging Face, OpenAI, or other GenAI platforms, along with proficiency in data preprocessing and model evaluation, is required.
- Proficiency in Python, TensorFlow, Keras, PyTorch, or other AI/ML frameworks.
- Experience with model deployment and monitoring in production environments.
- Strong problem-solving skills and experience with cloud computing platforms (AWS, GCP, Azure).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
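The fine-tuning work this role centers on usually follows the pattern sketched below: load a pre-trained Hugging Face model, tokenize a dataset, and run the Trainer. The base model, the IMDB dataset stand-in, and the hyperparameters are assumptions chosen to keep the sketch small, not a recommended recipe.

```python
# Minimal sketch of fine-tuning a small Hugging Face model on a labeled dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small stand-in for a domain model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a domain-specific corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small slice keeps the sketch cheap
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
trainer.evaluate()
```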

Posted 1 month ago

Apply

0 years

0 Lacs

Raipur, Chhattisgarh, India

On-site

Overview:
We at Magure are looking for an experienced ML/AI Developer to lead the development and deployment of machine learning models that solve complex business challenges. As an ML/AI Developer, you will work on designing, building, and optimizing AI-driven applications. This is an exciting opportunity to be a part of a team that values innovation, precision, and results.

Roles & Responsibilities:
1. Data Management & Preprocessing: Collect, clean, and preprocess large datasets for machine learning projects. Develop automated data pipelines and manage data quality checks.
2. Model Development & Optimization: Design and implement machine learning models for real-world applications. Train, evaluate, and optimize models to ensure high accuracy and performance. Utilize both supervised and unsupervised learning techniques and evaluate algorithm effectiveness.
3. Deployment & Monitoring: Deploy models in production and monitor performance for continuous improvement. Set up model tracking, A/B testing, and version control.
4. Algorithm Research & Innovation: Stay updated with cutting-edge AI/ML algorithms, tools, and best practices. Innovate and experiment with new algorithms to improve project outcomes.
5. Collaboration & Knowledge Sharing: Work closely with data scientists, engineers, and stakeholders to understand project requirements and deliver results. Participate in team meetings, contribute to brainstorming sessions, and support junior team members.
6. Documentation & Reporting: Document model development processes, results, and performance metrics. Prepare reports and presentations to showcase findings and recommendations to management.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- Experience: Proven experience in ML/AI development with a track record of deploying successful models.
- Skills: Strong proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn). Solid understanding of data structures, algorithms, and big data processing. Experience with cloud platforms like AWS, Azure, or GCP for ML operations. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities.
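Much of the data-management and model-development work listed above can be expressed as a scikit-learn Pipeline, roughly as sketched below. The CSV path, column names, and target are hypothetical placeholders.

```python
# Minimal sketch: a scikit-learn Pipeline that imputes, scales, encodes,
# trains a classifier, and reports held-out accuracy.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customers.csv")                # hypothetical dataset
numeric_cols = ["age", "monthly_spend"]          # assumed numeric features
categorical_cols = ["plan_type", "region"]       # assumed categorical features
target_col = "churned"                           # assumed binary target

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

model = Pipeline([("prep", preprocess),
                  ("clf", RandomForestClassifier(n_estimators=200, random_state=42))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric_cols + categorical_cols], df[target_col], test_size=0.2, random_state=42)

model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```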

Posted 1 month ago

Apply

2.5 - 5.0 years

5 - 11 Lacs

India

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2.5-5 years
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
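One of the techniques named above, PEFT, is typically applied roughly as in the sketch below, which wraps a small causal language model with LoRA adapters. The base model (gpt2) and target modules are assumptions; a real project would choose them to match the architecture being fine-tuned.

```python
# Minimal sketch of Parameter-Efficient Fine-Tuning (PEFT) with LoRA adapters.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "gpt2"  # small stand-in for a larger instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # would tokenize the fine-tuning data
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection; differs per architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
# From here the wrapped model plugs into a standard transformers Trainer loop.
```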

Posted 1 month ago

Apply

3.0 - 5.0 years

6 - 11 Lacs

Thiruvananthapuram

On-site

Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment.

Key Responsibilities
- Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction.
- Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams.
- Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations.
- System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads.
- Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution.
- Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards.
- Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value.
- Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems.
- Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions.
- Proficiency in Using AI Copilots for Coding: Adapt to emerging tools and apply prompt engineering to use AI effectively for day-to-day coding needs.

Technical Skills
- Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras.
- Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints.
- Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines.
- Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL).
- Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management.
- Development and optimization of data storage solutions using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached.
- Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads.
- Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards.
- Exposure to big data tools like Apache Spark and Kafka for processing datasets.
- Familiarity with ETL workflows and cloud data warehouses, using tools such as Airflow, dbt, BigQuery, or Snowflake.
- Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure).
- Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects.
- Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker.
- Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts.

Personal Attributes
- Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework, communicating regularly with stakeholders.
- Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams.
- Responsibility: Owns code quality and reliability, especially in production systems.
- Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning.

Key Skills: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- Relevant certifications in cloud or machine learning are a plus.

Package: 6-11 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
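For the MLOps and model-versioning items above, a typical MLflow workflow looks roughly like the sketch below: log parameters, metrics, and the trained model so runs can be compared and the model served later. The experiment name, dataset, and hyperparameters are illustrative.

```python
# Minimal sketch: tracking a training run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 500}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)                                  # record hyperparameters
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))  # record evaluation metric
    mlflow.sklearn.log_model(model, "model")                   # version the fitted model artifact
# The logged model can later be served, e.g. `mlflow models serve -m runs:/<run_id>/model`.
```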

Posted 1 month ago

Apply

3.0 - 4.0 years

15 - 20 Lacs

India

On-site

Job Summary:
We are looking for a highly skilled and experienced AI/ML Developer with 3-4 years of hands-on experience to join our technology team. You will be responsible for designing, developing, and optimizing machine learning models that drive intelligent business solutions. The role involves close collaboration with cross-functional teams to deploy scalable AI systems and stay abreast of evolving trends in artificial intelligence and machine learning.

Key Responsibilities:
- Develop and Implement AI/ML Models: Design, build, and implement AI/ML models tailored to solve specific business challenges, including but not limited to natural language processing (NLP), image recognition, recommendation systems, and predictive analytics.
- Model Optimization and Evaluation: Continuously improve existing models for performance, accuracy, and scalability.
- Data Preprocessing and Feature Engineering: Collect, clean, and preprocess structured and unstructured data from various sources. Engineer relevant features to improve model performance and interpretability.
- Collaboration and Communication: Collaborate closely with data scientists, backend engineers, product managers, and stakeholders to align model development with business goals. Communicate technical insights clearly to both technical and non-technical stakeholders.
- Model Deployment and Monitoring: Deploy models to production using MLOps practices and tools (e.g., MLflow, Docker, Kubernetes). Monitor live model performance, diagnose issues, and implement improvements as needed.
- Staying Current with AI/ML Advancements: Stay informed of current research, tools, and trends in AI and machine learning. Evaluate and recommend emerging technologies to maintain innovation within the team.
- Code Reviews and Best Practices: Participate in code reviews to ensure code quality, scalability, and adherence to best practices. Promote knowledge sharing and mentoring within the development team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 3-4 years of experience in machine learning, artificial intelligence, or applied data science roles.

Required Skills:
- Strong programming skills in Python (preferred) and/or R.
- Proficiency in ML libraries and frameworks, including scikit-learn, XGBoost, LightGBM, TensorFlow or Keras, and PyTorch.
- Skilled in data preprocessing and feature engineering using pandas, numpy, and sklearn.preprocessing.
- Practical experience in deploying ML models into production environments using REST APIs and containers.
- Familiarity with version control systems (e.g., Git) and containerization tools (e.g., Docker).
- Experience working with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure.
- Understanding of software development methodologies, especially Agile/Scrum.
- Strong analytical thinking, debugging, and problem-solving skills in real-world AI/ML applications.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift, Monday to Friday, Morning shift, Weekend availability
Supplemental Pay: Performance bonus
Work Location: In person
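As a small illustration of the model-development skills listed above, the sketch below trains and evaluates an XGBoost classifier with early stopping on synthetic data. It assumes a recent xgboost release (where early stopping is configured on the estimator itself); the data and hyperparameters are placeholders.

```python
# Minimal sketch: gradient-boosted classifier with early stopping and AUC evaluation.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBClassifier(
    n_estimators=500,
    learning_rate=0.05,
    max_depth=5,
    early_stopping_rounds=20,  # stop adding trees when the validation metric stalls
    eval_metric="auc",
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)

pred = model.predict_proba(X_valid)[:, 1]
print("validation AUC:", roc_auc_score(y_valid, pred))
```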

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Sr. Python Developer
Job Location: Hyderabad / Chennai
Job Type: Full Time
Experience: 8+ Yrs
Notice Period: Immediate joiners or those available within 15 days are highly preferred.

Responsibilities:
- Design, develop, and maintain scalable Python applications and AI-powered solutions.
- Collaborate with cross-functional teams to understand requirements.
- Create, train, deploy, and optimize machine learning and AI applications.
- Integrate AI/ML models into production environments with robust error handling, logging, and monitoring.
- Implement and enhance data preprocessing pipelines, ensuring data quality for training and inference.
- Conduct research to explore new AI/ML technologies, frameworks, and practices to enable teams.
- Write clean, testable, and efficient code following best practices for software development.
- Debug and improve the performance, scalability, and reliability of Python-based applications.
- Work with cloud platforms, FastAPI, and databases to develop end-to-end AI solutions.

Requirements:
- Proficient in Python and its libraries/frameworks (e.g., TensorFlow, PyTorch, scikit-learn, NumPy, Pandas, Flask/Django).
- Hands-on experience with ML algorithms (e.g., supervised and unsupervised learning, reinforcement learning).
- Knowledge of natural language processing (NLP), computer vision, or deep learning techniques.
- Familiarity with AI-driven tools and architectures.
- Solid understanding of data structures, preprocessing techniques, and feature engineering.
- Experience deploying AI/ML models using frameworks like Docker, Kubernetes, or AWS/GCP/Azure cloud services.
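The data-quality responsibility mentioned above often starts with simple batch validation like the sketch below. The expected columns, thresholds, and sample batch are hypothetical.

```python
# Minimal sketch: schema, null, and range checks on an incoming data batch
# before it is used for training or inference.
import pandas as pd

EXPECTED_COLUMNS = {"user_id", "age", "signup_date", "monthly_spend"}  # assumed schema

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues (empty list = clean)."""
    issues = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if "age" in df.columns and df["age"].isna().mean() > 0.05:
        issues.append("more than 5% of 'age' values are null")
    if "monthly_spend" in df.columns and (df["monthly_spend"] < 0).any():
        issues.append("negative values found in 'monthly_spend'")
    return issues

batch = pd.DataFrame({
    "user_id": [1, 2, 3],
    "age": [34, None, 29],
    "signup_date": pd.to_datetime(["2024-01-05", "2024-02-11", "2024-03-20"]),
    "monthly_spend": [120.0, 80.5, -3.0],
})
print(validate_batch(batch))  # flags the null 'age' rate and the negative spend
```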

Posted 1 month ago

Apply

5.0 years

7 - 15 Lacs

Chennai

On-site

Job Title: Senior AI/ML Developer (Onsite)
Location: Riyadh, Saudi Arabia
Job Type: Full-Time, Onsite
Experience Required: 5+ years
Interested candidates can send their resume and a cover letter to hr@whitemastery.com or contact 9176760030.

About the Role
We are looking for a passionate and highly skilled Senior Artificial Intelligence and Machine Learning Developer to join our growing tech team onsite. If you are someone who thrives in a dynamic environment, enjoys solving complex problems using intelligent systems, and wants to be part of building cutting-edge solutions that drive real-world impact, we want to hear from you. This role demands strong technical leadership, a proactive mindset, and the ability to architect and deploy robust AI/ML models. As a senior member of the team, you will also mentor juniors and actively collaborate with cross-functional teams.

Key Responsibilities
- Design, develop, and deploy AI/ML models and systems for real-world applications.
- Build end-to-end pipelines including data preprocessing, model training, validation, and deployment.
- Research and implement advanced ML algorithms, deep learning models, and NLP techniques.
- Collaborate with data scientists, product managers, and software engineers to deliver AI-powered features.
- Ensure scalability, reliability, and performance of deployed models.
- Stay ahead of the curve with the latest developments in AI/ML and drive innovation.
- Mentor junior developers and help shape technical best practices within the team.

Key Skills & Qualifications
- Strong programming skills in Python and Golang.
- Hands-on experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn.
- Solid grasp of machine learning, deep learning, NLP, and model interpretability techniques.
- Experience with model deployment, MLOps, and version control tools.
- Strong understanding of data structures, algorithms, and system design.
- Experience working with cloud platforms (AWS/GCP/Azure) is a plus.
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- Excellent problem-solving, analytical, and communication skills.
- Minimum 5 years of experience in AI/ML development, with a proven track record in delivering production-grade models.

Job Type: Contractual / Temporary
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Benefits: Paid sick time, Paid time off
Application Question(s): Do you hold a valid passport? When can you join us?
Experience: AI and ML: 3 years (Preferred)
Work Location: In person
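As a minimal illustration of the PyTorch model-development work this role involves, the sketch below runs a small training loop on synthetic data; the architecture, data, and hyperparameters are placeholders.

```python
# Minimal sketch: a PyTorch training loop for a tiny binary classifier.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic binary-classification data standing in for a real dataset.
X = torch.randn(1000, 16)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    running = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)  # forward pass and loss
        loss.backward()                # backpropagation
        optimizer.step()               # parameter update
        running += loss.item() * xb.size(0)
    print(f"epoch {epoch}: loss {running / len(loader.dataset):.4f}")
```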

Posted 1 month ago

Apply