
1937 Preprocessing Jobs - Page 16

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 8+ Years
Job Type: Contract

Job Overview:
We are seeking a highly skilled Machine Learning Architect to design and implement cutting-edge AI/ML solutions that drive business innovation and operational efficiency. The ideal candidate will have deep expertise in Google Cloud Platform, Gurobi, and Google OR-Tools, with a proven ability to build scalable, optimized machine learning models for complex decision-making processes.

Key Responsibilities:
- Design and develop robust machine learning architectures aligned with business objectives.
- Implement optimization models using Gurobi and Google OR-Tools to address complex operational problems (a brief example follows this listing).
- Leverage Google Cloud AI/ML services (Vertex AI, TensorFlow, AutoML) for scalable model training and deployment.
- Build automated pipelines for data preprocessing, model training, evaluation, and deployment.
- Ensure high-performance computing and efficient resource usage in cloud environments.
- Collaborate with data scientists, ML engineers, and business stakeholders to integrate ML solutions into production.
- Monitor, retrain, and enhance model performance to maintain accuracy and efficiency.
- Stay current with emerging AI/ML trends, tools, and best practices.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 5+ years of experience in machine learning solution architecture and deployment.
- Strong hands-on experience with Google Cloud AI/ML services (Vertex AI, AutoML, BigQuery, etc.).
- Deep expertise in optimization modeling using Gurobi and Google OR-Tools.
- Proficiency in Python, TensorFlow, PyTorch, and ML libraries/frameworks.
- Solid understanding of big data processing frameworks (e.g., Apache Spark, BigQuery).
- Excellent problem-solving skills with the ability to work across cross-functional teams.

Preferred Qualifications:
- PhD in Machine Learning, Artificial Intelligence, or a related field.
- Experience with Reinforcement Learning and complex optimization algorithms.
- Working knowledge of MLOps, CI/CD pipelines, and Kubernetes for model lifecycle management.
- Familiarity with Google Cloud security best practices and identity/access management.
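The optimization-modeling requirement above names Gurobi and Google OR-Tools. As a rough illustration only, here is a minimal linear program solved with OR-Tools' GLOP solver; the products, coefficients, and capacities are invented for the example and are not from the posting.

```python
# Hypothetical example: a tiny production-planning LP solved with Google OR-Tools.
# All numbers and variable names are illustrative assumptions.
from ortools.linear_solver import pywraplp

def plan_production():
    solver = pywraplp.Solver.CreateSolver("GLOP")  # GLOP = Google's LP solver
    if solver is None:
        raise RuntimeError("GLOP solver unavailable")

    # Decision variables: units of two hypothetical products to build.
    x = solver.NumVar(0, solver.infinity(), "product_a")
    y = solver.NumVar(0, solver.infinity(), "product_b")

    # Capacity constraints (assumed numbers): machine hours and labor hours.
    solver.Add(2 * x + 1 * y <= 100)  # machine hours
    solver.Add(1 * x + 3 * y <= 90)   # labor hours

    # Maximize assumed per-unit profit.
    solver.Maximize(30 * x + 40 * y)

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print("product_a:", x.solution_value())
        print("product_b:", y.solution_value())
        print("profit:", solver.Objective().Value())

if __name__ == "__main__":
    plan_production()
```

The same modeling pattern (variables, constraints, objective) carries over to Gurobi's Python API, though the class names differ and Gurobi requires a license.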

Posted 2 weeks ago

Apply

20.0 years

3 - 4 Lacs

Delhi

On-site

Job Title: Data Analytics cum Data Science Trainer
Location: South Extension & Preet Vihar, Delhi
Job Type: Full-Time / Part-Time | Classroom Training

About Us
TGC India, through its technical wings Pythontraining.net and Datasciencetraining.co.in, is a leading training institute in Delhi, shaping careers in Digital Media, IT, and Emerging Technologies for over 20 years. With state-of-the-art infrastructure and strong industry connections, we empower students to build successful careers in Data Science, Machine Learning, AI, and more. We are expanding our academic team and looking for a passionate Data Analytics cum Data Science Trainer who can inspire learners and deliver cutting-edge training.

Key Responsibilities
- Deliver classroom training on Data Analytics and Data Science tools and technologies (Python, R, SQL, Power BI, Tableau, Machine Learning, etc.).
- Teach concepts such as data preprocessing, visualization, statistical modeling, and deployment of ML models.
- Guide students on real-world projects, internships, and portfolio building.
- Continuously upgrade course content to keep pace with industry trends and tools (including AI & Gen AI apps).
- Mentor and evaluate students to ensure high success and placement rates.
- Collaborate with the academic team to create assignments, assessments, and capstone projects.

Desired Candidate Profile
- Education: Bachelor's/Master's degree in Computer Science, Data Science, Statistics, or related field is preferred but not essential.
- Experience: 2+ years in Data Science/Data Analytics roles or teaching experience.
- Strong hands-on knowledge of Python, Machine Learning, Pandas, NumPy, Scikit-learn, Power BI/Tableau, and SQL.
- Knowledge of Deep Learning frameworks (Keras/TensorFlow/PyTorch) is a plus.
- Excellent communication and presentation skills to deliver training effectively.
- Passionate about teaching and mentoring students.

Why Join TGC India?
- Opportunity to train on the latest tools and technologies, including AI & Gen AI apps
- Work in a modern classroom setup with smart labs and high-tech systems
- Attractive salary package with incentives based on student success rates
- Access to continuous learning and certification programs
- Be a part of Delhi's top institute for Data Science & Analytics education

Perks & Benefits
✅ Competitive Salary
✅ Performance-based incentives
✅ Free access to premium tools & courses for self-learning
✅ Paid leaves and festive holidays
✅ Professional growth opportunities

Job Locations
- South Extension, Delhi
- Preet Vihar, Delhi

Apply Now
If you are passionate about teaching and shaping the next generation of Data Scientists, we want to hear from you! Send your CV to info@tgcindia.com or manojk.tgc@gmail.com, or WhatsApp 9810031162 (message only).

About TGC
TGC India is one of Delhi's most trusted institutes for career training in Data Science, Analytics, Digital Media, Digital Marketing, and more. Join our mission to transform lives through quality education.

Job Types: Full-time, Part-time
Pay: ₹300,000.00 - ₹450,000.00 per year
Expected hours: 16 per week
Benefits: Cell phone reimbursement, Internet reimbursement, Leave encashment
Work Location: In person

Posted 2 weeks ago

Apply

5.0 years

10 Lacs

Calcutta

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves mentoring junior team members and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended); a brief example follows this listing.
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools such as Apache Spark, Kafka, and Airflow.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.

Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
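The listing above names MLflow among the model versioning and experimentation tools. Under assumed data and parameters, the sketch below shows how a run's hyperparameters, metrics, and model artifact might be logged with MLflow; it is illustrative only, not Lexmark's actual workflow.

```python
# Minimal MLflow tracking sketch. Dataset, parameters, and the "churn_model"
# artifact name are illustrative assumptions, not part of the posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Log hyperparameters, a metric, and the model artifact so the run can be
    # versioned, compared with other runs, and later promoted to production.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="churn_model")
```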

Posted 2 weeks ago

Apply

2.0 years

3 - 5 Lacs

Indore

On-site

🤖 Job Title: AI/ML Developer Trainer
📍 Location: Indore
🕒 Job Type: Full-time
💰 Salary: ₹30,000 - ₹40,000 (based on experience)

About the Role:
We are seeking an experienced and passionate AI/ML Developer Trainer to train aspiring developers and working professionals in the field of Artificial Intelligence and Machine Learning. The trainer should have strong domain knowledge, practical experience, and the ability to explain complex concepts in a clear and engaging manner.

Key Responsibilities:
- Deliver hands-on training sessions on AI/ML fundamentals and applications
- Teach key tools and libraries such as Python, NumPy, Pandas, Scikit-learn, TensorFlow, Keras, PyTorch
- Explain and guide learners through machine learning algorithms, deep learning, NLP, and model deployment
- Design and update course materials, projects, assessments, and real-world case studies
- Provide mentorship and support for capstone projects and job-readiness
- Conduct live classes, Q&A sessions, workshops, and doubt-clearing sessions (online or offline)
- Stay updated with the latest trends in AI/ML and integrate them into the curriculum

Required Skills:
- Proficiency in Python and AI/ML libraries (Scikit-learn, TensorFlow, PyTorch, etc.)
- Strong understanding of supervised/unsupervised learning, neural networks, CNNs, RNNs, etc.
- Familiarity with data preprocessing, model evaluation, and deployment (Flask, Streamlit, FastAPI); a serving sketch follows this listing
- Excellent presentation and communication skills
- Ability to explain technical topics to both beginners and intermediate learners

Preferred Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field
- 2+ years of experience as an AI/ML developer or trainer
- Prior experience in teaching, mentoring, bootcamps, or ed-tech platforms
- Knowledge of cloud platforms (AWS, GCP, or Azure) and MLOps tools is a plus

To Apply:
Please send your resume, LinkedIn, and demo session (if available) to hr@dflyinternational.com
Subject Line: Application for AI/ML Trainer – [Your Name]
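The required skills above mention model deployment with Flask, Streamlit, or FastAPI. Below is a minimal FastAPI serving sketch under assumed names (a saved model.joblib, a flat feature vector, a /predict route); it is one possible teaching example, not part of the posting.

```python
# Minimal model-serving sketch with FastAPI. The model file, feature layout,
# and endpoint path are assumptions for illustration only.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo model service")
model = joblib.load("model.joblib")  # assumed: a scikit-learn model saved earlier


class PredictRequest(BaseModel):
    features: list[float]  # assumed flat feature vector


@app.post("/predict")
def predict(req: PredictRequest):
    X = np.asarray(req.features, dtype=float).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}

# Run locally (assuming this file is saved as app.py):
#   uvicorn app:app --reload
```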

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society.

We are currently looking for a Data Science Instructor - Data Mining to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association rules mining, Clustering, and Anomaly detection (a brief example follows this listing).
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university.

Required Qualifications:
● A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills.

Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.

To know more about us, feel free to explore our website: https://www.newtonschool.co/

We look forward to the possibility of having you join our academic team and help shape the future of tech education!
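The syllabus topics above include classification, clustering, and anomaly detection. Below is a small classroom-style sketch on synthetic data that contrasts a supervised classifier with unsupervised clustering; the dataset and parameters are arbitrary choices for illustration.

```python
# Classroom-style sketch of two algorithm families named in the listing:
# classification and clustering, on a synthetic dataset (assumed values).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import accuracy_score, silhouette_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=600, centers=3, cluster_std=1.2, random_state=42)

# Classification: predict the known label of each point.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Clustering: recover group structure without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("clustering silhouette:", silhouette_score(X, km.labels_))
```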

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Scientists for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract based, with project timelines from 2-12 months, or freelancing.
- Be a part of an elite community of professionals who can solve complex AI challenges.
- Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Lead design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
- Architect advanced machine learning models (deep learning, RL, ensemble) and apply statistical analysis for business insights.
- Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
- Own the full lifecycle of data science projects—from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
- Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models (a brief example follows this listing).

Required Skills:
- Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
- Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
- Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
- Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
- Contributions to open-source projects or published research.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments and interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with available projects.

Skip the noise. Focus on opportunities built for you!
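The responsibilities above include A/B testing to validate models. One common approach is a two-proportion z-test on conversion counts, sketched below; the counts and the 0.05 threshold are assumptions, not figures from the posting.

```python
# Minimal A/B-test sketch: two-proportion z-test comparing control vs. variant.
# The conversion and exposure counts are made-up illustrative numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]       # control, variant successes (assumed)
exposures = [10_000, 10_000]   # users in each arm (assumed)

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

# A common (assumed) decision rule: ship the variant only if p < 0.05 and the
# observed lift is practically meaningful.
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]
print(f"absolute lift = {lift:.4%}")
```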

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

India

Remote

About Us:
At Soul AI, we develop AI solutions with real-world benefits. Built by a top-tier team from IITs and IIMs, and operating out of SF and Hyderabad, our AI team tackles meaningful problems at scale. We're hiring an AI/ML Engineer with strong Python and machine learning skills.

Key Responsibilities:
- Design and train machine learning models using Python (a brief pipeline sketch follows this listing).
- Perform data preprocessing and feature engineering.
- Optimize models for accuracy and performance.
- Deploy models in production environments.

Required Qualifications:
- 2+ years of experience in ML/AI using Python.
- Strong grasp of scikit-learn, TensorFlow, or PyTorch.
- Experience with deployment tools and APIs.

Why Join Us?
- Competitive pay (₹1200/hour).
- Flexible hours.
- Remote opportunity.
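The responsibilities above cover preprocessing, feature engineering, and model training in Python. Below is a minimal scikit-learn Pipeline sketch; the CSV path and column names are placeholders assumed for illustration.

```python
# Minimal preprocess-then-train sketch with a scikit-learn Pipeline.
# "training_data.csv" and the column names are assumed placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("training_data.csv")  # assumed input file
numeric_cols = ["age", "income"]       # assumed numeric features
categorical_cols = ["segment"]         # assumed categorical feature
X, y = df[numeric_cols + categorical_cols], df["label"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("clf", GradientBoostingClassifier(random_state=0)),
])

# Cross-validation keeps preprocessing inside each fold, avoiding data leakage.
print(cross_val_score(model, X, y, cv=5).mean())
```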

Posted 2 weeks ago

Apply

3.0 years

4 - 10 Lacs

Chandigarh, Chandigarh

On-site

Job Title: Data Scientist
Location: Chandigarh (Work from Office)
Experience Required: 3+ Years
Company: SparkBrains Private Limited

Job Description:
We are looking for a Data Scientist with 3+ years of hands-on experience in Machine Learning, Deep Learning, and Large Language Models (LLMs). The ideal candidate should possess strong analytical skills, expertise in data modeling, and the ability to develop and deploy AI-driven solutions that add value to our business and clients.

Key Responsibilities:
1) Data Collection & Preprocessing: Gather, clean, and prepare structured and unstructured data for model training and evaluation.
2) Model Development: Design, develop, and optimize machine learning and deep learning models to solve real-world business problems.
3) LLM Integration: Build, fine-tune, and deploy Large Language Models (LLMs) for various NLP tasks, including text generation, summarization, and sentiment analysis (a brief example follows this listing).
4) Feature Engineering: Identify relevant features and implement feature extraction techniques to improve model accuracy.
5) Model Evaluation & Optimization: Conduct rigorous evaluation and fine-tuning to enhance model performance and ensure scalability.
6) Data Visualization & Insights: Create dashboards and reports to communicate findings and insights effectively to stakeholders.
7) API Development & Deployment: Develop APIs and integrate AI/ML models into production systems.
8) Collaboration & Documentation: Collaborate with cross-functional teams to understand business requirements and document all processes and models effectively.

Required Skills & Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
Experience: Minimum 3+ years of proven experience as a Data Scientist/AI Engineer.
Technical Skills:
1) Proficiency in Python and relevant ML/AI libraries such as TensorFlow, PyTorch, Scikit-Learn, etc.
2) Hands-on experience with LLMs (e.g., OpenAI, Hugging Face) and fine-tuning models.
3) Strong understanding of Natural Language Processing (NLP), neural networks, and deep learning architectures.
4) Knowledge of data wrangling, data visualization, and feature engineering techniques.
5) Experience with APIs, cloud platforms (AWS, Azure, or GCP), and deployment of AI models in production.
Analytical & Problem-Solving Skills: Strong analytical mindset with the ability to interpret data and derive actionable insights.
Communication Skills: Excellent verbal and written communication skills to effectively collaborate with technical and non-technical teams.

Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative work environment with a focus on continuous learning.
- Exposure to diverse industries and domains.
- Competitive salary and growth opportunities.

Job Types: Full-time, Permanent
Pay: ₹480,000.00 - ₹1,000,000.00 per year
Schedule: Day shift, Monday to Friday, weekend availability
Work Location: In person
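Among the NLP tasks listed above is sentiment analysis. The sketch below uses the Hugging Face transformers pipeline with its default sentiment model on two made-up review sentences; it is a starting-point illustration, not this company's stack.

```python
# Minimal sentiment-analysis sketch with the Hugging Face pipeline API.
# The example sentences are invented; the model is whatever transformers
# selects by default for this task.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

reviews = [
    "The onboarding flow was smooth and the dashboard is genuinely useful.",
    "Support took three days to reply and the issue is still not fixed.",
]

for review, result in zip(reviews, sentiment(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8}  {result['score']:.3f}  {review}")
```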

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves mentoring junior team members and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools such as Apache Spark, Kafka, and Airflow.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.

Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Senior Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools such as Apache Spark, Kafka, and Airflow.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.

Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introducing Thinkproject Platform
Pioneering a new era and offering a cohesive alternative to the fragmented landscape of construction software, Thinkproject seamlessly integrates the most extensive portfolio of mature solutions with an innovative platform, providing unparalleled features, integrations, user experiences, and synergies. By combining information management expertise and in-depth knowledge of the building, infrastructure, and energy industries, Thinkproject empowers customers to efficiently deliver, operate, regenerate, and dispose of their built assets across their entire lifecycle through a Connected Data Ecosystem.

We are seeking a hands-on Applied Machine Learning Engineer to join our team and lead the development of ML-driven insights from historical data in our contracts management, assets management, and common data platform. This individual will work closely with our data engineering and product teams to design, develop, and deploy scalable machine learning models that can parse, learn from, and generate value from both structured and unstructured contract data. You will use BigQuery and its ML capabilities (including SQL and Python integrations) to prototype and productionize models across a variety of NLP and predictive analytics use cases. Your work will be critical in enhancing our platform's intelligence layer, including search, classification, recommendations, and risk detection.

What your day will look like

Key Responsibilities:
- Model Development: Design and implement machine learning models using structured and unstructured historical contract data to support intelligent document search, clause classification, metadata extraction, and contract risk scoring.
- BigQuery ML Integration: Build, train, and deploy ML models directly within BigQuery using SQL and/or Python, leveraging native GCP tools (e.g., Vertex AI, Dataflow, Pub/Sub). A brief example follows this listing.
- Data Preprocessing & Feature Engineering: Clean, enrich, and transform raw data (e.g., legal clauses, metadata, audit trails) into model-ready features using scalable and efficient pipelines.
- Model Evaluation & Experimentation: Conduct experiments, model validation, and A/B testing, and iterate based on precision, recall, F1-score, RMSE, etc.
- Deployment & Monitoring: Operationalize models in production environments with monitoring, retraining pipelines, and CI/CD best practices for ML (MLOps).
- Collaboration: Work cross-functionally with data engineers, product managers, legal domain experts, and frontend teams to align ML solutions with product needs.

What you need to fulfill the role

Skills and Experience:
- Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field.
- ML Expertise: Strong applied knowledge of supervised and unsupervised learning, classification, regression, clustering, feature engineering, and model evaluation.
- NLP Experience: Hands-on experience working with textual data, especially in NLP use cases like entity extraction, classification, and summarization.
- GCP & BigQuery: Proficiency with Google Cloud Platform, especially BigQuery and BigQuery ML; comfort querying large-scale datasets and integrating with external ML tooling.
- Programming: Proficient in Python and SQL; familiarity with libraries such as Scikit-learn, TensorFlow, PyTorch, Keras.
- MLOps Knowledge: Experience with model deployment, monitoring, versioning, and ML CI/CD best practices.
- Data Engineering Alignment: Comfortable working with data pipelines and tools like Apache Beam, Dataflow, Cloud Composer, and pub/sub systems.
- Version Control: Strong Git skills and experience collaborating in Agile teams.

Preferred Qualifications:
- Experience working with contractual or legal text datasets.
- Familiarity with document management systems, annotation tools, or enterprise collaboration platforms.
- Exposure to Vertex AI, LangChain, RAG-based retrieval, or embedding models for Gen AI use cases.
- Comfortable working in a fast-paced, iterative environment with changing priorities.

What we offer
Lunch 'n' Learn Sessions | Women's Network | LGBTQIA+ Network | Coffee Chat Roulette | Free English Lessons | Thinkproject Academy | Social Events | Volunteering Activities | Open Forum with Leadership Team (Tp Café) | Hybrid working | Unlimited learning

We are a passionate bunch here. To join Thinkproject is to shape what our company becomes. We take feedback from our staff very seriously and give them the tools they need to help us create our fantastic culture of mutual respect. We believe that investing in our staff is crucial to the success of our business.

Your contact: Mehal Mehta
Please submit your application, including salary expectations and potential date of entry, by submitting the form on the next page.

Working at thinkproject.com - think career. think ahead.
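The role above centers on building models directly inside BigQuery with BigQuery ML. A minimal sketch follows, run from Python via the BigQuery client; the project, dataset, table, and column names are placeholders, and the model choice (logistic regression for clause risk) is an assumption for illustration.

```python
# Minimal BigQuery ML sketch driven from Python. Project, dataset, table, and
# column names are placeholder assumptions; the SQL uses standard BigQuery ML
# CREATE MODEL / ML.EVALUATE syntax.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

train_sql = """
CREATE OR REPLACE MODEL `my_dataset.clause_risk_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_high_risk']) AS
SELECT clause_length, has_penalty_term, counterparty_type, is_high_risk
FROM `my_dataset.contract_clauses`
"""
client.query(train_sql).result()  # wait for training to finish

eval_sql = "SELECT * FROM ML.EVALUATE(MODEL `my_dataset.clause_risk_model`)"
for row in client.query(eval_sql).result():
    print(dict(row))  # precision, recall, log_loss, roc_auc, etc.
```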

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business.

Responsibilities
- Analyze raw data: assess quality, cleanse, and structure it for downstream processing.
- Design accurate and scalable prediction algorithms.
- Collaborate with the engineering team to bring analytical prototypes to production.
- Generate actionable insights for business improvements.
- Statistical Modeling: Develop and implement core statistical models, including linear and logistic regression, decision trees, and various classification algorithms. Analyze and interpret model outputs to inform business decisions.
- Advanced NLP: Work on complex NLP tasks, including data cleansing, text preprocessing, and feature engineering. Develop models for text classification, sentiment analysis, and entity recognition.
- LLM Integration: Design and optimize pipelines for integrating Large Language Models (LLMs) into applications, with a focus on Retrieval-Augmented Generation (RAG) systems. Work on fine-tuning LLMs to enhance their performance on domain-specific tasks. (A sketch of the retrieval step follows this listing.)
- ETL Processes: Design ETL (Extract, Transform, Load) processes to ensure that data is accurately extracted from various sources, transformed into usable formats, and loaded into data warehouses or databases for analysis.
- BI Reporting and SQL: Collaborate with BI teams to ensure that data pipelines support efficient reporting. Write complex SQL queries to extract, analyze, and visualize data for business intelligence reports. Ensure that data models are optimized for reporting and analytics.
- Data Storage and Management: Collaborate with data engineers to design and implement efficient storage solutions for structured datasets and semi-structured text datasets. Ensure that data is accessible, well-organized, and optimized for retrieval.
- Model Evaluation and Optimization: Regularly evaluate models using appropriate metrics and improve them through hyperparameter tuning, feature selection, and other optimization techniques. Deploy models in production environments and monitor their performance.
- Collaboration: Work closely with cross-functional teams, including software engineers, data engineers, and product managers, to integrate models into applications and ensure they meet business requirements.
- Innovation: Stay updated with the latest advancements in machine learning, NLP, and data engineering. Experiment with new algorithms, tools, and frameworks to continuously enhance the capabilities of our models and data processes.

Qualifications
- Overall Experience: 5+ years of overall experience working in a modern software engineering environment, with exposure to best practices in code management, DevOps, and cloud data/ML engineering. Proven track record of developing and deploying machine learning models in production.
- ML Experience: 3+ years of experience in machine learning engineering or data science with a focus on fundamental statistical modeling. Experience in feature engineering, basic model tuning, and understanding model drift over time. Strong foundations in statistics for applied ML.
- Data Experience: 1+ year(s) in building data engineering ETL processes and BI reporting.
- NLP Experience: 1+ year(s) of experience working on NLP use cases, including large-scale text data processing and storage, fundamental NLP models for text classification and topic modeling, and, more recently, LLM models and their applications.
- Core Technical Skills: Proficiency in Python and relevant ML and/or NLP libraries. Strong SQL skills for data querying, analysis, and BI reporting. Experience with ETL tools and data pipeline management.
- BI Reporting: Experience in designing and optimizing data models for BI reporting, using tools like Tableau, Power BI, or similar.
- Education: Bachelor's or Master's degree in Computer Science or Data Science.
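The LLM-integration responsibility above focuses on Retrieval-Augmented Generation. The sketch below illustrates only the retrieval step, with TF-IDF similarity standing in for an embedding model or vector database; the documents, query, and prompt format are invented for the example.

```python
# Simplified RAG-retrieval sketch: rank stored passages by similarity to a
# query and keep the top ones as context for an LLM prompt. TF-IDF stands in
# for an embedding model; documents and query are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices are payable within 30 days of receipt unless otherwise agreed.",
    "Either party may terminate the agreement with 60 days written notice.",
    "The supplier maintains liability insurance of at least one million dollars.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_matrix = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("What is the notice period for termination?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # this prompt would then be sent to the LLM of choice
```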

Posted 2 weeks ago

Apply

0.0 - 20.0 years

3 - 4 Lacs

Delhi, Delhi

On-site

Job Title: Data Analytics cum Data Science Trainer
Location: South Extension & Preet Vihar, Delhi
Job Type: Full-Time / Part-Time | Classroom Training

About Us
TGC India, through its technical wings Pythontraining.net and Datasciencetraining.co.in, is a leading training institute in Delhi, shaping careers in Digital Media, IT, and Emerging Technologies for over 20 years. With state-of-the-art infrastructure and strong industry connections, we empower students to build successful careers in Data Science, Machine Learning, AI, and more. We are expanding our academic team and looking for a passionate Data Analytics cum Data Science Trainer who can inspire learners and deliver cutting-edge training.

Key Responsibilities
- Deliver classroom training on Data Analytics and Data Science tools and technologies (Python, R, SQL, Power BI, Tableau, Machine Learning, etc.).
- Teach concepts such as data preprocessing, visualization, statistical modeling, and deployment of ML models.
- Guide students on real-world projects, internships, and portfolio building.
- Continuously upgrade course content to keep pace with industry trends and tools (including AI & Gen AI apps).
- Mentor and evaluate students to ensure high success and placement rates.
- Collaborate with the academic team to create assignments, assessments, and capstone projects.

Desired Candidate Profile
- Education: Bachelor's/Master's degree in Computer Science, Data Science, Statistics, or related field is preferred but not essential.
- Experience: 2+ years in Data Science/Data Analytics roles or teaching experience.
- Strong hands-on knowledge of Python, Machine Learning, Pandas, NumPy, Scikit-learn, Power BI/Tableau, and SQL.
- Knowledge of Deep Learning frameworks (Keras/TensorFlow/PyTorch) is a plus.
- Excellent communication and presentation skills to deliver training effectively.
- Passionate about teaching and mentoring students.

Why Join TGC India?
- Opportunity to train on the latest tools and technologies, including AI & Gen AI apps
- Work in a modern classroom setup with smart labs and high-tech systems
- Attractive salary package with incentives based on student success rates
- Access to continuous learning and certification programs
- Be a part of Delhi's top institute for Data Science & Analytics education

Perks & Benefits
✅ Competitive Salary
✅ Performance-based incentives
✅ Free access to premium tools & courses for self-learning
✅ Paid leaves and festive holidays
✅ Professional growth opportunities

Job Locations
- South Extension, Delhi
- Preet Vihar, Delhi

Apply Now
If you are passionate about teaching and shaping the next generation of Data Scientists, we want to hear from you! Send your CV to info@tgcindia.com or manojk.tgc@gmail.com, or WhatsApp 9810031162 (message only).

About TGC
TGC India is one of Delhi's most trusted institutes for career training in Data Science, Analytics, Digital Media, Digital Marketing, and more. Join our mission to transform lives through quality education.

Job Types: Full-time, Part-time
Pay: ₹300,000.00 - ₹450,000.00 per year
Expected hours: 16 per week
Benefits: Cell phone reimbursement, Internet reimbursement, Leave encashment
Work Location: In person

Posted 2 weeks ago

Apply

0 years

0 Lacs

Thiruvarur, Tamil Nadu, India

On-site

Job Title: Data Science Engineer – Thiruvarur (On-Site)

Are you passionate about AI, Machine Learning, and Data Engineering? Join us as a Data Science Engineer and work on cutting-edge projects that blend ML, agentic AI systems, Java development, and data visualization.

🧠 Location: Thiruvarur (On-site)
📅 Type: Full-time

🌟 What You'll Do:
- Assist in developing and training ML models (classification, NLP, prediction).
- Build data pipelines and preprocessing workflows using Python or Java.
- Contribute to agentic AI systems (e.g., AutoGPT, LangChain agents).
- Create insightful visualizations to communicate data-driven insights.
- Perform EDA to uncover trends and patterns.
- Help integrate AI modules into Java-based backend systems.
- Participate in AI/ML research, prototyping, and documentation.

✅ What We're Looking For:
- Pursuing a degree in Computer Science, AI, Data Science, or a related field.
- Strong in Java and Python programming.
- Familiarity with ML tools (scikit-learn, TensorFlow, PyTorch).
- Passion for, or experience in, LLM-based agent systems.
- Good understanding of data structures, algorithms, and coding best practices.
- Experience with data viz tools like Matplotlib, Seaborn, or Plotly.
- Curious, collaborative, and eager to learn cutting-edge tech.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description: AI/ML Engineer

Job Responsibilities:
- Develop and implement machine learning models and algorithms for use cases such as anomaly detection systems, natural language processing, and statistical solutions using data mining, classification, regression, and clustering algorithms (a brief anomaly-detection example follows this listing).
- Collaborate with cross-functional teams (data scientists, software engineers, product managers) to integrate ML solutions into production systems.
- Optimize and fine-tune models for performance, scalability, and efficiency.
- Conduct exploratory data analysis (EDA) and feature engineering.
- Stay up to date with the latest advancements in ML and AI.

Job Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Strong programming skills in Python and experience with libraries such as TensorFlow, PyTorch, Spark, PySpark, scikit-learn, and Matplotlib.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Solid understanding of machine learning concepts (supervised, unsupervised, deep learning, and Natural Language Processing).
- Experience with cloud platforms (Azure, AWS) and distributed computing.
- Excellent problem-solving skills and ability to work independently.

Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job ID: R-72664
Date posted: 07/17/2025
Benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development
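The first responsibility above mentions anomaly detection systems. Below is a minimal IsolationForest sketch on synthetic data; the data distribution and contamination rate are assumptions for illustration.

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest.
# The synthetic data, feature count, and contamination rate are assumed values.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 4))     # typical traffic
anomalies = rng.normal(loc=6.0, scale=1.0, size=(50, 4))   # injected outliers
X = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = detector.predict(X)             # +1 = inlier, -1 = anomaly
scores = detector.decision_function(X)   # lower = more anomalous

print("flagged anomalies:", int((labels == -1).sum()))
print("lowest score:", round(float(scores.min()), 3))
```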

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, Karnataka, India
Job ID: R-231693
Date posted: 17/07/2025

Job Title: IT Manager, Generative AI Engineer

Introduction to role:
Are you ready to lead the charge in revolutionizing AI solutions? As an IT Manager specializing in Generative AI Engineering, you'll be at the forefront of innovation within Alexion's IT RDU organization. Reporting to the IT Director of Data Science, your mission will be to develop and implement cutting-edge Machine Learning solutions, with a keen focus on Generative AI. Your contributions will be pivotal in scaling and deploying AI models that tackle our most pressing business challenges. Are you up for the challenge?

Accountabilities:
- Lead the development and refinement of Generative AI systems and applications using state-of-the-art models to solve complex business problems.
- Contribute to the development and maintenance of MLOps infrastructure and tools for Machine Learning and Generative AI models.
- Participate in the full lifecycle development of Generative AI agents, ensuring meaningful interaction with users or environments.
- Support the integration of pre-trained Generative AI models into custom applications for new functionalities.
- Collaborate with senior engineers to design interactive systems utilizing Generative AI for dynamic content or decision support.
- Contribute to building and maintaining infrastructure for running Generative AI applications.
- Aid in implementing user interfaces or API endpoints for seamless interaction with Generative AI functionalities.
- Assist with data pipelines and preprocessing steps for feeding Generative AI models with appropriate data.
- Help monitor the operational health of Generative AI applications in production, identifying areas for improvement or optimization.
- Contribute to developing assets, frameworks, and reusable components for Generative AI, Machine Learning, and MLOps practices.
- Engage with stakeholders to gather feedback and iteratively improve Generative AI applications' features and capabilities.
- Support the integration of Generative AI models within the Machine Learning ecosystem, ensuring ethical use and business relevance.
- Assist with monitoring and governance of Generative AI models to align with evolving business and regulatory requirements.

Essential Skills/Experience:
- Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field
- 3+ years of experience in Machine Learning, data science, or software development
- Familiarity with ML Engineering concepts and tools, including Docker/containerization, and basic model upkeep
- Exposure to CI/CD practices and familiarity with tools such as GitHub Actions, Jenkins, or Bitbucket Pipelines
- Experience with Python and familiarity with ML libraries such as TensorFlow, Keras, or PyTorch
- Experience with cloud computing platforms such as AWS, Azure, or GCP
- Understanding of software engineering principles and exposure to agile methodologies
- Good communication and collaboration skills, with enthusiasm for working in a cross-functional team environment
- Proactive and eager to learn, with strong analytical and problem-solving skills

Desirable Skills/Experience:
- Experience in the pharmaceutical industry or related fields
- Master's degree in a related field or ongoing education towards an advanced degree
- Strong knowledge of AI techniques and tools, including NLP, optimization, and deep learning

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines.

In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, you'll find an environment where innovation thrives. Our commitment to rare diseases means your work will have a profound impact on patients' lives. With a rapidly expanding portfolio, you'll enjoy the entrepreneurial spirit of a leading biotech combined with the resources of a global biopharma. Here, your career is not just a path but a journey filled with opportunities for growth and connection. Ready to make a difference? Apply now and join us on this exciting journey!

Date Posted: 18-Jul-2025
Closing Date: 30-Jul-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness. The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness, or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, Karnataka, India | Job ID: R-231692 | Date posted: 17/07/2025

Job Title: Senior Manager IT, Generative AI Engineer

Introduction to role: Are you ready to lead the charge in transforming business challenges into innovative solutions? Play a pivotal role as a Senior Manager in Alexion IT RDU by working with the IT Director of Data Science. Your mission? To lead technically and strategically in crafting and deploying state-of-the-art Machine Learning solutions, with a particular emphasis on Generative AI. Your expertise will be key in architecting, scaling, and deploying impactful AI models that drive our business forward.

Accountabilities:
Lead the development and refinement of Generative AI systems and applications using state-of-the-art models to tackle complex business problems.
Drive the design, development, and continuous improvement of robust infrastructure and tools for Machine Learning and Generative AI models, ensuring scalability and efficiency.
Provide technical mentorship for the full lifecycle development of Generative AI agents, from conceptualization to deployment, ensuring meaningful, reliable, and secure interactions.
Architect and lead the integration of pre-trained Generative AI models into custom applications to enable innovative functionalities and enhance existing capabilities.
Work with senior engineers to craft interactive systems using Generative AI for dynamic content or decision support, encouraging innovation and standard methodologies.
Build and optimize scalable and resilient infrastructure for running Generative AI applications in production, ensuring high performance, cost-efficiency, and reliability.
Oversee the implementation of user interfaces or API endpoints for seamless interaction with Generative AI functionalities, ensuring a positive user experience.
Design and oversee data pipelines and preprocessing steps for feeding Generative AI models with high-quality data for generation tasks.
Define, implement, and monitor the operational health and performance of Generative AI applications in production, driving continuous improvement based on performance metrics.
Contribute to the development of assets, frameworks, and reusable components for Generative AI, Machine Learning, and MLOps practices.
Engage with diverse stakeholders to translate complex business needs into clear technical requirements, gather feedback, and iteratively drive improvements for Generative AI applications.
Support the integration of Generative AI models within the Machine Learning ecosystem, ensuring ethical use and business relevance.
Assist with monitoring and governance of Generative AI models to align with evolving business and regulatory requirements.
Essential Skills/Experience:
Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field
6+ years of experience in Machine Learning, data science, or software development
Strong understanding and practical experience with advanced ML Engineering concepts and tools, including Docker/containerization, orchestration (e.g., Kubernetes), and robust MLOps practices for end-to-end model lifecycle management
Demonstrated experience designing, implementing, and optimizing robust CI/CD pipelines using tools such as GitHub Actions, Jenkins, or Bitbucket pipelines for automated ML model deployment and continuous integration
Proficiency in Python programming, coupled with deep experience in utilizing and optimizing leading ML libraries such as TensorFlow, Keras, or PyTorch for complex Generative AI applications and research
Experience with cloud computing platforms such as AWS, Azure, or GCP
Comprehensive understanding of software engineering principles, design patterns, and best practices (e.g., modular design, testing, code review), with proven experience leading or significantly contributing within agile development methodologies
Strong communication and collaboration skills: eager to work in diverse teams, able to articulate technical concepts to varied audiences, and able to motivate teams to achieve project goals
Proactive and eager to learn, with strong analytical and problem-solving skills

Desirable Skills/Experience:
Experience in the pharmaceutical industry or related fields
Master's or PhD degree in a related field (e.g., Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a closely related quantitative subject area)
Proven track record of successfully leading and delivering complex, production-grade Generative AI projects from ideation and research to deployment and operationalization

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, your career journey is intertwined with a mission that truly matters. Our passion drives us to innovate continuously while creating meaningful value. Here you will find an energizing culture where connections are built to explore new ideas, with tailored development programs designed not just for skill enhancement but also for fostering a deep understanding of our patients' journeys. You will be part of a community that celebrates diversity and innovation while making a difference where it truly counts. Ready to make an impact? Apply now to join our team!

Date Posted: 18-Jul-2025
Closing Date: 30-Jul-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
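The accountability above about defining and monitoring the operational health of Generative AI applications in production can be sketched generically. The wrapper below is a minimal illustration, not a production monitoring stack: the function names, the 100-call reporting interval, and the chosen metrics (latency and error count) are assumptions made for the example, and a real system would export such metrics to a dashboarding or alerting tool.

```python
# Minimal sketch: track operational health metrics (latency, error count) for a
# generative AI call. Names, metrics, and thresholds are illustrative only.
import logging
import time
from statistics import mean

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai_monitor")

latencies = []   # seconds per call
errors = 0       # total failed calls

def monitored_generate(generate_fn, prompt):
    """Call `generate_fn` (any text-generation callable) and record health metrics."""
    global errors
    start = time.perf_counter()
    try:
        return generate_fn(prompt)
    except Exception:
        errors += 1
        logger.exception("Generation failed for prompt: %.40s", prompt)
        return None
    finally:
        latencies.append(time.perf_counter() - start)
        if len(latencies) % 100 == 0:  # periodic summary a dashboard could scrape
            logger.info("avg latency %.3fs over %d calls, %d errors",
                        mean(latencies), len(latencies), errors)
```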

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, Karnataka, India | Job ID: R-231691 | Date posted: 17/07/2025

Job Title: Senior Manager IT, Generative AI Engineer

Introduction to role: Are you ready to lead the charge in transforming business challenges into innovative solutions? Play a pivotal role as a Senior Manager in Alexion IT RDU by working with the IT Director of Data Science. Your mission? To lead technically and strategically in crafting and deploying state-of-the-art Machine Learning solutions, with a particular emphasis on Generative AI. Your expertise will be key in architecting, scaling, and deploying impactful AI models that drive our business forward.

Accountabilities:
Lead the development and refinement of Generative AI systems and applications using state-of-the-art models to tackle complex business problems.
Drive the design, development, and continuous improvement of robust infrastructure and tools for Machine Learning and Generative AI models, ensuring scalability and efficiency.
Provide technical mentorship for the full lifecycle development of Generative AI agents, from conceptualization to deployment, ensuring meaningful, reliable, and secure interactions.
Architect and lead the integration of pre-trained Generative AI models into custom applications to enable innovative functionalities and enhance existing capabilities.
Work with senior engineers to craft interactive systems using Generative AI for dynamic content or decision support, encouraging innovation and standard methodologies.
Build and optimize scalable and resilient infrastructure for running Generative AI applications in production, ensuring high performance, cost-efficiency, and reliability.
Oversee the implementation of user interfaces or API endpoints for seamless interaction with Generative AI functionalities, ensuring a positive user experience.
Design and oversee data pipelines and preprocessing steps for feeding Generative AI models with high-quality data for generation tasks.
Define, implement, and monitor the operational health and performance of Generative AI applications in production, driving continuous improvement based on performance metrics.
Contribute to the development of assets, frameworks, and reusable components for Generative AI, Machine Learning, and MLOps practices.
Engage with diverse stakeholders to translate complex business needs into clear technical requirements, gather feedback, and iteratively drive improvements for Generative AI applications.
Support the integration of Generative AI models within the Machine Learning ecosystem, ensuring ethical use and business relevance.
Assist with monitoring and governance of Generative AI models to align with evolving business and regulatory requirements.
Essential Skills/Experience:
Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field
6+ years of experience in Machine Learning, data science, or software development
Strong understanding and practical experience with advanced ML Engineering concepts and tools, including Docker/containerization, orchestration (e.g., Kubernetes), and robust MLOps practices for end-to-end model lifecycle management
Demonstrated experience designing, implementing, and optimizing robust CI/CD pipelines using tools such as GitHub Actions, Jenkins, or Bitbucket pipelines for automated ML model deployment and continuous integration
Proficiency in Python programming, coupled with deep experience in utilizing and optimizing leading ML libraries such as TensorFlow, Keras, or PyTorch for complex Generative AI applications and research
Experience with cloud computing platforms such as AWS, Azure, or GCP
Comprehensive understanding of software engineering principles, design patterns, and best practices (e.g., modular design, testing, code review), with proven experience leading or significantly contributing within agile development methodologies
Strong communication and collaboration skills: eager to work in diverse teams, able to articulate technical concepts to varied audiences, and able to motivate teams to achieve project goals
Proactive and eager to learn, with strong analytical and problem-solving skills

Desirable Skills/Experience:
Experience in the pharmaceutical industry or related fields
Master's or PhD degree in a related field (e.g., Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a closely related quantitative subject area)
Proven track record of successfully leading and delivering complex, production-grade Generative AI projects from ideation and research to deployment and operationalization

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, your career journey is intertwined with a mission that truly matters. Our passion drives us to innovate continuously while creating meaningful value. Here you will find an energizing culture where connections are built to explore new ideas, with tailored development programs designed not just for skill enhancement but also for fostering a deep understanding of our patients' journeys. You will be part of a community that celebrates diversity and innovation while making a difference where it truly counts. Ready to make an impact? Apply now to join our team!

Date Posted: 18-Jul-2025
Closing Date: 30-Jul-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

As a motivated and highly skilled Data Scientist joining the Flight Safety team, your role will be crucial in utilizing advanced analytics and machine learning techniques to drive safety-related decisions and enhance operational processes. You will primarily focus on analyzing time-series and multivariate data to detect anomalies, identify patterns, and create predictive models that contribute to improving flight safety and minimizing risks.

Your responsibilities will include conducting exploratory data analysis to reveal insights from complex datasets, applying advanced anomaly detection techniques to spot irregularities and potential safety risks, and developing machine learning and deep learning models like ANN, RNN, and CNN for predictive safety analytics. Using big data analytics tools and AI/ML frameworks, you will support data-driven decision-making within the field of flight safety. Additionally, you will design custom algorithms tailored to meet specific domain requirements with precision and reliability, and write efficient, maintainable code in Python and MATLAB for data processing, modeling, and algorithm deployment. Collaboration with cross-functional teams will be essential in translating analytical findings into actionable safety initiatives.

To qualify for this role, you should hold a Bachelor's degree in Engineering with a specialization in Data Science or a related field, along with certification in Data Science. With a minimum of 6 years of practical experience in data science, machine learning, or analytics roles, you should have a proven track record in developing AI/ML models, particularly in anomaly detection and predictive analytics. Proficiency in data analysis, exploratory data analysis, anomaly detection (time-series & multivariate data), machine learning, deep learning, big data analytics, and AI-driven risk forecasting is crucial. Strong programming skills in Python and MATLAB, coupled with expertise in custom algorithm design, data collection, preprocessing, and integration, are also required.

By joining our team, you will have the opportunity to contribute to cutting-edge data science projects in the aviation safety sector, work in a collaborative and innovation-oriented environment, and be part of a mission-critical team dedicated to predictive safety and operational excellence. This is a full-time, permanent position with benefits including cell phone reimbursement, provided food, health insurance, paid sick time, paid time off, provident fund, and the option to work from home. The work schedule is during the day, and the work location is in person.
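The anomaly-detection responsibilities described above can be illustrated with a small sketch. The example below is generic, not the Flight Safety team's actual pipeline: it assumes scikit-learn's IsolationForest applied to rolling-window statistics of a multivariate time series, and the column names, window length, and contamination rate are invented for illustration.

```python
# Minimal sketch: flag anomalous multivariate time-series windows with IsolationForest.
# Column names, window size, and contamination rate are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic data standing in for recorded flight parameters.
df = pd.DataFrame({
    "altitude_rate": rng.normal(0, 1, 1000),
    "airspeed": rng.normal(250, 5, 1000),
    "vertical_accel": rng.normal(1, 0.05, 1000),
})

# Rolling-window statistics turn the raw series into per-window feature vectors.
features = df.rolling(window=20).agg(["mean", "std"]).dropna()
features.columns = ["_".join(col) for col in features.columns]

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(features)           # -1 marks an anomalous window
anomalous_windows = features.index[labels == -1]
print(f"{len(anomalous_windows)} anomalous windows out of {len(features)}")
```

In practice the synthetic data would be replaced by recorded flight parameters, and the flagged windows would feed into the predictive-safety review the posting describes.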

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
Total experience: 7+ years.
Strong working knowledge of Machine Learning and NLP.
Strong hands-on programming skills in Python, with expertise in libraries such as scikit-learn, pandas, NumPy, TensorFlow/PyTorch.
Deep understanding of supervised and unsupervised learning, NLP, deep learning, and model evaluation techniques.
Proven experience in designing and deploying ML pipelines and data workflows at scale.
Proficiency with MLOps tools.
Experience with version control, containerization, and CI/CD in ML environments.
Strong understanding of data preprocessing, feature selection, and model interpretability techniques.
Experience with cloud platforms (AWS, GCP, Azure) for ML deployment.
Familiarity with generative AI models and LLMs.
Excellent communication and collaboration skills.

RESPONSIBILITIES:
Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project.
Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realize it.
Determining and implementing design methodologies and tool sets.
Enabling application development by coordinating requirements, schedules, and activities.
Being able to lead/support UAT and production rollouts.
Creating, understanding and validating WBS and estimated effort for given module/task, and being able to justify it.
Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
Giving constructive feedback to the team members and setting clear expectations.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
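Several requirements above (data preprocessing, ML pipelines, and model evaluation) come together in a standard scikit-learn pattern. The sketch below is a generic illustration using a built-in demo dataset; the dataset, scaler, classifier, and scoring metric are illustrative choices, not a prescribed solution.

```python
# Minimal sketch: a scikit-learn pipeline combining preprocessing, a classifier,
# and cross-validated evaluation. Dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                  # preprocessing step
    ("clf", LogisticRegression(max_iter=1000)),   # model step
])

# 5-fold cross-validation keeps preprocessing inside each fold (no data leakage).
scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print(f"mean F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the scaler inside the Pipeline is the key design choice: it guarantees the preprocessing is re-fit on each training fold, which mirrors how such a pipeline would be packaged for deployment.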

Posted 2 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Role Overview
We are seeking a highly skilled Data Scientist specializing in Generative AI to design, develop, and deploy state-of-the-art AI models solving real-world business challenges. You will work extensively with Large Language Models (LLMs), Generative Adversarial Networks (GANs), Retrieval-Augmented Generation (RAG) frameworks, and transformer architectures to create production-ready solutions across various domains.

Responsibilities:
Design, develop, and fine-tune advanced Generative AI models such as LLMs, GANs, and Diffusion models.
Implement and enhance RAG and transformer-based architectures for contextual understanding and document intelligence.
Customize and optimize LLMs for specific domain applications.
Build, maintain, and optimize ML pipelines and infrastructure for model training, evaluation, and deployment.
Collaborate with engineering teams to integrate AI models into user-facing applications.
Stay updated with the latest trends and research in Generative AI, open-source frameworks, and tools.
Analyze model outputs for quality and performance, ensuring adherence to ethical AI standards.

Skills:
Strong proficiency in Python and deep learning frameworks including TensorFlow, PyTorch, and HuggingFace Transformers.
Deep understanding of GenAI architectures including LLMs, RAG, GANs, and Autoencoders.
Experience with fine-tuning models using techniques such as LoRA, PEFT, or equivalents.
Knowledge of vector databases (e.g., FAISS, Pinecone) and embedding generation methods.
Experience handling datasets, preprocessing, and synthetic data generation.
Solid grasp of NLP concepts, prompt engineering, and safe AI practices.
Hands-on experience with API development, model deployment, and cloud platforms (AWS, GCP, Azure).
(ref:hirist.tech)
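The RAG and vector-database skills listed above can be illustrated with a retrieval sketch. The example below is a minimal illustration assuming the sentence-transformers and faiss libraries; the embedding model name, the toy documents, and the query are placeholders, and the retrieved passages would normally be inserted into an LLM prompt as context.

```python
# Minimal sketch: RAG-style retrieval with sentence embeddings and FAISS.
# Model name, documents, and query are illustrative placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices must be approved within five business days.",
    "Refunds over $500 require a manager's sign-off.",
    "All customer data is retained for seven years.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "Who needs to approve a large refund?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

# The retrieved passages would be stuffed into the LLM prompt as grounding context.
context = "\n".join(documents[i] for i in ids[0])
print(context)
```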

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

We are seeking a motivated and highly skilled Data Scientist to join the Flight Safety team in Gurgaon. Your role will involve leveraging advanced analytics and machine learning to drive safety-related decisions and enhance operational processes. As a critical member of the team, you will work with time-series and multivariate data to detect anomalies, identify patterns, and develop predictive models that enhance flight safety and risk mitigation.

Your responsibilities will include performing exploratory data analysis to reveal insights from complex datasets, applying advanced anomaly detection techniques on time-series and multivariate data, and developing machine learning and deep learning models for predictive safety analytics. You will also be utilizing big data analytics tools and AI/ML frameworks to support data-driven decision-making in flight safety, designing custom algorithms for domain-specific requirements, and writing efficient code using Python and MATLAB for data processing and modeling.

To qualify for this role, you should hold a Bachelor's degree in Engineering with a specialization in Data Science or a related field, along with a certification in Data Science. Additionally, you should have a minimum of 6 years of hands-on experience in data science, machine learning, or analytics roles, with a proven track record in developing AI/ML models, particularly in anomaly detection and predictive analytics. Key skills required for this position include data analysis, exploratory data analysis, anomaly detection on time-series and multivariate data, machine learning, deep learning, big data analytics, AI-driven risk forecasting, programming skills in Python and MATLAB, custom algorithm design, data collection, preprocessing, and integration.

By joining this team, you will have the opportunity to contribute to cutting-edge data science initiatives in aviation safety, work in a collaborative and innovation-driven environment, and be part of a mission-critical team focused on predictive safety and operational excellence.
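The deep-learning side of the predictive safety analytics described above is often approached with reconstruction-based models. The sketch below is a generic illustration assuming TensorFlow/Keras: an LSTM autoencoder is trained on synthetic "normal" windows, and windows with unusually high reconstruction error are flagged. The window size, layer widths, and percentile threshold are assumptions for the example, not the team's actual configuration.

```python
# Minimal sketch: an LSTM autoencoder whose reconstruction error can flag
# anomalous time-series windows. Architecture and threshold are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 30, 3
rng = np.random.default_rng(42)
# Synthetic "normal" windows standing in for recorded flight-parameter segments.
X_train = rng.normal(0, 1, size=(500, timesteps, n_features)).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(32),                           # encode each window to a latent vector
    layers.RepeatVector(timesteps),            # repeat the latent vector per timestep
    layers.LSTM(32, return_sequences=True),    # decode back to a sequence
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, X_train, epochs=5, batch_size=32, verbose=0)

# Windows whose reconstruction error exceeds a percentile threshold are flagged.
recon = model.predict(X_train, verbose=0)
errors = np.mean((X_train - recon) ** 2, axis=(1, 2))
threshold = np.percentile(errors, 99)
print(f"{np.sum(errors > threshold)} windows above the 99th-percentile error")
```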

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Position: Data Scientist Intern
Company: Evoastra Ventures Pvt. Ltd.
Location: Remote
Duration: 1 month
Stipend: Unpaid
Type: Internship (Remote)
Open To: Students, freshers, and early professionals

About Evoastra Ventures
Evoastra Ventures is a research-first data and AI solutions company focused on delivering value through predictive analytics, market intelligence, and technology consulting. Our goal is to empower businesses by transforming raw data into strategic decisions. As an intern, you will work with real-world datasets and gain industry exposure that accelerates your entry into the data science domain.

Role Overview
We are seeking highly motivated and analytical individuals for our Data Scientist Internship. This role is designed to give you hands-on exposure to real datasets, problem-solving tasks, and model development under the guidance of industry professionals.

Responsibilities
Perform data cleaning, preprocessing, and transformation
Conduct exploratory data analysis (EDA) and identify trends
Assist in the development and evaluation of machine learning models
Contribute to reports and visual dashboards summarizing key insights
Document workflows and collaborate with team members on project deliverables
Participate in regular project check-ins and mentorship discussions

Tools & Technologies
Python, NumPy, Pandas, Scikit-learn
Matplotlib, Seaborn
Jupyter Notebook
GitHub (for collaboration and version control)
Power BI or Google Data Studio (optional but preferred)

Eligibility Criteria
Basic knowledge of Python, statistics, and machine learning concepts
Good analytical and problem-solving skills
Willingness to learn and adapt in a remote, team-based environment
Strong communication and time-management skills
Laptop with stable internet connection

What You Will Gain
Verified Internship Certificate
Letter of Recommendation (based on performance)
Real-time mentorship from professionals in data science and analytics
Project-based learning and portfolio-ready outputs
Priority consideration for future paid internships or full-time roles at Evoastra
Recognition in our internship alumni community

Application Process
Submit your resume via our internship application form at www.evoastra.in
Selected candidates will receive an onboarding email with further steps
This internship is fully remote and unpaid

Build your career foundation with hands-on data science projects that make an impact.
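The cleaning and exploratory-analysis responsibilities above follow a common pandas workflow. The sketch below is illustrative only: the CSV filename and the region/revenue columns are invented for the example and would be replaced by whatever dataset an intern is assigned.

```python
# Minimal sketch: basic cleaning and exploratory data analysis with pandas.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("sales_sample.csv")  # hypothetical dataset

# Cleaning: drop exact duplicates, fill missing numeric values with the median.
df = df.drop_duplicates()
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Quick EDA: summary statistics, remaining missing values, and a simple group comparison.
print(df.describe())
print(df.isna().sum())
print(df.groupby("region")["revenue"].mean().sort_values(ascending=False))
```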

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview: We are seeking an Embedded AI Software Engineer with deep expertise in writing software for resource-constrained edge hardware. This role is critical to building optimized pipelines that leverage media encoders/decoders, hardware accelerators, and AI inference runtimes on platforms like NVIDIA Jetson, Hailo, and other edge AI SoCs. You will be responsible for developing highly efficient, low-latency modules that run on embedded devices, involving deep integration with NVIDIA SDKs (Jetson Multimedia, DeepStream, TensorRT) and broader GStreamer pipelines.

Key Responsibilities:

Media Pipeline & AI Model Integration
Implement hardware-accelerated video processing pipelines using GStreamer, V4L2, and custom media backends.
Integrate AI inference engines using NVIDIA TensorRT, DeepStream SDK, or similar frameworks (ONNX Runtime, OpenVINO, etc.).
Profile and optimize model loading, preprocessing, postprocessing, and buffer management for edge runtime.

System-Level Optimization
Design software within strict memory, compute, and power budgets specific to edge hardware.
Utilize multimedia capabilities (ISP, NVENC/NVDEC) and leverage DMA, zero-copy mechanisms where applicable.
Implement fallback logic and error handling for edge cases in live deployment conditions.

Platform & Driver-Level Work
Work closely with kernel modules, device drivers, and board support packages to tune performance.
Collaborate with hardware and firmware teams to validate system integration.
Contribute to device provisioning, model updates, and boot-up behavior for AI edge endpoints.

Required Skills & Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Engineering, Electronics, Embedded Systems, or related fields.
Professional Experience: 2–4 years of hands-on development for edge/embedded systems using C++ (mandatory). Demonstrated experience with NVIDIA Jetson or equivalent edge AI hardware platforms.
Technical Proficiency:
Proficient in C++11/14/17 and multi-threaded programming.
Strong understanding of video codecs, media IO pipelines, and encoder/decoder frameworks.
Experience with GStreamer, V4L2, and multimedia buffer handling.
Familiarity with TensorRT, DeepStream, CUDA, and NVIDIA's multimedia APIs.
Exposure to other runtimes like HailoRT, OpenVINO, or Coral Edge TPU SDK is a plus.

Bonus Points:
Familiarity with build systems (CMake, Bazel), cross-compilation, and Yocto.
Understanding of AI model quantization, batching, and layer fusion for performance.
Prior experience working with camera bring-up, video streaming, and inference on live feeds.

Contact Information: To apply, please send your resume and portfolio details to hire@condor-ai.com with "Application: Embedded AI Software Engineer" in the subject line.

About Condor AI: Condor is an AI engineering company where we use artificial intelligence models to deploy solutions in the real world. Our core strength lies in Edge AI, combining custom hardware with optimized software for fast, reliable, on-device intelligence. We work across smart cities, industrial automation, logistics, and security, with a team that brings over a decade of experience in AI, embedded systems, and enterprise-grade solutions. We operate lean, think globally, and build for production from system design to scaled deployment.
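Although the role above is C++-centric, the pipeline-description idea behind GStreamer can be shown compactly through its Python bindings (Python is used for the other sketches in this document). The example below is a minimal, portable illustration: it uses a software test source and fakesink, and the comment notes where Jetson-specific hardware elements such as nvv4l2decoder or nvvidconv would typically be substituted; treat those element names as assumptions about a Jetson setup rather than verified configuration.

```python
# Minimal sketch: describe and run a GStreamer pipeline from Python bindings.
# On a Jetson, software elements would typically be replaced by hardware ones
# (e.g., nvv4l2decoder / nvvidconv); a test source keeps this sketch portable.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=120 ! videoconvert ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then release pipeline resources.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
print("Pipeline finished:", msg.type)
pipeline.set_state(Gst.State.NULL)
```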

Posted 2 weeks ago

Apply

1.0 years

1 - 3 Lacs

Chandigarh

On-site

Internship Overview: We are seeking enthusiastic and driven Machine Learning Interns to join our dynamic team for a 3-month intensive internship program. This internship offers a unique opportunity to gain hands-on experience in real-world ML applications and contribute to innovative projects. During the initial training phase, no stipend will be provided. A monthly stipend of ₹30,000 will only be provided if you are shortlisted and assigned to a live project after the successful completion of the training period, based on your demonstrated progress and performance.

Post-Internship Commitment: Upon being shortlisted for a live project and receiving the stipend, a minimum service period of 1 year with the company is required. Should you choose to exit the company before completing this 1-year service period, a penalty of ₹360,000 will be applicable.

Key Responsibilities:
Assist in collecting, cleaning, and preprocessing data for machine learning models.
Support the development, training, and evaluation of various ML models.
Conduct research on different machine learning algorithms and techniques.
Collaborate with senior engineers and data scientists on ongoing projects.
Help in deploying and monitoring machine learning models.
Document code, experiments, and findings clearly and concisely.
Actively participate in team meetings and learning sessions.

Qualifications:
Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Statistics, or a related technical field.
Strong foundational understanding of machine learning concepts and algorithms.
Proficiency in at least one programming language commonly used in ML (e.g., Python, R).
Familiarity with relevant ML libraries/frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
Basic understanding of data structures and algorithms.
Ability to learn quickly and adapt to new technologies.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.

What We Offer:
A challenging and rewarding 3-month intensive training experience.
Mentorship from experienced Machine Learning Engineers and Data Scientists.
Exposure to real-world datasets and cutting-edge ML technologies.
Opportunity to work on diverse projects that impact our business.
Potential to be assigned to live projects with a monthly stipend of ₹30,000 upon successful training completion.
A collaborative and supportive learning environment.

Job Type: Internship
Contract length: 15 months
Pay: ₹10,000.00 - ₹30,635.66 per month
Benefits: Flexible schedule
Work Location: In person
Expected Start Date: 01/08/2025

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies