
1829 MLflow Jobs - Page 11

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: AI/ML Data Scientist
Location: Hyderabad

About The Job
Transform healthcare through innovation. At Sanofi, we're not just developing treatments; we're pioneering the future of healthcare by harnessing the power of data insights and responsible AI to accelerate breakthrough therapies. As an AI/ML Scientist on our AI and Computational Sciences team, you'll:
Drive innovation that directly impacts patient outcomes
Collaborate with world-class scientists to solve complex healthcare challenges
Apply advanced AI techniques to increase drug development success rates
Shape the responsible use of AI in life-saving medical research
Be part of a mission that matters. Help us transform data into life-changing treatments and join a team where your expertise can make a meaningful difference in patients' lives.

Our Team
The AI and Computational Sciences team is a key team within R&D Digital, focused on image, omics, wearable sensor, and clinical data analytics. This team plays a critical role in bridging the gap between general-purpose digital products and specific project needs. We are looking for a skilled AI/ML Data Scientist to join our elite AI and Computational Sciences team and harness cutting-edge AI to revolutionize healthcare. As a key player within R&D Digital, you'll transform complex data into life-changing medical breakthroughs.

Impact You'll Make
Drive innovation across multiple high-impact domains:
Precision Medicine: Develop patient response prediction models that personalize treatments
Advanced Omics Analysis: Pioneer cell type and cell stage quantification techniques
Advanced Image/Video Analysis: Lead the application of state-of-the-art computer vision methods for gaining unprecedented insights into drug efficacy from medical images and videos
Digital Health: Design novel biomarkers from wearable sensor data
Biological Insights: Create enzyme property prediction algorithms and conduct disease pathway analyses

Your Growth Journey
Technical Mastery: Develop expertise across image analysis, time series modeling, GenAI, AI agents, and explainable AI
Scientific Impact: Publish in top-tier AI/ML journals and secure patents that protect groundbreaking innovations
Global Influence: Deploy solutions that impact patients worldwide

Your Environment
Elite Team: Work alongside AI/ML experts and drug development experts in an agile, high-performance environment
Cutting-Edge Resources: Access Sanofi's state-of-the-art cloud infrastructure and data platforms
Continuous Learning: Receive mentorship and training opportunities to sharpen your leadership and AI/ML skills

Join Our AI-First Vision
Be part of Sanofi's bold transformation into an AI-first organization where you'll:
Develop your skills through world-class mentorship and training
Chase the miracles of science to improve people's lives
Ready to transform healthcare through the power of AI?
Main Responsibilities

Research Phase Excellence
Design and implement AI models for target identification and validation using multi-omics data (genomics, proteomics, transcriptomics)
Develop predictive algorithms for molecular design to support compound selection and accelerate lead optimization
Create computer vision systems for high-throughput screening image analysis and cellular phenotyping

Clinical Development Innovation
Engineer digital biomarkers from wearable sensors and mobile devices to enable objective, continuous patient monitoring
Implement advanced time-series analysis of real-time patient data to detect early efficacy signals
Design AI-powered patient stratification models to identify responder populations and optimize trial design

Multi-Modal Data Integration
Architect systems that harmonize diverse data types (imaging, omics, clinical, text, sensor) into unified analytical frameworks
Develop novel feature extraction techniques across modalities to enhance predictive power
Create visualization tools that present complex multi-modal insights to clinical teams

Scientific Impact
Collaborate with cross-functional teams to translate AI insights into actionable drug development strategies
Present findings to scientific and business stakeholders with varying technical backgrounds
Publish innovative methodologies in top-tier scientific and AI/ML journals
Contribute to patent applications to protect novel AI/ML approaches

About You
Experience: 3 to 5 years of experience in AI/ML and computational model development on multimodal data such as omics, biomedical imaging, text, and clinical trials data

Key Functional Requirements
Demonstrated track record of successful AI/ML project implementation
3-5 years of experience in computational modeling, AI/ML algorithm development, or a related field
Deep understanding and proven track record of developing model training pipelines and workflows
Excellent communication and collaboration skills
Working knowledge of and comfort working with Agile methodologies

Technical Skills
Programming Proficiency: Advanced Python skills with experience in ML frameworks (PyTorch, TensorFlow, JAX)
Machine Learning: Deep expertise in supervised, unsupervised, and reinforcement learning algorithms
Drug Discovery: Molecular design, docking, binding site prediction, mRNA vaccine design, ADMET property prediction, protein structure prediction, molecular dynamics simulation
Deep Learning: Experience designing and implementing neural network architectures (CNNs, RNNs, Transformers)
Computer Vision: Proficiency in image processing, segmentation, and object detection techniques (SAM, ViT, Diffusion Models, MediaPipe, MMPose, MonoDepth, VoxelNet, SlowFast, C3D)
Natural Language Processing: Experience with large language models, text mining, and information extraction (OpenAI, Claude, Llama, Qwen, DeepSeek model series)
Time Series Analysis: Expertise in analyzing temporal data from sensors and wearable devices (HAR foundation models, compliance detection models)
Omics Analysis: Knowledge of computational methods for genomics, proteomics, or transcriptomics data
Cloud Computing: Experience deploying ML models on cloud platforms (AWS)

Tools And Technologies
Data Processing: Experience with data pipelines and ETL processes
Version Control: Proficiency with Git, collaborative development workflows, and Docker
MLOps: Experience with model deployment, monitoring, and maintenance
Visualization: Ability to create compelling data visualizations (Matplotlib, Seaborn, Plotly)
Experiment Tracking: Familiarity with tools like MLflow, Weights & Biases, or similar platforms

Soft Skills
Strong scientific communication abilities for technical and non-technical audiences
Collaborative mindset for cross-functional team environments
Problem-solving approach with the ability to translate business needs into technical solutions
Self-motivated with the capacity to work independently and drive projects forward

Education:
PhD/MS/BE/BTech/ME/MTech in Computer Science and Engineering, AI/ML, another relevant engineering discipline, Computational Biology, Data Science, Bioinformatics, or related fields (with equivalent experience)
Preferred: Publications or a public GitHub profile
Languages: English

Why Choose Us?
Bring the miracles of science to life alongside a supportive, future-focused team
Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally
Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact
Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs
Work in an international environment, collaborating with diverse business teams and vendors, in a dynamic team that is fully empowered to propose and implement innovative ideas

Pursue Progress. Discover Extraordinary.
Progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Join Sanofi and step into a new era of science, where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what's never been done before. You'll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people's lives? Let's Pursue Progress and Discover Extraordinary, together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
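The digital-biomarker work described above (objective, continuous monitoring from wearable sensors) typically begins with simple windowed features over the raw signal. Here is a minimal pure-Python sketch of that idea; the window size and heart-rate threshold are illustrative assumptions, not clinical values or Sanofi's actual pipeline:

```python
# Rolling-mean feature extraction from a wearable heart-rate stream,
# plus a toy "elevated heart rate" flag -- the kind of windowed
# feature a digital-biomarker pipeline computes at scale.
# Window size and threshold here are illustrative, not clinical values.

def rolling_mean(samples, window):
    """Mean over a sliding window; one value per full window."""
    if window <= 0 or window > len(samples):
        return []
    out = []
    total = sum(samples[:window])
    out.append(total / window)
    for i in range(window, len(samples)):
        total += samples[i] - samples[i - window]  # slide the window
        out.append(total / window)
    return out

def flag_elevated(samples, window=4, threshold=100.0):
    """Return indices of windows whose mean exceeds the threshold."""
    means = rolling_mean(samples, window)
    return [i for i, m in enumerate(means) if m > threshold]

hr = [72, 75, 74, 73, 98, 110, 115, 112, 80, 78]
print(rolling_mean(hr, 4)[:2])  # first two windowed means
print(flag_elevated(hr))        # windows covering the elevated stretch
```

A production pipeline would stream these features into a time-series model rather than a fixed threshold, but the windowing step is the same.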

Posted 1 week ago

Apply

2.0 - 3.0 years

3 - 15 Lacs

Mohali, Punjab

On-site

Job Title: AI/ML Engineer

Job Summary
We are seeking a talented and passionate AI/ML Engineer with at least 3 years of experience to join our growing data science and machine learning team. The ideal candidate will have hands-on experience in building and deploying machine learning models, data preprocessing, and working with real-world datasets. You will collaborate with cross-functional teams to develop intelligent systems that drive business value.

Key Responsibilities
● Design, develop, and deploy machine learning models for various business use cases.
● Analyze large and complex datasets to extract meaningful insights.
● Implement data preprocessing, feature engineering, and model evaluation pipelines.
● Work with product and engineering teams to integrate ML models into production environments.
● Conduct research to stay up to date with the latest ML and AI trends and technologies.
● Monitor and improve model performance over time.

Required Qualifications
● Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
● Minimum 3 years of hands-on experience in building and deploying machine learning models.
● Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost.
● Experience with training, fine-tuning, and evaluating ML models in real-world applications.
● Proficiency in Large Language Models (LLMs), including experience using or fine-tuning models like BERT, GPT, LLaMA, or open-source transformers.
● Experience with model deployment, serving ML models via REST APIs or microservices using frameworks like FastAPI, Flask, or TorchServe.
● Familiarity with model lifecycle management tools such as MLflow, Weights & Biases, or Kubeflow.
● Understanding of cloud-based ML infrastructure (AWS SageMaker, Google Vertex AI, Azure ML, etc.).
● Ability to work with large-scale datasets, perform feature engineering, and optimize model performance.
● Strong communication skills and the ability to work collaboratively in cross-functional teams.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,500,000.00 per year
Benefits: Flexible schedule, paid sick time, paid time off
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred)
Experience: AI/ML: 2 years (Preferred)
Work Location: In person
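The "model evaluation pipelines" responsibility above usually reduces to a held-out split plus a metric. A minimal pure-Python sketch, with a majority-class baseline standing in for a real model; the function names are illustrative, not from scikit-learn or any specific library:

```python
# Minimal evaluation pipeline: split the data, fit a trivial
# majority-class baseline, and score accuracy on the held-out part.
# A real pipeline would swap the baseline for an actual model.
from collections import Counter

def train_test_split(X, y, test_ratio=0.25):
    """Deterministic split: the last test_ratio fraction is held out."""
    cut = int(len(X) * (1 - test_ratio))
    return X[:cut], X[cut:], y[:cut], y[cut:]

def fit_majority(y_train):
    """Return the most common training label as the 'model'."""
    return Counter(y_train).most_common(1)[0][0]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

X = [[i] for i in range(8)]
y = [0, 0, 1, 0, 0, 1, 0, 0]
X_tr, X_te, y_tr, y_te = train_test_split(X, y)
model = fit_majority(y_tr)        # majority label in the training data
preds = [model] * len(X_te)
print(accuracy(y_te, preds))      # baseline accuracy on held-out data
```

A baseline like this is also useful in production pipelines as a sanity floor: any candidate model should beat it before deployment.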

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: MLOps Manager

As the MLOps Lead, you will be responsible for leading the deployment and operationalization of AI/ML solutions. You will collaborate with data scientists, engineers, and IT teams to ensure that machine learning models are efficiently deployed, monitored, and maintained. Your role will be crucial in bridging the gap between data science and production, ensuring that our AI initiatives are scalable, reliable, and deliver business value.

Responsibilities:
• Lead the design, implementation, and management of MLOps pipelines to deploy machine learning models into production.
• Work closely with data scientists to understand model requirements and ensure smooth integration and deployment.
• Develop and maintain infrastructure for model training, validation, deployment, and monitoring.
• Implement best practices for CI/CD pipelines in the context of AI/ML model development and deployment.
• Ensure scalability, reliability, and performance of deployed models through monitoring and continuous optimization.
• Leverage containerization and orchestration tools (e.g., Docker, Kubernetes) to manage model deployment environments.
• Collaborate with IT and DevOps teams to ensure seamless integration of AI/ML solutions with existing systems.
• Implement and enforce security best practices for AI/ML models and data pipelines.
• Stay updated with the latest advancements in MLOps tools, frameworks, and methodologies.
• Provide technical leadership and mentorship to junior team members.

Requirements:
• Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
• Proven experience (8+ years) in MLOps, DevOps, or a related field, with a focus on deploying machine learning models.
• Strong understanding of machine learning, deep learning, NLP, and generative AI techniques.
• Proficiency with MLOps tools and frameworks such as MLflow, Kubeflow, TensorFlow Extended (TFX), or similar.
• Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
• Proficiency in programming languages such as Python and familiarity with ML/DL frameworks like TensorFlow, PyTorch, and scikit-learn.
• Experience with cloud platforms (AWS, GCP, Azure) and their AI/ML services.
• Knowledge of containerization and orchestration tools (Docker, Kubernetes).
• Strong understanding of version control systems (e.g., Git) and collaborative development workflows.
• Excellent problem-solving skills and the ability to design robust, scalable MLOps solutions.
• Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
• Experience in leading and mentoring technical teams.

Preferred Qualifications:
• Experience with big data technologies and tools such as Hadoop, Spark, and Kafka.
• Familiarity with data visualization tools such as Tableau, Power BI, or similar.
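One concrete MLOps pattern behind the CI/CD responsibilities above is a promotion gate: a candidate model replaces the production model only if its offline metric clears the incumbent by some margin. A pure-Python sketch; the metric name and margin are illustrative assumptions, and the metric dicts stand in for values an experiment tracker such as MLflow would supply:

```python
# CI/CD promotion gate: promote a candidate model only if its
# evaluation metric beats the current production model by a margin.
# Metric values would normally come from an experiment tracker;
# here they are plain dicts for illustration.

def should_promote(candidate, production, metric="auc", margin=0.01):
    """True if the candidate beats production by at least `margin`."""
    cand = candidate.get(metric)
    prod = production.get(metric)
    if cand is None:
        return False   # no metric logged: never promote blindly
    if prod is None:
        return True    # nothing in production yet
    return cand >= prod + margin

prod_run = {"auc": 0.91, "version": 7}
cand_run = {"auc": 0.93, "version": 8}
print(should_promote(cand_run, prod_run))  # candidate clears the gate
```

In a real pipeline this check would run as a CI step after evaluation, with the margin tuned so that noise-level improvements do not trigger redeployments.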

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Senior Data Engineer + AI, you will play a crucial role in designing and optimizing distributed data pipelines using PySpark, Apache Spark, and Databricks to serve both analytics and AI workloads. Your expertise in PySpark, Apache Spark, and Databricks for batch and streaming data pipelines will be instrumental in contributing to high-impact programs with clients. Your strong SQL skills for data analysis, transformation, and modeling will enable you to drive data-driven decision-making and facilitate rapid insight generation.

Your responsibilities will include supporting RAG pipelines, embedding generation, and data pre-processing for LLM applications, as well as creating and maintaining interactive dashboards and BI reports using tools such as Power BI, Tableau, or Looker. You will collaborate with cross-functional teams, including AI scientists, analysts, and business teams, to ensure the successful delivery of use cases.

In this role, you will need a solid understanding of data warehouse design, relational databases such as PostgreSQL, Snowflake, and SQL Server, and data lakehouse architectures. Your familiarity with cloud services for data and AI, such as Azure, AWS, or GCP, will be essential for data pipeline monitoring, cost optimization, and scalability in cloud environments.

Furthermore, exposure to Generative AI, RAG, embedding models, and vector databases like FAISS, Pinecone, and ChromaDB, as well as experience with agentic AI frameworks such as LangChain, Haystack, and CrewAI, will be beneficial. Knowledge of MLflow, Delta Live Tables, or other Databricks-native AI tools, along with CI/CD, Git, Docker, and DevOps pipelines, will also be advantageous. A background in consulting, enterprise analytics, or AI/ML product development will further enhance your ability to excel in this position.

Your excellent problem-solving and collaboration skills, coupled with your ability to bridge engineering and business needs, will be key to your success as a Senior Data Engineer + AI.
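The RAG pre-processing mentioned above typically starts by splitting documents into overlapping chunks before computing embeddings for a vector index. A minimal pure-Python sketch of the chunking step; the chunk size and overlap are illustrative defaults, not values from LangChain or any specific framework:

```python
# Split a document into overlapping word-window chunks -- the usual
# pre-processing step before computing embeddings for a RAG index.
# chunk_size / overlap are measured in words and purely illustrative.

def chunk_text(text, chunk_size=50, overlap=10):
    """Return overlapping chunks; each chunk shares `overlap` words
    with the previous one so retrieval does not cut ideas mid-thought."""
    words = text.split()
    if not words:
        return []
    step = max(chunk_size - overlap, 1)
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last chunk reached the end of the document
    return chunks

doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print(len(chunks))           # number of chunks produced
print(chunks[1].split()[0])  # second chunk starts 40 words in
```

Each chunk would then be embedded and upserted into a vector store (FAISS, Pinecone, ChromaDB); the overlap trades index size for retrieval recall at chunk boundaries.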

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Engineering
Experience: Principal Associate
Primary Address: Bangalore, Karnataka

Overview
Voyager (94001), India, Bangalore, Karnataka
Principal Associate - Fullstack Engineering

Job Description: Generative AI Observability & Governance for ML Platform
At Capital One India, we work in a fast-paced and intellectually rigorous environment to solve fundamental business problems at scale. Using advanced analytics, data science and machine learning, we derive valuable insights about product and process design, consumer behavior, regulatory and credit risk, and more from large volumes of data, and use it to build cutting-edge patentable products that drive the business forward.

We're looking for a Principal Associate, Full Stack to join the Machine Learning Experience (MLX) team! As a Capital One Principal Associate, Full Stack, you'll be part of a team focusing on observability and model governance automation for cutting-edge generative AI use cases. You will work on building solutions to collect metadata, metrics and insights from the large-scale GenAI platform, and build intelligent solutions to derive deep insights into the performance of the platform's use cases and their compliance with industry standards. You will contribute to building a system to do this for Capital One models, accelerating the move from fully trained models to deployable model artifacts ready to fuel business decisioning, and build an observability platform to monitor the models and platform components.

The MLX team is at the forefront of how Capital One builds and deploys well-managed ML models and features. We onboard and educate associates on the ML platforms and products that the whole company uses. We drive new innovation and research, and we're working to seamlessly infuse ML into the fabric of the company. The ML experience we're creating today is the foundation that enables each of our businesses to deliver next-generation ML-driven products and services for our customers.

What You'll Do:
Lead the design and implementation of observability tools and dashboards that provide actionable insights into platform performance and health.
Leverage Generative AI models and fine-tune them to enhance observability capabilities, such as anomaly detection, predictive analytics, and a troubleshooting copilot.
Build and deploy well-managed core APIs and SDKs for observability of LLMs and proprietary Gen-AI foundation models, including training, pre-training, fine-tuning and prompting.
Work with model and platform teams to build systems that ingest large amounts of model and feature metadata and runtime metrics, both to build an observability platform and to make governance decisions that ensure ethical use, data integrity, and compliance with industry standards for Gen-AI.
Partner with product and design teams to develop and integrate advanced observability tools tailored to Gen-AI.
Collaborate as part of a cross-functional Agile team with data scientists, ML engineers, and other stakeholders to understand requirements and translate them into scalable and maintainable solutions.
Bring a research mindset: lead proofs of concept that showcase the capabilities of large language models in observability and governance, enabling practical production solutions that improve platform users' productivity.

Basic Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
At least 4 years of experience designing and building data-intensive solutions using distributed computing, with a deep understanding of microservices architecture.
At least 4 years of experience programming with Python, Go, or Java.
Proficiency in observability tools such as Prometheus, Grafana, ELK Stack, or similar, with a focus on adapting them for Gen-AI systems.
Excellent knowledge of OpenTelemetry and prior experience building SDKs and APIs.
Hands-on experience with Generative AI models and their application in observability or related areas.
At least 2 years of experience with cloud platforms like AWS, Azure, or GCP.

Preferred Qualifications:
At least 4 years of experience building, scaling, and optimizing ML systems.
At least 3 years of experience in MLOps, using either open-source tools like MLflow or commercial tools.
At least 2 years of experience developing applications using Generative AI (open-source or commercial LLMs), and some experience with recent open-source libraries such as LangChain and Haystack, and vector databases like OpenSearch, Chroma and FAISS.
Prior experience leveraging open-source observability libraries such as Langfuse, Phoenix, OpenInference, Helicone, etc.
Contributions to open-source libraries, specifically Gen-AI and ML solutions.
Authored or co-authored a paper on an ML technique, model, or proof of concept.
Experience with an industry-recognized ML framework such as scikit-learn, PyTorch, Dask, Spark, or TensorFlow.
Prior experience with NVIDIA GPU telemetry and experience with CUDA.
Knowledge of data governance and compliance, particularly in the context of machine learning and AI systems.

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com.

Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

How We Hire
We take finding great coworkers pretty seriously.
Step 1, Apply: It only takes a few minutes to complete our application and assessment.
Step 2, Screen and Schedule: If your application is a good match, you'll hear from one of our recruiters to set up a screening interview.
Step 3, Interview(s): Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team, and determine if Capital One is the place for you.
Step 4, Decision: The team will discuss; if it's a good fit for us and you, we'll make it official!

How to Pick the Perfect Career Opportunity
Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.

Your wellbeing is our priority
Our benefits and total compensation package is designed for the whole person, caring for both you and your family.
Healthy Body, Healthy Mind: You have options, and we have the tools to help you decide which health plans best fit your needs.
Save Money, Make Money: Secure your present, plan for your future and reduce expenses along the way.
Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It's time to BeWell.

Career Journey
Here's how the team fits together. We're big on growth and knowing who and how coworkers can best support you.
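The anomaly detection this observability role describes is often bootstrapped with a simple statistical rule before any ML model is involved: flag a metric point that deviates too far from its recent baseline. A pure-Python z-score sketch; the sigma threshold and baseline size are illustrative, and this is a stand-in for, not a description of, any tool named in the posting:

```python
# Toy anomaly detector for platform metrics: flag points more than
# `k` standard deviations from the mean of the trailing baseline.
# k and the baseline window are arbitrary illustrative choices.
import math

def zscore_anomalies(values, baseline=5, k=3.0):
    """Return indices of values that deviate more than k sigma from
    the mean of the `baseline` points immediately preceding them."""
    flagged = []
    for i in range(baseline, len(values)):
        window = values[i - baseline:i]
        mean = sum(window) / baseline
        var = sum((v - mean) ** 2 for v in window) / baseline
        std = math.sqrt(var)
        if std > 0 and abs(values[i] - mean) / std > k:
            flagged.append(i)
    return flagged

latency_ms = [102, 99, 101, 100, 98, 103, 460, 101]
print(zscore_anomalies(latency_ms))  # index of the latency spike
```

Production systems replace the trailing window with seasonal baselines or learned forecasts, but the "deviation from expected" framing is the same, and a rule like this makes a useful fallback alert alongside an ML detector.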

Posted 1 week ago

Apply


0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About Us
Observe.AI is transforming customer service with AI agents that speak, think, and act like your best human agents—helping enterprises automate routine customer calls and workflows, support agents in real time, and uncover powerful insights from every interaction. With Observe.AI, businesses boost automation, deliver faster, more consistent 24/7 service, and build stronger customer loyalty. Trusted by brands like Accolade, Prudential, Concentrix, Cox Automotive, and Included Health, Observe.AI is redefining how businesses connect with customers—driving better experiences and lasting relationships at every touchpoint.

The Opportunity
We are looking for a Senior Data Engineer with strong hands-on experience in building scalable data pipelines and real-time processing systems. You will be part of a high-impact team focused on modernizing our data architecture, enabling self-serve analytics, and delivering high-quality data products. This role is ideal for engineers who love solving complex data challenges, have a growth mindset, and are excited to work on both batch and streaming systems.

What you’ll be doing:
- Build and maintain real-time and batch data pipelines using tools like Kafka, Spark, and Airflow.
- Contribute to the development of a scalable LakeHouse architecture using modern data formats such as Delta Lake, Hudi, or Iceberg.
- Optimize data ingestion and transformation workflows across cloud platforms (AWS, GCP, or Azure).
- Collaborate with Analytics and Product teams to deliver data models, marts, and dashboards that drive business insights.
- Support data quality, lineage, and observability using modern practices and tools.
- Participate in Agile processes (Sprint Planning, Reviews) and contribute to team knowledge sharing and documentation.
- Contribute to building data products for inbound (ingestion) and outbound (consumption) use cases across the organization. 
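As a rough illustration of the kind of logic such streaming pipelines run at scale, here is a minimal, stdlib-only sketch of a tumbling-window event count (illustrative only; a real Kafka/Spark pipeline adds distribution, fault tolerance, and watermarking, and the event shape below is an assumption):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size=60):
    """Count events per (window_start, user) over fixed, non-overlapping windows.
    `events` is a list of (timestamp_seconds, user_id) tuples (hypothetical shape)."""
    counts = defaultdict(int)
    for ts, user in events:
        # Align each timestamp to the start of its 60-second window.
        window_start = (ts // window_size) * window_size
        counts[(window_start, user)] += 1
    return dict(counts)

events = [(5, "a"), (30, "a"), (65, "b"), (70, "a")]
print(tumbling_window_counts(events))
# Two windows: [0, 60) holds both "a" events; [60, 120) holds one "b" and one "a".
```

The same grouping is what a Spark Structured Streaming `groupBy(window(...), col("user"))` expresses declaratively.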
Who you are:
- 5-8 years of experience in data engineering or backend systems with a focus on large-scale data pipelines.
- Hands-on experience with streaming platforms (e.g., Kafka) and distributed processing tools (e.g., Spark or Flink).
- Working knowledge of LakeHouse formats (Delta/Hudi/Iceberg) and columnar storage like Parquet.
- Proficient in building pipelines on AWS, GCP, or Azure using managed services and cloud-native tools.
- Experience in Airflow or similar orchestration platforms.
- Strong in data modeling and optimizing data warehouses like Redshift, BigQuery, or Snowflake.
- Exposure to real-time OLAP tools like ClickHouse, Druid, or Pinot.
- Familiarity with observability tools such as Grafana, Prometheus, or Loki.
- Some experience integrating data with MLOps tools like MLflow, SageMaker, or Kubeflow.
- Ability to work with Agile practices using JIRA, Confluence, and participating in engineering ceremonies.

Compensation, Benefits and Perks
- Excellent medical insurance options and free online doctor consultations
- Yearly privilege and sick leaves as per Karnataka S&E Act
- Generous holidays (National and Festive), recognition, and parental leave policies
- Learning & Development fund to support your continuous learning journey and professional development
- Fun events to build culture across the organization
- Flexible benefit plans for tax exemptions (i.e. Meal card, PF, etc.)

Our Commitment to Inclusion and Belonging
Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe.AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. 
We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds, historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply.

Posted 1 week ago

Apply

14.0 - 18.0 years

0 Lacs

karnataka

On-site

The AVP Databricks Squad Delivery Lead position is open for candidates with 14+ years of experience in Bangalore/Hyderabad/NCR/Kolkata/Mumbai/Pune. As the Databricks Squad Delivery Lead, you will be responsible for overseeing project delivery, team leadership, architecture reviews, and client engagement. Your role will involve optimizing Databricks implementations across cloud platforms like AWS, Azure, and GCP, while leading cross-functional teams. You will lead and manage end-to-end delivery of Databricks-based solutions, serving as a subject matter expert in Databricks architecture, implementation, and optimization. Collaboration with architects and engineers to design scalable data pipelines and analytics platforms will be a key aspect of your responsibilities. Additionally, you will oversee Databricks workspace setup, performance tuning, and cost optimization, while acting as the primary point of contact for client stakeholders. Driving innovation through the implementation of best practices, tools, and technologies, and ensuring alignment between business goals and technical solutions will also be part of your duties. The ideal candidate for this role must possess a Bachelor's degree in Computer Science, Engineering, or equivalent (Master's or MBA preferred), along with hands-on experience in delivering data engineering/analytics projects using Databricks. Experience in managing cloud-based data pipelines on AWS, Azure, or GCP, strong leadership skills, and effective client-facing communication are essential requirements. Preferred skills include proficiency with Spark, Delta Lake, MLflow, and distributed computing; expertise in data engineering concepts such as ETL, data lakes, and data warehousing; and certifications in Databricks or cloud platforms (AWS/Azure/GCP) as a plus. An Agile/Scrum or PMP certification will be considered an added advantage for this role.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As a high-impact AI/ML Engineer, you will lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You will be an integral part of a fast-paced, outcome-oriented AI & Analytics team, collaborating with data scientists, engineers, and product leaders to translate business use cases into real-time, scalable AI systems. Your responsibilities in this role will include architecting, developing, and deploying ML models for multimodal problems encompassing vision, audio, and NLP tasks. You will be responsible for the complete ML lifecycle, from data ingestion to model development, experimentation, evaluation, deployment, and monitoring. Leveraging transfer learning and self-supervised approaches where appropriate, you will design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborating with MLOps, data engineering, and DevOps teams, you will operationalize models using technologies such as Docker, Kubernetes, or serverless infrastructure. Continuously monitoring model performance and implementing retraining workflows to ensure sustained accuracy over time will be a key aspect of your role. You will stay informed about cutting-edge AI research and incorporate innovations such as generative AI, video understanding, and audio embeddings into production systems. Writing clean, well-documented, and reusable code to support agile experimentation and long-term platform development is an essential part of this position. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, with a minimum of 5-8 years of experience in AI/ML Engineering, including at least 3 years in applied deep learning. In terms of technical skills, you should be proficient in Python, with knowledge of R or Java being a plus. 
Additionally, you should have expertise in ML/DL frameworks like PyTorch, TensorFlow, and Scikit-learn, as well as experience in Computer Vision tasks such as image classification, object detection, OCR, segmentation, and tracking. Familiarity with Audio AI tasks like speech recognition, sound classification, and audio embedding models is also desirable. Strong capabilities in Data Engineering using tools like Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data are required. Knowledge of NLP/LLMs, Cloud & MLOps services, deployment & infrastructure technologies, and CI/CD & version control tools is also beneficial. Soft skills and competencies that will be valuable in this role include strong analytical and systems thinking, effective communication skills to convey models and results to non-technical stakeholders, the ability to work cross-functionally with various teams, and a demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
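Since the role leans on preprocessing pipelines for structured data, here is a deliberately simplified, stdlib-only sketch of one such step, z-score standardization (in practice this would be NumPy/Pandas or scikit-learn's `StandardScaler`; the function name is invented for illustration):

```python
import math

def standardize(values):
    """Standardize a list of numbers to zero mean and unit variance (z-scores),
    a common preprocessing step before model training."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(variance)
    # Guard against a constant column, which has zero variance.
    return [(v - mean) / std for v in values] if std else [0.0] * len(values)

print(standardize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
# mean = 5.0, std = 2.0, so the first value maps to -1.5 and the last to 2.0
```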

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

kolkata, west bengal

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees spanning 30+ countries, we are deeply motivated by our curiosity, agility, and the desire to create enduring value for our clients. We are driven by our purpose: the relentless pursuit of a world that works better for people. We cater to and transform leading enterprises, including the Fortune Global 500, leveraging our profound business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Assistant Vice President, Databricks Squad Delivery Lead. As the Databricks Delivery Lead, you will be responsible for overseeing the complete delivery of Databricks-based solutions for our clients. Your role will involve ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. You will play a crucial role in promoting the adoption of Databricks as the preferred platform for data engineering and analytics, while effectively managing a diverse team of data engineers and developers.

Your key responsibilities will include:
- Leading and managing Databricks-based project delivery, ensuring that all solutions adhere to client requirements, best practices, and industry standards.
- Serving as the subject matter expert (SME) on Databricks, offering guidance to teams on architecture, implementation, and optimization.
- Collaborating with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads.
- Acting as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.
- Maintaining effective communication with stakeholders, providing regular updates on project status, risks, and achievements.
- Overseeing the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines.
- Ensuring that Databricks solutions are optimized for cost and performance, utilizing best practices for data storage, processing, and querying.
- Continuously evaluating the effectiveness of the Databricks platform and processes, and proposing improvements or new features to enhance delivery efficiency and effectiveness.
- Driving innovation within the team by introducing new tools, technologies, and best practices to improve delivery quality.

Qualifications we are looking for:

Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred).
- Relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering.

Preferred Qualifications / Skills:
- Demonstrated experience in leading end-to-end delivery of data engineering or analytics solutions on Databricks.
- Strong expertise in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools.
- Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies.
- Proficiency in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing.

Preferred Certifications:
- Databricks Certified Associate or Professional.
- Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent).
- Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master).

If you are passionate about driving innovation, leading a high-performing team, and shaping the future of data engineering and analytics, we welcome you to apply for this exciting opportunity of Assistant Vice President, Databricks Squad Delivery Lead at Genpact.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

jaipur, rajasthan

On-site

We are searching for a skilled and adaptable Data Engineer with proficiency in PySpark, Apache Spark, and Databricks, combined with knowledge of analytics, data modeling, and Generative AI/Agentic AI solutions. This position suits individuals who excel at the convergence of data engineering, AI systems, and business insights, contributing to impactful programs with clients. Your responsibilities will include designing, constructing, and enhancing distributed data pipelines utilizing PySpark, Apache Spark, and Databricks to cater to both analytics and AI workloads. You will also be tasked with supporting RAG pipelines, embedding generation, and data pre-processing for LLM applications. Additionally, creating and maintaining interactive dashboards and BI reports using tools like Power BI, Tableau, or Looker for business stakeholders and consultants will be part of your role. Furthermore, your duties will involve conducting ad hoc data analysis to facilitate data-driven decision-making and rapid insight generation. You will be expected to develop and sustain robust data warehouse schemas, star/snowflake models, and provide support for data lake architecture. Integration with and support for LLM agent frameworks like LangChain, LlamaIndex, Haystack, or CrewAI for intelligent workflow automation will also fall under your purview. In addition, ensuring data pipeline monitoring, cost optimization, and scalability in cloud environments (Azure/AWS/GCP) will be important aspects of your work. Collaboration with cross-functional teams, including AI scientists, analysts, and business teams, to drive use-case delivery is key. Lastly, maintaining robust data governance, lineage, and metadata management practices using tools such as Azure Purview or DataHub will also be part of your responsibilities.
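The RAG pre-processing mentioned above typically starts with splitting documents into overlapping chunks before embedding. A minimal sketch follows (the sizes, overlap, and function name are illustrative assumptions; production pipelines usually chunk by tokens rather than characters):

```python
def chunk_text(text, chunk_size=100, overlap=20):
    """Split text into fixed-size character chunks with overlap, a common
    pre-processing step before computing embeddings for RAG retrieval.
    Assumes chunk_size > overlap."""
    step = chunk_size - overlap
    # Stop once the remaining tail is fully covered by the previous chunk.
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

print(chunk_text("abcdefghij", chunk_size=4, overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij']
```

The overlap keeps a sentence that straddles a boundary retrievable from at least one chunk.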

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Gurugram, Haryana, India

Remote

Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. 
Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. 
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
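The MLOps responsibilities in the listing above include owning the deployment lifecycle. As a toy illustration of one step in that lifecycle, selecting the best experiment run before promoting a model (tools like the MLflow model registry formalize this; the run and metric fields below are invented for the sketch):

```python
def best_run(runs, metric="f1"):
    """Pick the run with the highest value for `metric`, the naive core of
    a 'promote the best model' step in a deployment pipeline."""
    return max(runs, key=lambda r: r["metrics"][metric])

runs = [
    {"run_id": "r1", "metrics": {"f1": 0.81}},
    {"run_id": "r2", "metrics": {"f1": 0.86}},
    {"run_id": "r3", "metrics": {"f1": 0.79}},
]
print(best_run(runs)["run_id"])  # r2
```

Real registries add staging/production stages, approvals, and rollback on monitored regressions, which is what "continuous delivery" of models refers to here.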

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Cuttack, Odisha, India

Remote

Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. 
Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. 
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Bhubaneswar, Odisha, India

Remote

Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. 
Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. 
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Kolkata, West Bengal, India

Remote

Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. 
Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. 
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
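The deployment-lifecycle responsibility above (model serving with monitoring and continuous delivery) can be illustrated with a small, dependency-free sketch. The `RollingMonitor` class, its window size, and its alert threshold are hypothetical illustrations, not part of this posting or any particular MLOps tool:

```python
from collections import deque


class RollingMonitor:
    """Track a rolling window of per-request quality scores for a served model.

    Illustrative sketch only: in production this role would typically feed
    such metrics into tools like Prometheus/Grafana rather than print them.
    """

    def __init__(self, window: int = 100, alert_threshold: float = 0.5):
        self.values: deque[float] = deque(maxlen=window)  # keeps only the last `window` scores
        self.alert_threshold = alert_threshold

    def record(self, value: float) -> None:
        self.values.append(value)

    def mean(self) -> float:
        return sum(self.values) / len(self.values) if self.values else 0.0

    def alert(self) -> bool:
        # Fire when the rolling mean quality drops below the threshold.
        return bool(self.values) and self.mean() < self.alert_threshold


monitor = RollingMonitor(window=3, alert_threshold=0.8)
for score in (0.9, 0.7, 0.6):
    monitor.record(score)
print(round(monitor.mean(), 4))  # → 0.7333
print(monitor.alert())           # → True (rolling mean fell below 0.8)
```

The same pattern extends naturally to latency or drift statistics; the design choice is simply to keep a bounded window so memory stays constant under sustained traffic.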

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Guwahati, Assam, India

Remote


Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Ahmedabad, Gujarat, India

Remote


Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Ranchi, Jharkhand, India

Remote


Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Jamshedpur, Jharkhand, India

Remote


Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Raipur, Chhattisgarh, India

Remote


Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Amritsar, Punjab, India

Remote


Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onwards, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Manager, Data Scientist.

We are seeking a seasoned and highly skilled Data Scientist with deep expertise in Computer Vision and a strong foundation in AI/ML modeling. The ideal candidate will not only lead the development of intelligent vision systems but will also serve as a technical mentor, guiding junior data scientists on model selection, optimization, and deployment strategies. Experience in domains such as energy, power generation, industrial equipment, or manufacturing will be considered a strong advantage, as the role involves solving real-world visual AI/ML problems in industrial environments.

Key Responsibilities:
Lead CV Projects: Design and deliver Computer Vision models across a range of use cases (e.g., anomaly detection, visual inspection, OCR, predictive maintenance).
Model Development: Develop, evaluate, and optimize state-of-the-art AI/ML models (e.g., CNNs, Vision Transformers, YOLO, Faster R-CNN).
Mentorship: Guide junior and mid-level data scientists on best practices in feature engineering, model selection, evaluation metrics, and problem-solving strategies.
Domain Translation: Translate complex industrial problems into AI-driven CV solutions that can scale in production environments.
Collaboration: Work closely with software engineers, MLOps, and business teams to ensure model integration and operational success.
Code Quality & Experimentation: Drive code modularity, reproducibility, and experimentation through the use of ML pipelines, version control, and testing.
Innovation & Research: Stay current with the latest CV and AI/ML advancements and apply them appropriately to business problems.
Stakeholder Communication: Present insights, models, and outcomes clearly and persuasively to both technical and non-technical stakeholders.

Required Qualifications:
Master's or PhD in Computer Science, Machine Learning, AI, Electrical Engineering, or a related field.
Sound experience building and deploying machine learning models, with a strong portfolio in Computer Vision.
Deep expertise in ML frameworks and CV libraries such as PyTorch, TensorFlow, OpenCV, Detectron2, and MMDetection.
Solid understanding of core AI/ML algorithms: classification, regression, segmentation, object detection, time series, clustering, etc.
Experience with MLOps tools (e.g., MLflow, DVC, Kubeflow) and cloud platforms (AWS/GCP/Azure).
Strong communication, leadership, and team collaboration skills.

Preferred Qualifications:
Prior experience in domains such as energy, utilities, power generation, or industrial systems is highly preferred.
Experience deploying CV models in real-time environments.
Contributions to open-source CV projects or published research.
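The object-detection work and evaluation-metrics mentorship described above commonly rest on intersection-over-union (IoU), the standard overlap score used to judge detectors such as YOLO or Faster R-CNN. A minimal, dependency-free sketch (the `iou` helper and its box format are illustrative, not taken from the posting):

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Returns a value in [0, 1]; 0 means no overlap, 1 means identical boxes.
    """
    # Coordinates of the intersection rectangle (may be empty).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


# Two 10x10 boxes overlapping in a 5x5 patch: IoU = 25 / 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 0.14285714285714285
```

In practice one would use a vectorized implementation (e.g., from a detection library) over batches of boxes, but the per-pair arithmetic is exactly this.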
Why join Genpact?
Lead AI-first transformation: Build and scale AI solutions that redefine industries.
Make an impact: Drive change for global enterprises and solve business challenges that matter.
Accelerate your career: Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
Grow with the best: Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
Committed to ethical AI: Work in an environment where governance, transparency, and security are at the core of everything we build.
Thrive in a values-driven culture: Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 week ago

Apply

3.0 years

30 - 40 Lacs

Jaipur, Rajasthan, India

Remote

Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLflow, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, or GCP, Docker, Kafka, Kubernetes, Machine Learning

The AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you'll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founders' deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as the thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines.
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points:
Experience with natural language processing, document understanding, or Large Language Models (LLMs).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
Knowledge of requirements engineering or PLM systems.

✨ What we offer:
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

