
169 Kubeflow Jobs - Page 5

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At PwC, we focus on leveraging data to drive insights and make informed business decisions in the field of data and analytics. Our team utilizes advanced analytics techniques to help clients optimize their operations and achieve their strategic goals. As a Data Analyst at PwC, you will play a crucial role in applying advanced analytical techniques to extract insights from large datasets and facilitate data-driven decision-making. Your responsibilities will include leveraging skills in data manipulation, visualization, and statistical modeling to help clients solve complex business problems.

We are currently seeking a highly skilled MLOps/LLMOps Engineer to join PwC US - Acceleration Center. In this role, you will be responsible for the deployment, scaling, and maintenance of Generative AI models. Working closely with data scientists, ML/GenAI engineers, and DevOps teams, you will ensure seamless integration and operation of GenAI models within production environments at PwC and our clients. The ideal candidate will have a strong background in MLOps practices, coupled with experience and interest in Generative AI technologies.

Candidates should have at least 4 years of hands-on experience. Core qualifications include 3+ years of experience developing and deploying AI models in production environments, plus a year of experience developing proofs of concept and prototypes. A strong background in software development, proficiency in programming languages like Python, knowledge of ML frameworks and libraries, familiarity with containerization and orchestration tools, and experience with cloud platforms and CI/CD tools are also essential.

Key responsibilities include developing and implementing MLOps strategies tailored for Generative AI models, designing and managing CI/CD pipelines specialized for ML workflows, monitoring and optimizing the performance of AI models in production, collaborating with data scientists and ML researchers, and ensuring compliance with data privacy regulations. You will also troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.

The successful candidate will be proficient in MLOps tools such as MLflow, Kubeflow, or Airflow (a minimal Kubeflow pipeline sketch follows this listing), and will have expertise in generative AI frameworks, containerization technologies, MLOps and LLMOps practices, and cloud-based AI services. Nice-to-have qualifications include experience with advanced GenAI applications, familiarity with experiment tracking tools, knowledge of high-performance computing techniques, and contributions to open-source MLOps or GenAI projects.

Beyond technical skills, the role requires project delivery capabilities such as designing scalable deployment pipelines for ML/GenAI models, overseeing cloud infrastructure setup, and creating detailed documentation for deployment pipelines. Client engagement is another essential aspect, involving collaboration with clients to understand their business needs, presenting technical approaches and results, conducting training sessions, and creating user guides. To stay ahead in the field, you will keep up with the latest trends in MLOps/LLMOps and Generative AI, apply that knowledge to improve existing systems and processes, develop internal tools and frameworks, mentor junior team members, and contribute to technical publications.

The ideal candidate will hold a graduate degree such as BE/B.Tech/MCA/M.Sc/M.E/M.Tech, a Master's degree, or an MBA. Join us at PwC and be part of a dynamic team driving innovation in data and analytics!
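The posting above centers on building CI/CD-style pipelines for ML workflows with tools such as Kubeflow. As a rough, hedged illustration (not part of the job description), the sketch below shows what a minimal Kubeflow Pipelines v2 definition can look like; the component logic, names, and parameters are placeholders.

```python
# Minimal Kubeflow Pipelines (KFP v2) sketch: two placeholder components chained
# into a pipeline and compiled to a YAML spec that a KFP cluster could run.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def train_model(learning_rate: float) -> str:
    # Placeholder training step; a real component would fit and persist a model.
    return f"trained with lr={learning_rate}"


@dsl.component(base_image="python:3.11")
def evaluate_model(model_info: str) -> float:
    # Placeholder evaluation step returning a dummy metric.
    print(f"evaluating: {model_info}")
    return 0.9


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_task = train_model(learning_rate=learning_rate)
    evaluate_model(model_info=train_task.output)


if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

The compiled YAML would then be uploaded and scheduled on a Kubeflow Pipelines cluster; in practice each component would pull in its own dependencies via the base image or packages list.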

Posted 2 weeks ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Bengaluru, Karnataka, India

On-site

What You'll Do
Design & Build: Develop multi-agent AI systems for the UCaaS platform, focusing on NLP, speech recognition, audio intelligence, and LLM-powered interactions.
Rapid Experiments: Prototype with open-weight models (Mistral, LLaMA, Whisper, etc.) and scale what works.
Code for Excellence: Write robust code for AI/ML libraries and champion software best practices.
Optimize for Scale & Cost: Engineer scalable AI pipelines, focusing on latency, throughput, and cloud costs.
Innovate with LLMs: Fine-tune and deploy LLMs for summarization, sentiment and intent detection, RAG pipelines, multi-modal inputs, and multi-agentic task automation.
Own the Stack: Lead multi-agentic environments from data to deployment and scale.
Collaborate & Lead: Integrate AI with cross-functional teams and mentor junior engineers.
What You Bring
Experience: 6-10 years of professional experience, with a mandatory minimum of 2 years in a hands-on role on a real-world, production-level AI/ML project.
Coding & Design: Expert-level programming skills in Python and proficiency in designing and building scalable, distributed systems.
ML/AI Expertise: Deep, hands-on experience with core ML/AI libraries and frameworks, agentic systems, and RAG pipelines; hands-on experience using vector DBs.
LLM Proficiency: Proven experience working with and fine-tuning Large Language Models (LLMs).
Scalability & Optimization Mindset: Demonstrated experience building and scaling AI services in the cloud, with a strong focus on performance tuning and cost optimization of agents specifically.
Nice to Have
You've tried out agent frameworks like LangGraph, CrewAI, or AutoGen and can explain the pros and cons of autonomous vs. orchestrated agents.
Experience with MLOps tools and platforms (e.g., Kubeflow, MLflow, SageMaker).
Real-time streaming AI experience: token-level generation, WebRTC integration, or live transcription systems.
Contributions to open-source AI/ML projects or a strong public portfolio (GitHub, Kaggle).

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Haryana

On-site

We are looking for a highly skilled AI/ML Engineer with expertise in developing machine learning solutions, utilizing graph databases like Neo4j, and building scalable production systems. The ideal candidate has a strong passion for applying artificial intelligence, machine learning, and data science techniques to solve real-world problems, along with experience in dealing with complex rules, logic, and reasoning systems.

Your responsibilities will include designing, developing, and deploying machine learning models and algorithms for production environments, ensuring their scalability and robustness. You will use graph databases, particularly Neo4j, to model, query, and analyze data relationships in large-scale connected data systems. Building and optimizing ML pipelines so that they are production-ready and capable of handling real-time data volumes will be a crucial aspect of your role.

In addition, you will develop rule-based systems and collaborate with data scientists, software engineers, and product teams to integrate ML solutions into existing products and platforms. Implementing algorithms for entity resolution, recommendation engines, fraud detection, or other graph-related tasks using graph-based ML techniques will also be part of your responsibilities. You will work with large datasets and perform exploratory data analysis, feature engineering, and model evaluation. Post-deployment, you will monitor, test, and iterate on ML models to ensure continuous improvement in model performance and adaptability. Furthermore, you will participate in architectural decisions to ensure efficient use of graph databases and ML models, while staying up to date with the latest advancements in AI/ML research, particularly in graph-based machine learning, reasoning systems, and logical AI.

Requirements:
- Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field.
- 1+ years of experience in AI/ML engineering or a related field, with hands-on experience building and deploying ML models in production environments and personal projects using graph databases.
- Proficiency in Neo4j or other graph databases, with a deep understanding of the Cypher query language and graph theory concepts (illustrated in the sketch after this listing).
- Strong programming skills in Python, Java, or Scala, along with experience using ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with machine learning pipelines and tools like Airflow, Kubeflow, or MLflow for model tracking and deployment.
- Hands-on experience with rule engines or logic programming systems (e.g., Drools, Prolog).
- Experience with cloud platforms such as AWS, GCP, or Azure for ML deployments.
- Familiarity with containerization and orchestration technologies like Docker and Kubernetes.
- Experience working with large datasets, SQL/NoSQL databases, and handling data preprocessing at scale.
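Since the role above calls out Neo4j and the Cypher query language, here is a small, hedged sketch (not from the posting) of running a relationship query with the official neo4j Python driver; the connection URI, credentials, and the Person/KNOWS schema are made-up placeholders.

```python
# Query a toy social graph for a person's direct connections using Cypher.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"      # assumed local Neo4j instance
AUTH = ("neo4j", "password")       # placeholder credentials

CYPHER = """
MATCH (p:Person {name: $name})-[:KNOWS]->(friend:Person)
RETURN friend.name AS friend_name
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, summary, keys = driver.execute_query(CYPHER, name="Alice")
    for record in records:
        print(record["friend_name"])
```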

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Summary
We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products.

Key Responsibilities
Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment.
Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs.
Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders.
Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes (see the sketch after this listing).
Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models.
Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors.

Required Qualifications & Skills:
Master's or PhD in Computer Science, Statistics, Mathematics, or a related field.
5+ years of practical experience in data science, including deploying models to production.
Expertise in Python and SQL.
Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
Competence in data visualization tools like Tableau, Power BI, and matplotlib.
Comprehensive knowledge of statistics, machine learning principles, and experimental design.
Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control.
Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD).
Familiarity with NLP, time series forecasting, or recommendation systems is a plus.
Knowledge of big data technologies (Spark, Hive, Presto) is desirable.
Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)
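As a rough, hedged illustration of the A/B-testing responsibility mentioned above (not part of the posting), the snippet below compares conversion rates between a control and a variant group; the counts are invented.

```python
# Two-sample significance check on made-up conversion data.
from scipy import stats

control = [1] * 120 + [0] * 880   # 12.0% conversion in control
variant = [1] * 145 + [0] * 855   # 14.5% conversion in variant

t_stat, p_value = stats.ttest_ind(control, variant)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# For binary outcomes a proportions z-test (statsmodels' proportions_ztest) is the
# more conventional choice; the plain t-test keeps this sketch dependency-light.
```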

Posted 3 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Hi, please find the JD below and share your updated resume.
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
3+ years of experience in MLOps, DevOps, or ML Engineering roles.
Strong experience with containerization (Docker) and orchestration (Kubernetes).
Proficiency in Python and experience working with ML libraries like TensorFlow, PyTorch, or scikit-learn.
Familiarity with ML pipeline tools such as MLflow, Kubeflow, TFX, Airflow, or SageMaker Pipelines (a minimal MLflow tracking sketch follows this listing).
Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools (Terraform, CloudFormation).
Solid understanding of CI/CD principles, especially as applied to machine learning workflows.
Nice-to-Have:
Experience with feature stores, model registries, and metadata tracking.
Familiarity with data versioning tools like DVC or LakeFS.
Exposure to data observability and monitoring tools.
Knowledge of responsible AI practices including fairness, bias detection, and explainability.
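For context on the ML pipeline tools listed above, here is a minimal, hedged MLflow tracking sketch; the toy dataset, parameters, and run name are illustrative only, not anything specified by the posting.

```python
# Log parameters, a metric, and a model artifact for one training run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="demo-run"):
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, clf.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(clf, artifact_path="model")  # saves the model artifact
```

Runs logged this way show up in the MLflow tracking UI, where parameters and metrics can be compared across experiments.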

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

About Logik
Are you driven to innovate? Are you energized by the excitement of building a high-growth startup with winning technology and proven product-market fit? Are you looking to join a team of A-players who keep customers first and take their work, but not themselves, seriously?

Logik was founded in 2021 by the godfathers of CPQ: our CEO Christopher Shutts and our Executive Chairman Godard Abel, who together co-founded BigMachines, the first-ever CPQ technology vendor, in the early 2000s. Today, we're reimagining what CPQ can and should be with our composable, AI-enabled platform that provides advanced configuration, transaction management, guided selling, and more. We're a well-funded and fast-growing startup disrupting the CPQ space, with founders that created the category and a platform that's pushing boundaries in configure-price-quote and complex commerce. We're looking for an exceptional AI Backend Engineer to join our Bangalore team and help us build the next generation of AI-powered solutions.

Position Summary: As a Senior Backend Engineer (AI & ML Specialization), you will play a crucial role in designing and developing scalable, high-performance backend systems that support our AI models and data pipelines. You will work closely with data scientists, machine learning engineers, and other backend developers to ensure our platform delivers reliable, real-time insights and predictions.

Key Responsibilities: Design and develop robust, scalable backend services and APIs that handle large volumes of data and traffic. Implement data ingestion and processing pipelines to efficiently collect, store, and transform data for AI models. Develop and maintain efficient data storage solutions, including databases and data warehouses. Optimize backend systems for performance, scalability, and security. Collaborate with data scientists and machine learning engineers to integrate AI models into backend infrastructure. Collaborate with DevOps to implement MLOps and integrate models and data engineering pipelines into highly available and reliable tech stacks. Troubleshoot and resolve technical issues related to backend systems and data pipelines. Stay up to date with the latest advancements in backend technologies and AI.

Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 6+ years of experience in backend development, with a focus on machine learning. Strong proficiency in Python and experience with popular frameworks such as Flask, Django, or FastAPI. Experience with SQL and NoSQL databases such as PostgreSQL, MySQL, MongoDB, or Redis. Experience with cloud platforms such as AWS, Azure, or GCP. Knowledge of data engineering, data pipelines, and data processing frameworks such as Apache Airflow, Apache Spark, or Dask. Knowledge of MLOps frameworks such as Kubeflow and experience with containerization technologies such as Docker and Kubernetes. Knowledge of distributed computing and parallel programming. Excellent communication and problem-solving skills. Ability to work independently and as part of a team.

Preferred Skills: Understanding of AI concepts and machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus. 3+ years of experience with Java or Go is a plus. Experience with real-time data processing and streaming technologies.

What We Offer: Competitive salary and benefits package. Opportunity to work on cutting-edge AI projects. Collaborative and supportive work environment. Continuous learning and professional development opportunities.

Posted 3 weeks ago

Apply

4.0 - 10.0 years

3 - 6 Lacs

Thiruvananthapuram, Kerala, India

On-site

The Lead is responsible for overseeing the migration and modernization of the customer's platform and database services from Google Cloud Platform (GCP) to Amazon Web Services (AWS). The Lead should provide technical guidance and mentorship to the data engineering team. Design a robust AWS architecture for the database services, utilizing services such as Amazon RDS, Amazon Redshift, Amazon Aurora, Amazon Athena, and Amazon DynamoDB. Implement migration strategies and modernization processes, ensuring compatibility, performance, and adherence to best practices. Collaborate closely with application and infrastructure teams to facilitate seamless integration of database services. Identify and implement opportunities for optimization and performance improvements during the migration. Ensure compliance with data security, governance, and regulatory requirements during the migration.

Must-have skills: Proven experience in cloud migration and modernization projects, with a strong understanding of AWS and GCP services. Strong understanding of cloud architecture principles and best practices. Experience with data migration tools and strategies for AWS. Understanding of AWS services such as S3, Redshift, RDS, and Data Pipeline. Skills in designing data models for both GCP and AWS environments. Proficiency in ETL (Extract, Transform, Load) processes and tools, including AWS Glue and Apache Beam. Knowledge of both SQL and NoSQL databases. Proficient in scripting and automation using Python, Bash, or similar languages to automate data transfer and streamline migration processes. Strong ability to work effectively with cross-functional teams. Proficient in documenting migration processes and providing training/support to stakeholders. Experience in project planning, including defining timelines, milestones, and deliverables. Skilled in scoping, planning, and executing data-related projects while adhering to timelines and budgets. Ability to convey data requirements and migration benefits to non-technical stakeholders. Strong documentation skills for creating architectural documents and communicating complex concepts to non-technical stakeholders.

Certifications: For Lead Architects, at least 2 Professional- or Specialty-level AWS certifications. For Associate Architects, at least 1 Professional- or Specialty-level AWS certification. For Senior Data Engineers, at least 1 Associate-level AWS certification.

Good-to-have skills: Relevant AWS certifications (e.g., AWS Certified Solutions Architect) are advantageous. Knowledge of big data technologies like Hadoop, Spark, and Kafka for handling large datasets.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

As a Senior Client Solutions Partner, you will be part of the core sales and GTM team of Quantiphi and will be responsible for executing end-to-end sales processes in a B2B environment. Prepare and deliver technical presentations explaining products or services to customers and prospective customers. Manage customer communication and relationships. Engage in and drive end-to-end pre-sales activities for business development for the company in data analytics. Identify and prospect the full range of opportunities; develop business collateral based on the latest developments in data modernization to showcase the potential of data for the enterprise. Experience in handling, or being hands-on with, data modernization projects. Work in conjunction with the Solution Architects and Data Engineering teams to analyze and prospect business problems to be solved using large volumes of quantitative and qualitative data, and develop a point of view to build a solution for those problems. Apply the right analysis frameworks to develop creative solutions to complex business problems. Plan and execute both short-term and long-range projects, and manage teamwork and client expectations. Challenge and inspire customers and peers to solve difficult problems with ambitious and novel solutions. Work with the team to identify and qualify business opportunities. Identify key customer technical objections and develop a strategy to resolve technical blockers. Work hands-on with customers to demonstrate and prototype Google Cloud product integrations in customer/partner environments and manage the technical relationship with Google's customers. Recommend integration strategies, enterprise architectures, platforms, and application infrastructure required to successfully implement a complete solution using best practices on Google Cloud. This includes understanding, analyzing, and prospecting complex business problems to be solved using data solutions and AI applications in a variety of industries including Healthcare, Media, BFSI, CPG, Retail, and many others. Travel to customer sites, conferences, and other related events as required.

Responsibilities: You will be involved in the development of new business opportunities and value-added services, which requires a high level of creativity, learning potential, and deep quantitative subject matter expertise; self-driven individuals willing to learn on the go are preferred. Strong team player. Degree in Business (MBA) or Computer Science Engineering. Good communication, abstraction, analytical, and presentation skills. Experience in B2B sales, customer communication, and relationship management. Experience and knowledge of critical phases of the sales process, including requirement gathering, sales planning, solution scoping, proposal writing, and presentation. Data-driven mindset: your plans and actions are backed not just by gut feeling but also by customer, industry, and market research. Knowledge of and willingness to learn and apply emerging trends in business research, data engineering, and cloud. Excellent aptitude in business analysis and awareness of quantitative analysis techniques. Excellent communication (both written and verbal) and articulation skills (mandatory). Strong team player with the ability to collaborate with a cross-functional team. Experience with sales reporting.
Self-driven, with a strong aptitude to work in an entrepreneurial, fast-paced environment with minimal supervision and a passion for developing new value-added, data-based solutions for clients across a variety of industries.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

10 - 16 Lacs

Pune

Hybrid

We are hiring an AI/ML Engineer with 6-8 years of experience. Job Location: Pune. Notice period: immediate to 30 days.

AI/ML Engineer Job Purpose: To leverage expertise in machine learning operations (ML Ops) to develop and maintain robust, scalable, and efficient ML infrastructure, enhancing a financial crime product and ensuring the seamless deployment and monitoring of ML models.

Job Description: As an ML Ops Engineer - Senior Consultant, you will play a crucial role in a project aimed at overhauling and enhancing a financial crime product. This includes upgrading back-end components, data structures, and reporting capabilities. You will be responsible for developing and maintaining the ML infrastructure, ensuring the seamless deployment, monitoring, and management of ML models. This role involves collaborating with cross-functional teams to gather requirements, ensuring data accuracy, and providing actionable insights to support strategic decision-making.

Key Responsibilities: • Develop and maintain robust, scalable, and efficient ML infrastructure. • Ensure seamless deployment, monitoring, and management of ML models. • Collaborate with cross-functional teams to gather and analyze requirements. • Ensure data accuracy and integrity across all ML Ops solutions. • Provide actionable insights to support strategic decision-making. • Contribute to the overhaul and enhancement of back-end components, data structures, and reporting capabilities. • Support compliance and regulatory reporting needs. • Troubleshoot and resolve ML Ops-related issues in a timely manner. • Stay updated with the latest trends and best practices in ML Ops. • Mentor junior team members and provide guidance on best practices in ML Ops.

Skills Required: • Proficiency in ML Ops tools and frameworks (e.g., MLflow, Kubeflow, TensorFlow Extended) • Strong programming skills (e.g., Python, Bash) • Experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitLab CI) • Excellent analytical and problem-solving abilities • Effective communication and collaboration skills • Attention to detail and commitment to data accuracy

Skills Desired: • Experience with cloud platforms (e.g., AWS, Azure, GCP) • Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes) • Familiarity with big data technologies (e.g., Hadoop, Spark) • Ability to work in an agile development environment • Experience in the financial crime domain

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mohali

Hybrid

We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions.

Key Responsibilities:
Architecture & Strategy: Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving. Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.). Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows. Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments.
Technical Leadership: Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies. Lead architectural reviews and ensure compliance with security, performance, and governance policies. Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps.
Model Lifecycle & Engineering: Oversee implementation of the model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows. Define architecture for Retrieval-Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc. (a retrieval sketch follows this listing). Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models.
Data & Infrastructure: Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores. Architect systems for synthetic data generation, augmentation, and real-time streaming inputs. Define solutions leveraging data lakes, data warehouses, and graph databases.
Client Engagement / Product Integration: Interface with business/product stakeholders to align AI strategy with KPIs. Collaborate with DevOps teams to integrate models into products via APIs/microservices.

Required Skills & Experience:
Core Skills: Strong foundation in AI/ML/DL (Scikit-learn, TensorFlow, PyTorch, Transformers, LangChain, etc.). Advanced knowledge of Generative AI (LLMs, diffusion models, multimodal models, etc.). Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes). Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar). Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design.
Architecture & Programming: Solid experience with architectural patterns (microservices, event-driven, serverless). Proficient in Python and optionally Java/Go. Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana).
Tools & Platforms: ML lifecycle tools: MLflow, Kubeflow, Vertex AI, SageMaker, Hugging Face, etc. Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have). Knowledge of security, privacy, and compliance (GDPR, SOC2, HIPAA, etc.).
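Because the architecture work above leans on RAG and vector databases, here is a small, hedged sketch (not part of the posting) of the retrieval step using FAISS with a sentence-transformers embedding model; the model name and the documents are placeholders.

```python
# Embed a few documents, index them, and retrieve the closest matches for a query.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Kubeflow orchestrates ML pipelines on Kubernetes.",
    "MLflow tracks experiments and packages models.",
    "FAISS performs fast similarity search over dense vectors.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])      # inner product ~ cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(["How do I schedule ML pipelines?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
for score, idx in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[idx]}")
```

In a full RAG pipeline the retrieved passages would be injected into an LLM prompt; managed vector databases such as Weaviate or Pinecone replace the in-memory FAISS index at scale.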

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects.

Key Skills & Experience:
Programming: Python (TensorFlow, PyTorch, Scikit-learn, Pandas).
Machine Learning: Supervised, Unsupervised, Deep Learning, NLP, Computer Vision.
Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML (a minimal FastAPI serving sketch follows this listing).
MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines.
Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB).
Soft Skills: Strong analytical and problem-solving mindset. Passion for AI innovation and continuous learning. Excellent teamwork and communication abilities.
Qualifications: Bachelor's/Master's in Computer Science, AI, Data Science, or related fields. AI/ML certifications are a plus.
Career Level - IC4

Diversity & Inclusion: An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.
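As a hedged illustration of the model-deployment stack named above (Flask/FastAPI), the snippet below sketches a minimal FastAPI prediction endpoint; the model file name and feature schema are placeholders, not anything specified by the posting.

```python
# Serve a pre-trained scikit-learn model behind a single /predict endpoint.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")   # assumed pre-trained model saved with joblib


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.asarray(features.values, dtype=float).reshape(1, -1)
    prediction = model.predict(X)
    return {"prediction": prediction.tolist()}

# Run locally with:  uvicorn main:app --reload
```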

Posted 3 weeks ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Bengaluru

Hybrid

The Opportunity
Are you passionate about building intelligent, enterprise-grade AI systems using Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and agentic frameworks? At Nutanix, we're looking for a skilled and experienced AI/ML engineer to help shape the future of generative AI within our SaaS Engineering organization. As a senior member of our team, you'll work at the cutting edge of AI innovation, developing and deploying state-of-the-art LLMs and embedding models, optimizing model performance, and building scalable ML pipelines with real-world impact.

About the Team
At Nutanix, you will be joining a dynamic central platform team that plays a pivotal role in revolutionizing our approach to artificial intelligence and machine learning within the SaaS Engineering group. Comprising eight experienced engineers, our team specializes in addressing the GenAI, machine learning, and data science needs of various squads within the organization. Our diverse skill set ensures we collaborate effectively to create innovative solutions, leveraging the latest advancements in technology to drive our initiatives forward.

Your Role
Design and deploy Retrieval-Augmented Generation (RAG) pipelines. Build, fine-tune, and deploy LLMs and embedding models such as LLaMA 3, Gemma, Mistral, and other domain-specific transformers. Fine-tune both LLMs and embedding models for specialized enterprise tasks including Q&A, summarization, classification, and conversational AI. Develop and maintain agentic frameworks capable of orchestrating task-specific intelligent agents with memory, planning, and tool-use capabilities. Build and evaluate custom agents for use cases like document analysis, data querying, and interactive user support. Implement evaluation frameworks for LLM outputs, including both automated metrics and task-specific success criteria. Work closely with data engineering teams to develop custom training pipelines and extract meaningful insights from large-scale internal datasets. Develop MLOps pipelines for training, deployment, and monitoring using tools like MLflow, Kubeflow, and custom CI/CD workflows. Deploy optimized inference endpoints for high-performance, low-latency model serving at scale. Manage vectorization workflows using advanced embedding models and vector databases for semantic search and content retrieval. Demonstrate working knowledge of LangChain, OpenAI function-calling, vector databases, and scalable retrieval logic. Work with Kubernetes clusters to provision, scale, and monitor AI/ML workloads; understand GPU, CPU, and storage hardware requirements for efficient deployment. Collaborate with cross-functional teams including backend, data, and infrastructure engineers to integrate models seamlessly into production systems.

What You Will Bring
Bachelor's, Master's, or Ph.D. in Computer Science, Machine Learning, Applied Math, or a related field. 5+ years of hands-on experience building, deploying, and maintaining AI/ML systems in production environments. Strong foundation in MLOps, including model versioning, CI/CD, monitoring, and retraining workflows. In-depth understanding of Kubernetes (K8s) and GPU-based infrastructure, including container orchestration and GPU scheduling for AI workloads. Experience working with Elasticsearch for semantic search and integrating it within RAG or LLM-driven architectures. Proficient in Python (core ML libraries like PyTorch, Pandas, and NumPy).
Hands-on experience using Jupyter Notebooks for experimentation, documentation, and collaboration. Comfortable with Unix-based systems, shell scripting, and command-line tooling for ML operations and debugging. Familiarity with LangChain, LLM orchestration, and vector database integration. Strong collaboration and communication skills, with the ability to mentor junior team members and drive initiatives independently. Open-source contributions or published work in the ML/AI domain are a plus.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 27 Lacs

Gurugram

Work from Office

Role & responsibilities: Proven experience with MLOps practices and tools (Kubeflow, Kubernetes, LakeFS). Must have experience with the current AI landscape and its applications. Proficiency in Python and familiarity with containerization tools like Docker. Expertise in building open-source Kubernetes from scratch, including in an air-gapped HSDN environment. Excellent problem-solving skills and ability to work in a collaborative team environment.

Preferred candidate profile: 6+ years of experience. Expertise in building open-source Kubernetes from scratch, including in an air-gapped HSDN environment.

Please share your updated resume at mantasha@bharatjobs.com along with the details below:
Total exp:
Rel exp:
C.CTC:
E.CTC:
NP:
Current location:
DOB:

Posted 4 weeks ago

Apply

7.0 - 12.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Lead Business Execution Consultant. The Shared Services Operations - Operational Excellence team is seeking an Applied Data Scientist with a strong data science background to deliver traditional AI and Generative AI solutions that drive operational efficiencies and improve internal and external customer experiences through the use of AI. The candidate needs to possess a mix of technical expertise, creative problem-solving skills, and the ability to align AI solutions with business needs.

In this role, you will: Lead complex initiatives including creation, implementation, documentation, validation, articulation, and defense of highly advanced AI and Generative AI solutions. Deliver solutions for short- and long-term objectives and provide analytical support for a wide array of business initiatives. Utilize neural network architectures, including transformer-based models such as GPT, BERT, and others, for driving operational efficiencies. Apply diffusion models or multimodal AI depending on the use case. Fine-tune, train, and deploy large language models (LLMs) in on-premises, cloud, or hybrid environments. Demonstrate a strong grasp of tokenization, embeddings, and model evaluation metrics. Present results of analyses, solution recommendations, and AI-driven strategies for a variety of business initiatives.

Required Qualifications: 7+ years of Business Execution, Implementation, or Strategic Planning experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: 7+ years of Data Science experience, or equivalent demonstrated through one or a combination of the following: work experience, training, research, and education. Bachelor's/Master's degree in a discipline such as Data Science, Computer Science, Statistics, or Mathematics. Expertise in Python and major frameworks like TensorFlow, PyTorch, and HuggingFace. Experience using ML Ops tools (e.g., MLflow, Kubeflow) for scaling and deploying models in production. Experience working on Google Cloud Platform and expertise in using GCP services. Expertise in Scala for processing large-scale datasets. Proficiency in Java and JavaScript (Node.js) for back-end integrations and for building interactive AI-based web applications or APIs. Experience with SQL and NoSQL languages for managing structured, unstructured, and semi-structured data for AI and Generative AI applications. Critical thinking and strong problem-solving skills. Ability to learn the latest technologies, keep up with trends in the Gen-AI space, and apply them to business problems quickly. Ability to multi-task and prioritize between projects, and to work independently and as part of a team. A graduate degree from a top-tier university (e.g., IIT, ISI, IIIT, IIM) is preferred.

Job Expectations: Work individually or as part of a team on multiple AI and Generative AI projects and work closely with business partners across the organization. Mentor and coach budding Data Scientists on developing and implementing AI solutions. Perform various complex activities related to neural networks and transformer-based models. Provide analytical support for developing, evaluating, implementing, monitoring, and executing models across business verticals using emerging technologies.
Expert knowledge of working with large datasets using SQL or NoSQL and presenting conclusions to key stakeholders. Establish a consistent and collaborative framework with the business and act as the primary point of contact in delivering solutions. Experience building quick prototypes to check feasibility and value to the business. Expert in developing and maintaining a modular codebase for reusability. Review and validate models and help improve model performance under the purview of regulatory requirements. Work closely with technology teams to deploy the models to production. Prepare detailed project documentation, for both internal and external use, that complies with regulatory and internal audit requirements.

Posted 1 month ago

Apply

10.0 - 15.0 years

35 - 45 Lacs

Bengaluru

Work from Office

Title: AI/ML Architect
Location: Onsite - Bangalore
Experience: 10+ years

Position Summary: We are seeking an experienced AI/ML Architect to lead the design and deployment of scalable AI solutions. This role requires a strong blend of technical depth, systems thinking, and leadership in machine learning, computer vision, and real-time analytics. You will drive the architecture for edge, on-prem, and cloud-based AI systems, integrating third-party data sources, sensor data, and vision data to enable predictive, prescriptive, and autonomous operations across industrial environments.

Key Responsibilities:
Architecture & Strategy: Define the end-to-end architecture for AI/ML systems, including time series forecasting, computer vision, and real-time classification. Design scalable ML pipelines (training, validation, deployment, retraining) using MLOps best practices. Architect hybrid deployment models supporting both cloud and edge inference for low-latency processing.
Model Integration: Guide the integration of ML models into the IIoT platform for real-time insights, alerting, and decision support. Support model fusion strategies combining disparate data sources and sensor streams with visual data (e.g., object detection + telemetry + third-party data ingestion).
MLOps & Engineering: Define and implement ML lifecycle tooling, including version control, CI/CD, experiment tracking, and drift detection. Ensure compliance, security, and auditability of deployed ML models.
Collaboration & Leadership: Collaborate with Data Scientists, ML Engineers, DevOps, Platform, and Product teams to align AI efforts with business goals. Mentor engineering and data teams in AI system design, optimization, and deployment strategies. Stay ahead of AI research and industrial best practices; evaluate and recommend emerging technologies (e.g., LLMs, vision transformers, foundation models).

Must-Have Qualifications: Bachelor's or Master's degree in Computer Science, AI/ML, Engineering, or a related technical field. 8+ years of experience in AI/ML development, with 3+ years architecting AI solutions at scale. Deep understanding of ML frameworks (TensorFlow, PyTorch), time series modeling, and computer vision. Proven experience with object detection, facial recognition, intrusion detection, and anomaly detection in video or sensor environments. Experience in MLOps (MLflow, TFX, Kubeflow, SageMaker, etc.) and model deployment on Kubernetes/Docker. Proficiency in edge AI (Jetson, Coral TPU, OpenVINO) and cloud platforms (AWS, Azure, GCP).

Nice-to-Have Skills: Knowledge of stream processing (Kafka, Spark Streaming, Flink). Familiarity with OT systems and IIoT protocols (MQTT, OPC-UA). Understanding of regulatory and safety compliance in AI/vision for industrial settings. Experience with charts, dashboards, and integrating AI with front-end systems (e.g., alerts, maps, command center UIs).

Role Impact: As AI/ML Architect, you will shape the intelligence layer of our IIoT platform, enabling smarter, safer, and more efficient industrial operations through AI. You will bridge research and real-world impact, ensuring our AI stack is scalable, explainable, and production-grade from day one.

Posted 1 month ago

Apply

6.0 - 8.0 years

25 - 40 Lacs

Gurugram

Work from Office

About this role: Lead Software Engineer (AI) position with experience in classic and generative AI techniques, responsible for the design, implementation, and support of Python-based applications to help fulfill our Research & Consulting Delivery strategy.

What you'll do: Deliver client engagements that use AI rapidly, on the order of a few weeks. Stay on top of current tools, techniques, and frameworks to be able to use and advise clients on them. Build proofs of concept rapidly, to learn and adapt to changing market needs. Support building internal applications for use by associates to improve productivity.

What you'll need: 6-8 years of experience in classic AI techniques and at least 1.5 years in generative AI techniques. Demonstrated ability to run short development cycles and a solid grasp of building software in a collaborative team setting.

Must have: Experience building applications for knowledge search and summarization, frameworks to evaluate and compare the performance of different GenAI techniques, measuring and improving the accuracy and helpfulness of generative responses, and implementing observability. Experience with agentic AI frameworks, RAG, embedding models, and vector DBs. Experience working with Python libraries like Pandas, Scikit-Learn, NumPy, and SciPy is required. Experience deploying applications to cloud platforms such as Azure and AWS. Solid grasp of building software in a collaborative team setting, using agile scrum and tools like Jira and GitHub.

Nice to have: Experience in fine-tuning language models. Familiarity with AWS Bedrock / Azure AI / Databricks services. Experience with machine learning models and techniques like NLP, BERT, Transformers, and deep learning. Experience with MLOps frameworks like Kubeflow, MLflow, DataRobot, Airflow, etc. Experience building scalable data models and performing complex relational database queries using SQL (Oracle, MySQL, PostgreSQL).

Who you are: Excellent written, verbal, and interpersonal communication skills, with the ability to present technical information clearly and concisely to IT leaders and business stakeholders. Effective time management skills and the ability to meet deadlines. Excellent communication skills when interacting with technical and business audiences. Excellent organization, multitasking, and prioritization skills. Willingness and aptitude to embrace new technologies and ideas and to master concepts rapidly. Intellectual curiosity, passion for technology, and keeping up with new trends. Delivering project work on time, within budget, and with high quality. Demonstrated ability to run short development cycles.

Posted 1 month ago

Apply

8.0 - 12.0 years

35 - 50 Lacs

Bengaluru

Work from Office

My profile - linkedin.com/in/yashsharma1608

Position: AI Architect (Gen AI)
Experience: 8-10 years
Notice Period: Immediate to 15 days
Budget: up to 45-50 LPA
Location: Bangalore
Note: candidates should have a minimum of 3 to 4 years in AI; SaaS company experience is mandatory; product-based company experience is mandatory.

Discuss the feasibility of AI/ML use cases along with architectural design with business teams and translate the vision of business leaders into realistic technical implementations. Play a key role in defining the AI architecture and selecting appropriate technologies from a pool of open-source and commercial offerings. Design and implement robust ML infrastructure and deployment pipelines. Establish comprehensive MLOps practices for model training, versioning, and deployment. Lead the development of HR-specialized language models (SLMs). Implement model monitoring, observability, and performance optimization frameworks. Develop and execute fine-tuning strategies for large language models. Create and maintain data quality assessment and validation processes. Design model versioning systems and A/B testing frameworks. Define technical standards and best practices for AI development. Optimize infrastructure for cost, performance, and scalability.

Required Qualifications: 7+ years of experience in ML/AI engineering or related technical roles. 3+ years of hands-on experience with MLOps and production ML systems. Demonstrated expertise in fine-tuning and adapting foundation models. Strong knowledge of model serving infrastructure and orchestration. Proficiency with MLOps tools (MLflow, Kubeflow, Weights & Biases, etc.). Experience implementing model versioning and A/B testing frameworks. Strong background in data quality methodologies for ML training. Proficiency in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face). Experience with cloud-based ML platforms (AWS, Azure, Google Cloud). Proven track record of deploying ML models at scale.

Preferred Qualifications: Experience developing AI applications for enterprise software domains. Knowledge of distributed training techniques and infrastructure. Experience with retrieval-augmented generation (RAG) systems. Familiarity with vector databases (Pinecone, Weaviate, Milvus). Understanding of responsible AI practices and bias mitigation. Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field.

What We Offer: Opportunity to shape AI strategy for a fast-growing HR technology leader. Collaborative environment focused on innovation and impact. Competitive compensation package. Professional development opportunities. Flexible work arrangements.

Qualified candidates who are passionate about applying cutting-edge AI to transform HR technology are encouraged to apply.

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Location: Bangalore
Experience: 5-7 Years
Notice Period: Immediate to 15 Days

Overview: We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines that support Machine Learning (ML) and analytics workloads. This role requires close collaboration with ML teams, ensuring high data quality, system reliability, and performance across complex data environments.

Key Responsibilities: Collaborate with ML teams to deploy, monitor, and optimize models in production. Build and maintain scalable, high-performance data pipelines and infrastructure. Implement statistical analysis and experimental design to validate data and model performance. Ensure data quality, governance, and system efficiency in large-scale architectures. Monitor, troubleshoot, and optimize data systems and workflows.

Required Skills: 5-6 years of experience in data engineering with a strong background in Python, SQL/NoSQL, and Apache Spark. Solid understanding of data warehousing, data modeling, and architecture principles. Experience with cloud platforms such as AWS, GCP, or Azure. Hands-on experience with ML pipeline tools (e.g., MLflow, Kubeflow). Familiarity with search, recommendation systems, or NLP technologies. Strong grasp of statistics and experimental design in ML contexts. Proactive problem-solving skills and the ability to work independently or in teams.

Posted 1 month ago

Apply

11.0 - 20.0 years

40 - 50 Lacs

Pune, Chennai, Bengaluru

Hybrid

Senior xOps Specialist – AIOps, MLOps & DataOps Architect
Location: Chennai, Pune
Employment Type: Full-time - Hybrid
Experience Required: 12-15 years

Job Summary: We are seeking a Senior xOps Specialist to architect, implement, and optimize AI-driven operational frameworks across AIOps, MLOps, and DataOps. The ideal candidate will design and enhance intelligent automation, predictive analytics, and resilient pipelines for large-scale data engineering, AI/ML deployments, and IT operations. This role requires deep expertise in AI/ML automation, data-driven DevOps strategies, observability frameworks, and cloud-native orchestration.

Key Responsibilities – Design & Architecture
AIOps: AI-Driven IT Operations & Automation: Architect AI-powered observability platforms, ensuring predictive incident detection and autonomous IT operations. Implement AI-driven root cause analysis (RCA) for proactive issue resolution and performance optimization. Design self-healing infrastructures leveraging machine learning models for anomaly detection and remediation workflows. Establish event-driven automation strategies, enabling autonomous infrastructure scaling and resilience engineering.
MLOps: Machine Learning Lifecycle Optimization: Architect end-to-end MLOps pipelines, ensuring automated model training, validation, deployment, and monitoring. Design CI/CD pipelines for ML models, embedding drift detection, continuous optimization, and model explainability. Implement feature engineering pipelines, leveraging data versioning, reproducibility, and intelligent retraining techniques. Ensure secure and scalable AI/ML environments, optimizing GPU-accelerated processing and cloud-native model serving.
DataOps: Scalable Data Engineering & Pipelines: Architect data processing frameworks, ensuring high-performance, real-time ingestion, transformation, and analytics. Build data observability platforms, enabling automated anomaly detection, data lineage tracking, and schema evolution. Design self-optimizing ETL pipelines, leveraging AI-driven workflows for data enrichment and transformation. Implement governance frameworks, ensuring data quality, security, and compliance with enterprise standards.
Automation & API Integration: Develop Python- or Go-based automation scripts for AI model orchestration, data pipeline optimization, and IT workflows. Architect event-driven xOps frameworks, enabling intelligent orchestration for real-time workload management. Implement AI-powered recommendations, optimizing resource allocation, cost efficiency, and performance benchmarking.
Cloud-Native & DevOps Integration: Embed AI/ML observability principles within DevOps pipelines, ensuring continuous monitoring and retraining cycles. Architect cloud-native solutions optimized for Kubernetes, containerized environments, and scalable AI workloads. Establish AIOps-driven cloud infrastructure strategies, automating incident response and operational intelligence.

Qualifications & Skills – xOps Expertise: Deep expertise in AIOps, MLOps, and DataOps, designing AI-driven operational frameworks. Proficiency in automation scripting, leveraging Python, Go, and AI/ML orchestration tools. Strong knowledge of AI observability, ensuring resilient IT operations and predictive analytics. Extensive experience in cloud-native architectures, Kubernetes orchestration, and serverless AI workloads. Ability to troubleshoot complex AI/ML pipelines, ensuring optimal model performance and data integrity.
Preferred Certifications (Optional): AWS Certified Machine Learning Specialist. Google Cloud Professional Data Engineer. Certified Kubernetes Administrator (CKA). DevOps Automation & AIOps Certification.

Posted 1 month ago

Apply

5.0 - 7.0 years

30 - 32 Lacs

Hyderabad

Work from Office

Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Hyderabad)
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by InfraCloud Technologies Pvt Ltd)
(*Note: This is a requirement for one of Uplers' clients - IF)

What do you need for this opportunity?
Must-have skills required: Banking, Fintech, Product Engineering background, Python, FastAPI, Django, MLflow, Feast, Kubeflow, NumPy, Pandas, Big Data

IF is looking for: Product Engineer
Location: Narsingi, Hyderabad (5 days of work from the office)
The client is a payment gateway processing company.
Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We share the profile with the client, and they conduct one or two interviews.

About the Project: We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus.

Role and Responsibilities: Design, build, and optimize components of the ML engineering pipeline for scalability and performance. Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models. Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow. Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure. Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations. Continuously improve model deployment pipelines for reliability, monitoring, and automation.

Requirements: 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization. Solid knowledge of ML engineering principles and deployment best practices. Experience with Feast, Kubeflow, MLflow, or similar tools. Deep understanding of NumPy, Pandas, and data processing workflows. Exposure to big data environments and a good grasp of data science model workflows. Strong analytical and problem-solving skills with attention to detail. Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration. Excellent communication and collaboration skills.

Nice to Have: Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure). Familiarity with containerization technologies like Docker and orchestration using Kubernetes. Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks. Understanding of data protection, governance, or security aspects in ML pipelines.

Experience Required: 5+ years

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must have skills required: Machine Learning, ML, ML architectures and lifecycle, Airflow, Kubeflow, MLflow, Spark, Kubernetes, Docker, Python, SQL, machine learning platforms, BigQuery, GCS, Dataproc, AI Platform, Search Ranking, Deep Learning, Deep Learning Frameworks, PyTorch, TensorFlow

About the job
Candidates for this position are preferred to be based in Bangalore, India and will be expected to comply with their team's hybrid work schedule requirements.

Who We Are
Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of Advertiser goals while showing highly relevant and engaging Ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and are leveraging state-of-the-art Machine Learning techniques.

What you'll do
Provide technical leadership in the development of an automated and intelligent advertising system by advancing the state of the art in machine learning techniques to support recommendations for Ads campaigns and other optimizations.
Design, build, deploy, and refine extensible, reusable, large-scale, and real-world platforms that optimize our ads experience.
Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped machine learning solutions.
Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
Research new developments in advertising, sort, and recommendations research and open-source packages, and incorporate them into our internal packages and systems.
Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

We Are a Match Because You Have:
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
6-9 years of industry experience in advanced machine learning and statistical modeling, including hands-on designing and building production models at scale.
Strong theoretical understanding of statistical models such as regression and clustering, and machine learning algorithms such as decision trees, neural networks, etc.
Familiarity with machine learning model development frameworks, machine learning orchestration, and pipelines, with experience in either Airflow, Kubeflow, or MLflow, as well as Spark, Kubernetes, Docker, Python, and SQL.
Proficiency in Python or one other high-level programming language.
Solid hands-on expertise deploying machine learning solutions into production.
Strong written and verbal communication skills, the ability to synthesize conclusions for non-experts, and an overall bias towards simplicity.

Nice to have
Familiarity with Machine Learning platforms offered by Google Cloud and how to implement them on a large scale (e.g. BigQuery, GCS, Dataproc, AI Notebooks).
Experience in computational advertising, bidding algorithms, or search ranking.
Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
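As a minimal sketch of the Airflow-style orchestration named in the must-have skills, the DAG below chains train, evaluate, and deploy steps. The DAG id, schedule, and task bodies are placeholders for illustration, not anything Wayfair-specific.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model(**context):
    print("train candidate ranking model on the latest features")

def evaluate_model(**context):
    print("evaluate the candidate model against a holdout set")

def deploy_model(**context):
    print("promote the model only if evaluation passed")

with DAG(
    dag_id="ads_model_training",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)

    train >> evaluate >> deploy

The same train-evaluate-deploy shape carries over to Kubeflow Pipelines; only the operator and component definitions change.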

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Thiruvananthapuram

Work from Office

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise.
6. Ensuring Quality: Ensure that systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project Management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge or experience working with self-hosted or managed LLMs.
3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge or experience in CI/CD, IaC, and Cloud Native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
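To make the model-deployment item (point 11) more concrete, here is a minimal sketch of serving a pre-trained scikit-learn model behind a FastAPI endpoint. The model file name, request schema, and run command are assumptions for illustration only.

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")   # assumes a scikit-learn model saved beforehand

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(request: PredictRequest):
    # Wrap the single feature vector in a list because scikit-learn expects a 2D array.
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}

# Typical local run (assuming uvicorn is installed): uvicorn main:app --reload

In production this endpoint would usually sit behind an API gateway or service mesh, which is where the security and traffic-management items (points 14-15) come in.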

Posted 1 month ago

Apply

5.0 - 7.0 years

14 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats.
Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
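As a hedged sketch of the Spark and Parquet work this listing mentions, the job below reads raw JSON events, applies a simple transformation, and writes a partitioned Parquet table. The paths and column names are illustrative assumptions, not Marktplaats-specific.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate_listings").getOrCreate()

# Hypothetical source and target paths; on AWS these would typically live in S3.
raw = spark.read.json("s3://example-bucket/raw/listings/")

curated = (
    raw.filter(F.col("price") > 0)
       .withColumn("ingest_date", F.current_date())
       .select("listing_id", "category", "price", "ingest_date")
)

(curated.write
        .mode("overwrite")
        .partitionBy("ingest_date")
        .parquet("s3://example-bucket/curated/listings/"))

On Databricks the write step would more likely target a Delta table, but the read-transform-write shape is the same.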

Posted 1 month ago

Apply

3.0 - 4.0 years

3 - 4 Lacs

Hyderabad, Telangana, India

On-site

Big Data Engineer / Infrastructure Developer

Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Experience with: Azure ADLS, Apache Parquet, Iceberg, Kubeflow, Airflow
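As a small, hedged illustration of the Parquet handling this role lists, the snippet below writes and reads back a partitioned Parquet dataset with PyArrow. Local paths stand in for ADLS, and the column names are made up for the example.

import pyarrow as pa
import pyarrow.parquet as pq

events = pa.table({
    "event_id": [1, 2, 3],
    "region": ["in", "in", "us"],
    "amount": [10.5, 3.2, 7.9],
})

# Write a dataset partitioned by region, then read the whole directory back.
pq.write_to_dataset(events, root_path="events_parquet", partition_cols=["region"])
restored = pq.read_table("events_parquet")
print(restored.to_pandas())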

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Notice period: Immediate to 15 days
Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)

We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products.

Key Responsibilities
Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment.
Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs.
Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders.
Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes.
Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models.
Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors.

Required Qualifications & Skills:
Master's or PhD in Computer Science, Statistics, Mathematics, or a related field.
5+ years of practical experience in data science, including deploying models to production.
Expertise in Python and SQL.
Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
Competence in data visualization tools like Tableau, Power BI, and matplotlib.
Comprehensive knowledge of statistics, machine learning principles, and experimental design.
Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control.
Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD).
Familiarity with NLP, time series forecasting, or recommendation systems is a plus.
Knowledge of big data technologies (Spark, Hive, Presto) is desirable.
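As a minimal sketch of the A/B-testing work described above, the snippet below runs a two-proportion z-test on conversion counts with statsmodels. The counts are invented purely for illustration.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and exposures for control vs. variant.
conversions = [420, 498]
exposures = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")

if p_value < 0.05:
    print("The difference in conversion rate is significant at the 5% level")
else:
    print("No significant difference detected")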

Posted 1 month ago

Apply