Jobs
Interviews

1845 MLflow Jobs - Page 19

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: MLOps Engineer. Experience: 5 years. Location: Remote (India).

Job Overview: We are seeking a highly skilled MLOps Engineer with over 5 years of experience in software engineering and machine learning operations. The ideal candidate will have hands-on experience with AWS (particularly SageMaker), MLflow, and other MLOps tools, and a strong understanding of building scalable, secure, and production-ready ML systems.

Key Responsibilities: Design, implement, and maintain scalable MLOps pipelines and infrastructure. Work with cross-functional teams to support the end-to-end ML lifecycle, including model development, deployment, monitoring, and governance. Leverage AWS services, particularly SageMaker, to manage model training and deployment. Apply best practices for CI/CD, model versioning, reproducibility, and operational monitoring. Participate in MLOps research and help drive innovation across the team. Contribute to the design of secure, reliable, and scalable ML solutions in a production environment.

Required Skills: 5+ years of experience in software engineering and MLOps. Strong experience with AWS, especially SageMaker. Experience with MLflow or similar tools for model tracking and lifecycle management. Familiarity with AWS DataZone is a strong plus. Proficiency in Python; experience with R, Scala, or Apache Spark is a plus. Solid understanding of software engineering principles, version control, and testing practices. Experience deploying and maintaining models in production environments.

Preferred Attributes: Strong analytical thinking and problem-solving skills. A proactive mindset with the ability to contribute to MLOps research and process improvements. Self-motivated and able to work effectively in a remote setting. (ref:hirist.tech)
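
For a concrete sense of the model tracking this role references, here is a minimal MLflow sketch; the experiment name, parameters, and scikit-learn model are illustrative placeholders rather than details from the posting.

```python
# Minimal MLflow experiment-tracking sketch (hypothetical experiment and parameter names).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```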

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

kolkata, west bengal

On-site

You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Providing technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases is a crucial aspect of your responsibilities. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle including training, deployment, and monitoring will be part of your day-to-day tasks. Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. Your role will require a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficiency in SQL, Python, PySpark, and a solid understanding of AI/ML workflows and tools are necessary. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements. As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.
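
As a rough illustration of the PySpark pipeline work described above, here is a minimal aggregation sketch; the sample data and the commented Delta/ADLS output path are assumptions, not details from the posting.

```python
# Minimal PySpark aggregation sketch (synthetic data; output path shown only as a comment).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-totals-demo").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-03", "retail", 120.0),
     ("2024-01-03", "online", 80.0),
     ("2024-01-04", "retail", 95.5)],
    ["order_date", "channel", "amount"],
)

daily_totals = (
    orders.groupBy("order_date", "channel")
          .agg(F.sum("amount").alias("total_amount"))
          .orderBy("order_date")
)
daily_totals.show()

# In an Azure lakehouse setup, a frame like this would typically be written to Delta/ADLS, e.g.:
# daily_totals.write.format("delta").mode("overwrite").save("abfss://<container>@<account>.dfs.core.windows.net/daily_totals")
```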

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

At Apple, phenomenal ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The Applied Machine Learning group is in search of a driven and results-oriented individual to help push the boundaries of what ML and AI can be used for, while researching new frontiers to raise the bar even higher. In this role, as part of the Applied Machine Learning team, you will be responsible for conceptualizing, designing, implementing, and running cutting-edge solutions. You will leverage best-suited ML, AI, and NLP techniques and contribute to building reusable platforms. The team has a broad impact, providing exposure to cross-functional projects with challenging use cases where modern technologies can solve real-world problems. We are seeking someone with a proven track record in crafting and developing high-quality enterprise software solutions. This position requires a hands-on individual who is passionate about problem-solving, able to think creatively about solutions, and able to lead by example when implementing them. The ideal candidate should be self-starting and upbeat, with excellent written and verbal communication skills, a hardworking collaborator, unafraid to question assumptions, and willing to take initiative. Minimum Qualifications: - Bachelor's in Computer Science or equivalent experience - 7+ years of experience in building and managing big-data platforms, with programming experience in Java - Working experience with search and information retrieval tools such as Lucene, Solr, Elasticsearch, Milvus, Vespa Preferred Qualifications: - Experience scaling distributed ML and AI systems to handle millions of concurrent requests - Experience with ML applications in cross-domain contexts - Experience with public cloud platforms like AWS/GCP - Good understanding of the AI/ML stack including GPUs, MLflow, LLM models - Understanding of data modeling, data warehousing, and ETL concepts - Experience in creating frameworks to deploy platforms in AWS/Azure/GCP - Ability to lead and mentor junior team members, provide technical guidance, and collaborate effectively with multi-functional teams - Commitment to staying updated with the latest advancements in machine learning and data science, and willingness to learn new tools and technologies as needed If you meet these qualifications and are excited about the opportunity to contribute to cutting-edge machine learning projects at Apple, we would love to hear from you. Please submit your CV for consideration.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer - Backend (Python) with 7+ years of experience, you will be responsible for designing and building the backend components of the GenAI Platform in Hyderabad. Your role will involve collaborating with geographically distributed cross-functional teams and participating in an on-call rotation to handle production incidents. The GenAI Platform offers safe, compliant, and cost-efficient access to LLMs, including open-source and commercial ones, while adhering to Experian standards and policies. You will work on building reusable tools, frameworks, and coding patterns for fine-tuning LLMs or developing RAG-based applications. To succeed in this role, you must possess the following skills: - 7+ years of professional backend web development experience with Python - Experience with AI and RAG - Proficiency in DevOps & IaC tools like Terraform, Jenkins - Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow - Expertise in web development frameworks such as Flask, Django, or FastAPI - Knowledge of concurrent programming designs like AsyncIO - Experience with public cloud platforms like AWS, Azure, GCP (preferably AWS) - Understanding of CI/CD practices, tools, and frameworks Additionally, the following skills would be considered nice to have: - Experience with Apache Kafka and developing Kafka client applications in Python - Familiarity with big data processing frameworks, especially Apache Spark - Knowledge of containers (Docker) and container platforms like AWS ECS or AWS EKS - Proficiency in unit and functional testing frameworks - Experience with various Python packaging options such as Wheel, PEX, or Conda - Understanding of metaprogramming techniques in Python Join our team and contribute to the development of cutting-edge technologies in a collaborative and dynamic environment.
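
A minimal sketch of the kind of async Python service this platform work implies, using FastAPI and AsyncIO; the endpoint path and the placeholder generate_answer function are hypothetical, not part of the posting.

```python
# Minimal async FastAPI sketch (endpoint and generate_answer are hypothetical placeholders).
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

async def generate_answer(question: str) -> str:
    # Placeholder for an LLM / RAG call; simulates I/O-bound work.
    await asyncio.sleep(0.1)
    return f"Answer to: {question}"

@app.post("/ask")
async def ask(query: Query):
    answer = await generate_answer(query.question)
    return {"answer": answer}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```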

Posted 2 weeks ago

Apply

14.0 - 18.0 years

0 Lacs

pune, maharashtra

On-site

We are hiring for the position of AVP - Databricks with a minimum of 14 years of experience. The role is based in Bangalore/Hyderabad/NCR/Kolkata/Mumbai/Pune. As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure solutions are designed, developed, and implemented according to client requirements and industry standards. You will act as the subject matter expert on Databricks, providing guidance on architecture, implementation, and optimization to teams. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads is also a key aspect of the role. You will serve as the primary point of contact for clients to ensure alignment between business requirements and technical delivery. The qualifications we seek in you include a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred). You should have relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering. Preferred qualifications/skills for this role include proven experience in leading end-to-end delivery, solution design, and architecture of data engineering or analytics solutions on Databricks. Strong experience in cloud technologies such as AWS, Azure, GCP, data pipelines, and big data tools is desirable. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is a plus. Expertise in data engineering concepts including ETL, data lakes, data warehousing, and distributed computing will be beneficial for this role.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

guwahati, assam

On-site

We are seeking a skilled and forward-thinking Lead Software Engineer specializing in Machine Learning with over 10 years of experience to spearhead the conceptualization, development, and implementation of cutting-edge machine learning solutions. This pivotal role necessitates robust leadership qualities, profound technical acumen, and a demonstrated track record in steering teams towards resolving intricate, large-scale challenges utilizing state-of-the-art ML technologies. In this leadership capacity, you will be responsible for mentoring teams, formulating technical roadmaps, and fostering collaboration across various departments to synchronize machine learning endeavors with business objectives. Your responsibilities will include defining and orchestrating the strategy and trajectory for ML systems and applications, as well as architecting and supervising the construction of adaptable machine learning systems and infrastructure. You will drive the creation and execution of sophisticated ML models and algorithms to tackle complex business issues, collaborate with multifaceted teams to discern ML use cases and prerequisites, and provide guidance to junior and mid-level engineers on optimal practices for ML development and deployment. It will also be imperative to oversee the performance enhancement of machine learning systems in operational settings, ensure adherence to industry standards and best practices in model development, data governance, and MLOps, and spearhead research endeavors to explore emerging ML methodologies and seamlessly integrate them into the organization's solutions. The ideal candidate should hold a Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field, with a Ph.D. being an advantageous asset. Additionally, you should possess a minimum of 10 years of software engineering experience, with at least 5 years dedicated to machine learning, and exhibit proficiency in ML frameworks and libraries like TensorFlow, PyTorch, and scikit-learn. A strong grasp of designing and constructing large-scale, distributed ML systems, advanced knowledge of data engineering tools and frameworks such as Spark, Hadoop, or Kafka, hands-on experience with cloud platforms (AWS, GCP, Azure) for ML workloads, and expertise in deploying and managing ML models in production environments using MLOps tools like MLflow or Kubeflow are essential technical skills that you should bring to the table. Moreover, a deep understanding of algorithms, data structures, system design, containerization (Docker), orchestration (Kubernetes), and exceptional problem-solving capabilities are highly valued attributes for this role. Your soft skills should include robust leadership and decision-making prowess, exceptional problem-solving and analytical thinking, excellent communication aptitude to convey technical concepts to diverse audiences, and the ability to cultivate collaboration and drive innovation across teams. Preferred qualifications include a Master's degree in Computer Science, Information Technology, or a related field, familiarity with advanced techniques like generative AI, reinforcement learning, or federated learning, experience in constructing and managing real-time data processing pipelines, knowledge of data security and privacy best practices, and a track record of publications or patents in the domain of machine learning or artificial intelligence. 
Key Performance Indicators for this role encompass the successful delivery of scalable, high-impact ML solutions in alignment with business objectives, effective mentorship and upskilling of team members, continuous enhancement of ML system performance and reliability, and driving innovation and adoption of emerging ML techniques to sustain a competitive advantage.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

ahmedabad, gujarat

On-site

We are seeking a Senior AI/ML Architect with 10-12 years of cumulative experience in software development and/or data analysis/machine learning to join the team in Ahmedabad. This is a full-time, onsite position suitable for individuals enthusiastic about designing and expanding production and enterprise AI/ML systems. Your responsibilities will include designing scalable AI/ML pipelines for training, inference, and deployment, leading MLOps architecture encompassing model versioning, monitoring, and CI/CD, translating business challenges into robust ML solutions, collaborating with cross-functional teams such as Product, Data, and Engineering, implementing responsible AI frameworks focusing on explainability and fairness, selecting and overseeing the AI/ML tech stack spanning cloud services, models, and tools, participating in brainstorming sessions and technical solutioning alongside the business and enterprise architect team, and managing client stakeholders effectively. Your expertise should cover Python, TensorFlow/PyTorch/Scikit-learn, MLflow, Airflow, Kubernetes, Docker, working with AWS/GCP/Azure for ML workflows, designing Data Lake/Databricks Feature Store solutions, tackling model monitoring and drift, and possessing experience in NLP, CV, or forecasting models. Furthermore, it would be advantageous if you have exposure to LLMs, GPT, RAG, LangChain, expertise in GenAI or enterprise AI/ML implementation, and knowledge of vector databases and hybrid architectures. This is a full-time position with the flexibility of working day, morning, night, or US shifts, requiring your physical presence at the work location.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

vadodara, gujarat

On-site

As a Machine Learning Engineer, you will be responsible for designing and implementing scalable machine learning models throughout the entire lifecycle - from data preprocessing to deployment. Your role will involve leading feature engineering and model optimization efforts to enhance performance and accuracy. Additionally, you will build and manage end-to-end ML pipelines using MLOps practices, ensuring seamless deployment, monitoring, and maintenance of models in production environments. Collaboration with data scientists and product teams will be key in understanding business requirements and translating them into effective ML solutions. You will conduct advanced data analysis, create visualization dashboards for insights, and maintain detailed documentation of models, experiments, and workflows. Moreover, mentoring junior team members on best practices and technical skills will be part of your responsibilities to foster growth within the team. In terms of required skills, you must have at least 3 years of experience in machine learning development, with a focus on the end-to-end model lifecycle. Proficiency in Python using Pandas, NumPy, and scikit-learn for advanced data handling and feature engineering is crucial. Strong hands-on expertise in TensorFlow or PyTorch for deep learning model development is also a must-have. Desirable skills include experience with MLOps tools like MLflow or Kubeflow for model management and deployment, familiarity with big data frameworks such as Spark or Dask, and exposure to cloud ML services like AWS SageMaker or GCP AI Platform. Additionally, working knowledge of Weights & Biases and DVC for experiment tracking and versioning, as well as experience with Ray or BentoML for distributed training and model serving, will be considered advantageous. Join our team and contribute to cutting-edge machine learning projects while continuously improving your skills and expertise in a collaborative and innovative environment.
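
To illustrate the feature engineering and model lifecycle skills listed above, here is a minimal scikit-learn pipeline sketch; the column names, sample data, and model choice are hypothetical.

```python
# Minimal scikit-learn feature-engineering pipeline sketch (synthetic data, hypothetical columns).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 40, 31, 58],
    "plan": ["basic", "pro", "basic", "pro"],
    "churned": [0, 1, 0, 1],
})
X, y = df[["age", "plan"]], df["churned"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),                          # scale numeric features
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),   # encode categoricals
])
pipeline = Pipeline([("prep", preprocess), ("model", GradientBoostingClassifier())])

pipeline.fit(X, y)
print(pipeline.predict(pd.DataFrame({"age": [36], "plan": ["pro"]})))
```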

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Technical Lead / Architect. Location: Pune, India. Employment Type: Full-Time.

You will be the keystone of our engineering organization, defining and evangelizing the vision for scalable, secure, data-driven systems. You’ll partner with product management, DevOps, data engineering, and QA to translate high-level business objectives into robust technical roadmaps, ensuring consistency, reliability, and performance across our platform.

What You’ll Do (Key Responsibilities): Architectural Leadership: Define and document end-to-end solution architectures, spanning cloud infrastructure, microservices, data stores, and integration patterns. Establish and govern architecture standards, design patterns, and best practices. Hands-On Development & Prototyping: Build proofs-of-concept and reference implementations in Java, Python, or Node.js using modern frameworks (Flask, FastAPI, Celery). Optimize data models and queries for both relational (PostgreSQL) and NoSQL (MongoDB) stores, leveraging Redis for high-performance caching. Cloud-Native Excellence: Design and implement containerized applications on Kubernetes; orchestrate deployments with CI/CD pipelines. Author and maintain Infrastructure as Code (Terraform, CloudFormation) to ensure reproducible, secure, cost-efficient environments. Collaborative Mentorship: Act as the primary technical advisor for cross-functional teams; run architecture review boards and technical deep-dives. Coach and upskill engineers on cloud-native development, DevOps best practices, and secure coding principles. Stakeholder Engagement: Translate complex technical concepts into clear, actionable roadmaps for business stakeholders. Partner closely with Product and Operations to balance feature velocity with system stability and cost optimization.

What We’re Looking For (Must Have): 5+ years in a software architecture or technical leadership role, ideally within a cloud-first organization. Deep, demonstrable experience architecting microservice-based systems on AWS, Azure, or GCP; Associate- or Professional-level certification preferred. Proficiency in Java, Python, and Node.js, and frameworks such as Flask, FastAPI, or Celery. Strong track record of designing solutions for both structured (PostgreSQL) and unstructured (MongoDB) data, including data modelling and query optimization. Hands-on expertise with Kubernetes, Docker, and CI/CD toolchains (Jenkins, GitLab CI/CD, CircleCI, etc.). Solid knowledge of Infrastructure as Code (Terraform, CloudFormation) and how to build secure, scalable, repeatable pipelines. Excellent verbal and written communication skills; proven ability to influence at all levels and mentor junior engineers.

Nice to Have (Optional Skills): Experience building MLOps pipelines (e.g., MLflow) and monitoring stacks (ELK, Prometheus/Grafana). Familiarity with GenAI frameworks (LangChain, LlamaIndex), vector databases (Milvus, ChromaDB), and multi-component AI pipelines. Background in event-driven and serverless architectures (e.g., AWS Lambda, Azure Functions). Deep understanding of security, compliance standards (ISO, SOC 2), and cloud cost-optimization strategies.

What We Offer: Cutting-Edge Projects: Work with the latest cloud, data, and AI/ML technologies to solve real-world challenges. Career Growth: Clear advancement paths, mentorship programmes, and budget for certifications & conferences. Inclusive Culture: A diverse, collaborative environment where your ideas are heard and valued.
Competitive Compensation: Attractive salary, performance bonus, stock options, and comprehensive health benefits.
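
As an aside on the Redis caching mentioned in this posting, here is a minimal read-through cache sketch in Python; the key naming, TTL, and stand-in database lookup are assumptions, and a locally running Redis instance is presumed.

```python
# Minimal Redis read-through cache sketch (illustrative key/TTL; assumes Redis on localhost).
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_profile(user_id: int) -> dict:
    key = f"user_profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit

    profile = {"id": user_id, "plan": "pro"}     # stand-in for a PostgreSQL/MongoDB lookup
    cache.setex(key, 300, json.dumps(profile))   # cache the result for 5 minutes
    return profile

print(get_user_profile(42))
```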

Posted 2 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

As an NLP & Generative AI Engineer at Gigaforce, you will be part of our dynamic AI/ML team based in Noida, working on cutting-edge technologies to revolutionize the insurance claims processing industry. We are a California-based InsurTech company with a strong focus on innovation and digital transformation in the Property and Casualty sector. You must have a minimum of 2 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques to be considered for this role. We are looking for individuals who are passionate about deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines efficiently. Your main responsibilities will include building and deploying NLP and GenAI-driven products focusing on document understanding, summarization, classification, and retrieval. You will be designing and implementing models using LLMs such as GPT, T5, and BERT, along with working on scalable, cloud-based pipelines for training, serving, and monitoring models. Collaboration is key at Gigaforce, and you will work closely with cross-functional teams including data scientists, ML engineers, product managers, and developers. Additionally, you will contribute to open-source tools and frameworks in the ML ecosystem, deploy production-ready solutions using MLOps practices, and work on distributed/cloud systems with GPU-accelerated workflows. To be successful in this role, you should have a strong grasp of traditional ML algorithms, NLP fundamentals, and modern NLP & GenAI models. Proficiency in Python, experience with cloud platforms like AWS SageMaker, GCP, or Azure ML, and familiarity with MLOps tools and distributed computing are essential. Experience with document processing pipelines and understanding insurance-related documents is a plus. If you are looking for a challenging role where you can lead the design, development, and deployment of innovative AI/ML solutions in the insurance industry, then this position at Gigaforce is the perfect opportunity for you. Join us in our mission to transform the claims lifecycle into an intelligent, end-to-end digital experience.
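
For illustration of the document summarization and classification work described above, here is a minimal Hugging Face pipelines sketch; the public checkpoints (t5-small, facebook/bart-large-mnli), the sample claim text, and the candidate labels are assumptions, not details from the posting.

```python
# Minimal summarization + zero-shot classification sketch over a synthetic claim note.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim_text = (
    "The insured vehicle was struck from behind at a traffic light on 12 March. "
    "The rear bumper and trunk were damaged; a repair estimate is attached."
)

summary = summarizer(claim_text, max_length=40, min_length=10)[0]["summary_text"]
result = classifier(claim_text, candidate_labels=["collision", "theft", "flood damage"])

print(summary)
print(result["labels"][0], round(result["scores"][0], 3))  # top predicted claim category
```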

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Generative AI Engineer (Hands-on Development Role). Company: Winfo Solutions. Job Type: Full-time. Experience: 6-10 years. Industry: IT Consulting.

About The Role: We are looking for a Generative AI Engineer with strong software engineering skills and hands-on experience in developing AI-powered applications. This role involves coding AI-driven solutions, developing APIs, integrating models into products, and building scalable AI systems for production use.

Key Responsibilities: Develop AI-driven applications by integrating LLMs and generative models into real-world software solutions. Design and build APIs for LLMs and GenAI models (FastAPI, Flask, Django). Write production-ready code to integrate AI features into web applications, chatbots, document processing tools, and recommendation systems. Fine-tune and optimize LLMs (GPT, LLaMA, Gemini, Claude, Mistral) for performance in real-time applications. Develop AI-powered chatbots, document extraction tools, and automation systems using LangChain, Haystack, or Semantic Kernel. Implement vector search using FAISS, ChromaDB, or Pinecone for retrieval-augmented generation (RAG) applications. Work with databases (PostgreSQL, MongoDB, Redis) to manage AI-driven data workflows. Build scalable AI microservices and integrate them into enterprise applications. Collaborate with backend engineers, data engineers, and DevOps teams to ensure seamless deployment of AI models.

Required Skills & Qualifications: Bachelor’s or Master’s in Computer Science, Software Engineering, AI, or a related field. 6+ years of experience in software development, AI engineering, or full-stack AI application development. Strong Python coding skills with experience in frameworks like FastAPI, Flask, Django. Hands-on experience developing APIs for AI models and integrating them into real-world applications. Experience with LLM APIs (OpenAI, Gemini, Claude, Mistral, etc.) and fine-tuning custom models. Strong knowledge of vector databases (FAISS, ChromaDB, Pinecone) for AI-powered search. Experience working with WebSockets, asynchronous programming, and RESTful APIs. Database experience with PostgreSQL, MongoDB, or Redis for AI-driven applications. Experience with MLOps pipelines (MLflow, Airflow, Prefect) for model retraining and monitoring.

Good To Have Skills & Qualifications: Familiarity with containerization (Docker, Kubernetes) and cloud-based AI deployments (AWS, GCP, Azure). Basics of probabilistic models, deep learning techniques, NLP, and embeddings.
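
A minimal sketch of the FAISS-based vector search step in a RAG flow, as referenced above; the embedding dimension, sample documents, and random stand-in vectors are placeholders (real embeddings would come from an embedding model or API).

```python
# Minimal FAISS retrieval sketch for a RAG flow (stand-in random embeddings).
import numpy as np
import faiss

dim = 384  # typical sentence-embedding size (assumption)
documents = ["Invoice processing policy", "Refund handling procedure", "Contract renewal terms"]
doc_vectors = np.random.rand(len(documents), dim).astype("float32")  # stand-in embeddings

index = faiss.IndexFlatL2(dim)   # exact L2 search; swap for IVF/HNSW indexes at scale
index.add(doc_vectors)

query_vector = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query_vector, 2)        # top-2 nearest documents
retrieved = [documents[i] for i in ids[0]]
print(retrieved)  # retrieved context would be passed into the LLM prompt
```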

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Senior Software Engineer at Wells Fargo, you will take the lead in driving complex technology initiatives that have a companywide impact. You will be instrumental in setting standards and best practices for engineering large-scale technology solutions across various technology disciplines. Your responsibilities will include designing, coding, testing, debugging, and documenting projects and programs. Your role will involve reviewing and analyzing complex technology solutions to address tactical and strategic business objectives within the enterprise technological environment. You will be making critical decisions in developing best practices for engineering solutions, leveraging industry standards and new technologies to guide and influence the technology team towards meeting deliverables and driving new initiatives. Collaboration and consultation with technical experts, senior technology team members, and external industry groups will be key to resolving technical challenges and achieving set goals. Moreover, you will lead projects and teams, or serve as a mentor to peers. The job requires a minimum of 5 years of Software Engineering experience or an equivalent combination of work experience, training, military service, or education. Desired qualifications for this role include strong Python programming skills, expertise in RPA tools like UiPath, proficiency in workflow automation tools such as Power Platform, and a minimum of 2 years of hands-on experience in AI/ML and Gen AI. Additionally, experience in LLMs (Gemini, GPT, or Llama), prompt engineering, model fine-tuning, and AI/Gen AI certifications from premier institutions are preferred. Hands-on experience with MLOps tools like MLflow and CI/CD pipelines is also valued. In terms of job expectations, you will be involved in designing and developing AI-driven automation solutions, implementing AI automation to improve process automation, developing and maintaining automation, bots, and AI-based workflows, integrating AI automation with existing applications, APIs, and databases, and building and optimizing prompt engineering workflows. Furthermore, you will fine-tune and integrate pre-trained models for specific use cases and deploy models in production using robust MLOps practices. If you believe you are the ideal candidate for this role, we encourage you to apply before the job posting ends on 23rd July 2025. Wells Fargo is committed to providing equal opportunities, and we welcome applications from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. If you require any accommodations during the recruitment process due to a disability, please reach out to Disability Inclusion at Wells Fargo.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Description: WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description: Minimum experience - 5-8 years. Location - PAN India. Engagement & Project Overview: An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models. They can ensure that your models are accurate, efficient, and tailored to your specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities. Job Description: Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs. Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability. Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing. Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation. Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments. Qualifications: 5+ years of experience in the respective field. Proven experience in developing and fine-tuning machine learning models. Strong background in quality assurance and model testing. Ability to streamline processes and scale AI initiatives. Innovative mindset with a keen understanding of industry trends. License/Certification/Registration
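
For a concrete sense of the drift detection behind the monitoring responsibilities above, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test; dedicated tools such as Evidently AI wrap this kind of test with reporting, and the feature values and threshold here are synthetic assumptions.

```python
# Minimal data-drift check sketch (synthetic feature values; hypothetical threshold).
import numpy as np
from scipy.stats import ks_2samp

reference = np.random.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature values
production = np.random.normal(loc=0.4, scale=1.0, size=5_000)  # recent production values

statistic, p_value = ks_2samp(reference, production)
DRIFT_P_VALUE_THRESHOLD = 0.05  # assumption; tuned per feature and business tolerance

if p_value < DRIFT_P_VALUE_THRESHOLD:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}) -> trigger retraining/alert")
else:
    print("No significant drift detected")
```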

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

ML Engineer. Location: India. Job Type: Full-time, Remote. Experience Level: Mid-Level. About The Role: We are looking for an MLOps Engineer to help scale, deploy, and maintain machine learning (ML) and Generative AI (GenAI) based solutions. In this role, you will work with cloud-based ML infrastructure to optimize model training, deployment, and monitoring. The ideal candidate has hands-on experience in MLOps, model deployment, and cloud scaling (Azure or Oracle stack). While experience with GenAI is a plus, we are looking for someone with a strong learning mindset, a willingness to experiment, and the ability to adapt quickly to new challenges. Key Responsibilities: Develop and scale MLOps pipelines to support ML and GenAI based solutions. Automate model retraining, deployment, and monitoring to ensure seamless operation in production. Optimize compute, storage, and networking for AI workloads to improve efficiency. Work with cloud ML platforms such as Azure ML and/or OCI AI Services, or equivalent infrastructure. Implement observability and performance tracking to monitor deployed models. Explore and experiment with GenAI scaling strategies for better efficiency and performance. Requirements: Minimum experience: 3 years. Preferred education: Bachelor's or Master's degree in Data Science/Computer Science. Proficiency in Python for ML development and automation. Hands-on experience with the Azure or Oracle Cloud stack. Proven ability to deploy and scale ML models effectively. Strong knowledge of MLOps tools such as MLflow, Kubeflow, CI/CD, Docker, and Kubernetes. Experience with monitoring and optimizing ML pipelines to ensure performance and reliability. Excellent communication skills and the ability to thrive in a fast-paced, collaborative environment. What We Offer: 🌟 Career Growth: Opportunity to grow within a rapidly expanding organization. 🌟 Cutting-Edge Technology: Work with a Microsoft Gold Partner organization and the latest technologies. 🌟 People-First Culture: Be part of a company that values its employees. 🌟 Comprehensive Insurance Coverage: Company-paid Group Mediclaim Insurance for employees, spouses, and up to 2 kids (INR 4,00,000 per annum). Company-paid Group Personal Accidental Insurance (INR 10,00,000 per annum). 🌟 Career Development: Company-paid and manager-approved career advancement opportunities. 🌟 Referral Bonus: Best-in-the-industry referral bonus policy. 🌟 Work-Life Balance: 29 paid leave days per year. 🌟 Maternity Benefits: Company-paid maternity leave. About The Company: We are a global team of innovators and advocates transforming how financial data is captured, stored, and manipulated with our comprehensive suite of automation technology. Our platform seamlessly integrates with your existing ERP for an unrivaled end-user experience. We do the heavy lifting so accounting, procurement, and fundraising teams can do their best work. PairSoft aspires to be the strongest procure-to-pay platform for the mid-market and enterprise, with close integration to Microsoft Dynamics, Blackbaud, Oracle, SAP, Acumatica, and Sage ERPs. At PairSoft, we are passionate about innovation, transparency, diversity, and advocating on behalf of our customers and the communities we support. We offer exciting career opportunities and a collaborative culture that allows individuals to learn, grow, and create meaningful impact. We are expanding and seeking team players who are eager to jump in and contribute to our rapid growth! PairSoft is proud to be an equal opportunity workplace.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status or any other protected status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please email us at: careers@pairsoft.com.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer. In this role, you will: Lead moderately complex initiatives and deliverables within technical domain environments. Contribute to large-scale planning of strategies. Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments. Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures. Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements. Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals. Lead projects and act as an escalation point, providing guidance and direction to less experienced staff. Required Qualifications: 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education. Desired Qualifications: Strong Python programming skills. Expertise in RPA tools such as UiPath. Expertise in workflow automation tools such as Power Platform. Minimum 2 years of hands-on experience in AI/ML and Gen AI. Proven experience with LLMs (Gemini, GPT, or Llama, etc.). Extensive experience in prompt engineering and model fine-tuning. AI/Gen AI certifications from premier institutions. Hands-on experience in MLOps (MLflow, CI/CD pipelines). Job Expectations: Design and develop AI-driven automation solutions. Implement AI automation to enhance process automation. Develop and maintain automation, bots, and AI-based workflows. Integrate AI automation with existing applications, APIs, and databases. Design, develop, and implement Gen AI applications using LLMs. Build and optimize prompt engineering workflows. Fine-tune and integrate pre-trained models for specific use cases. Deploy models in production using robust MLOps practices.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer. In this role, you will: Lead moderately complex initiatives and deliverables within technical domain environments. Contribute to large-scale planning of strategies. Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments. Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures. Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements. Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals. Lead projects and act as an escalation point, providing guidance and direction to less experienced staff. Required Qualifications: 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education. Desired Qualifications: Strong Python programming skills. Expertise in RPA tools such as UiPath. Expertise in workflow automation tools such as Power Platform. Minimum 2 years of hands-on experience in AI/ML and Gen AI. Proven experience with LLMs (Gemini, GPT, or Llama, etc.). Extensive experience in prompt engineering and model fine-tuning. AI/Gen AI certifications from premier institutions. Hands-on experience in MLOps (MLflow, CI/CD pipelines). Job Expectations: Design and develop AI-driven automation solutions. Implement AI automation to enhance process automation. Develop and maintain automation, bots, and AI-based workflows. Integrate AI automation with existing applications, APIs, and databases. Design, develop, and implement Gen AI applications using LLMs. Build and optimize prompt engineering workflows. Fine-tune and integrate pre-trained models for specific use cases. Deploy models in production using robust MLOps practices.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer. In this role, you will: Lead moderately complex initiatives and deliverables within technical domain environments. Contribute to large-scale planning of strategies. Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments. Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures. Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements. Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals. Lead projects and act as an escalation point, providing guidance and direction to less experienced staff. Required Qualifications: 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education. Desired Qualifications: Strong Python programming skills. Expertise in RPA tools such as UiPath. Expertise in workflow automation tools such as Power Platform. Minimum 2 years of hands-on experience in AI/ML and Gen AI. Proven experience with LLMs (Gemini, GPT, or Llama, etc.). Extensive experience in prompt engineering and model fine-tuning. AI/Gen AI certifications from premier institutions. Hands-on experience in MLOps (MLflow, CI/CD pipelines). Job Expectations: Design and develop AI-driven automation solutions. Implement AI automation to enhance process automation. Develop and maintain automation, bots, and AI-based workflows. Integrate AI automation with existing applications, APIs, and databases. Design, develop, and implement Gen AI applications using LLMs. Build and optimize prompt engineering workflows. Fine-tune and integrate pre-trained models for specific use cases. Deploy models in production using robust MLOps practices. Job posting end date: 23 Jul 2025. To request a medical accommodation during the application or interview process, refer to the original posting on the Wells Fargo careers site. Wells Fargo maintains a drug-free workplace.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 32 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Major Tasks of Position: Design, develop, and maintain UI applications, and deploy/manage machine learning models. Design, build, and maintain Python and Flask applications using AWS and/or Azure cloud services. Implement standards for data ingestion, storage, and processing to support analytics and machine learning workflows. Understand graph theory and network analysis, including familiarity with libraries such as NetworkX, igraph, or similar. Implement and manage CI/CD pipelines for automated testing, deployment, and monitoring of machine learning models. Collaborate with data scientists, machine learning engineers, and software developers to operationalize machine learning models. Design and maintain infrastructure for automated deployment and scaling. Ensure compliance with security, privacy, and data governance requirements. Qualifications & Competencies: Bachelor's degree in computer science, engineering, or a related field, or equivalent practical experience, with at least 8-10 years of combined experience as a Python and MLOps Engineer or in similar roles. Strong programming skills in Python. Proficiency with AWS and/or Azure cloud platforms, including services such as EC2, S3, Lambda, SageMaker, Azure ML, etc. Solid understanding of API programming and integration. Hands-on experience with CI/CD pipelines, version control systems (e.g., Git), and code repositories. Knowledge of containerization using Docker, Kubernetes, and orchestration tools. Proficiency in creating data visualizations, specifically for graphs and networks, using tools like Matplotlib, Seaborn, or Plotly. Understanding of data manipulation and analysis using libraries such as Pandas and NumPy. Problem-solving, analytical expertise, and troubleshooting abilities with attention to detail.
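
To illustrate the graph analysis and visualization skills listed above, here is a minimal NetworkX sketch; the synthetic graph and output path are placeholders.

```python
# Minimal NetworkX sketch: build a synthetic graph, rank nodes by centrality, plot it.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.erdos_renyi_graph(n=30, p=0.1, seed=42)   # synthetic random network as a stand-in

centrality = nx.degree_centrality(G)
top_nodes = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("Most connected nodes:", top_nodes)

nx.draw_spring(G, node_size=80, with_labels=False)  # spring layout via Matplotlib
plt.savefig("network.png")                           # hypothetical output path
```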

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Greater Hyderabad Area

Remote

Job Title: AI/ML Engineer / Data Scientist (Databricks focus). Experience: 8+ years. Work type: Remote (India). Key Responsibilities: • Develop, deploy, and maintain scalable MLOps pipelines for both traditional ML and Generative AI use cases leveraging Databricks (Unity Catalog, Delta Tables, Inference Tables, Mosaic AI). • Operationalize large language models (LLMs) and other GenAI models, ensuring efficient prompt engineering, fine-tuning, and serving. • Implement model tracking, versioning, and experiment management using MLflow. • Build robust CI/CD pipelines for ML and GenAI workloads to automate testing, validation, and deployment to production. • Use Vertex AI to manage training, deployment, and monitoring of ML and GenAI models in the cloud. • Integrate high-quality, governed data pipelines that enable ML and Generative AI solutions with strong lineage and reproducibility. • Design and enforce AI Governance frameworks covering model explainability, bias monitoring, data access, compliance, and audit trails. • Collaborate with data scientists and GenAI teams to productionize prototypes and research into reliable, scalable products. • Monitor model performance, usage, and drift, including specific considerations for GenAI systems such as hallucination checks, prompt/response monitoring, and user feedback loops. • Stay current with best practices and emerging trends in MLOps and Generative AI. Key Qualifications: Must Have Skills: • 3+ years of experience in MLOps, ML Engineering, or a related field. • Hands-on experience with operationalizing ML and Generative AI models in production. • Proficiency with Databricks (Unity Catalog, Delta Tables, Mosaic AI, Inference Tables). • Experience with MLflow for model tracking, registry, and reproducibility. • Strong understanding of Vertex AI pipelines and deployment services. • Expertise in CI/CD pipelines for ML and GenAI workloads (e.g., GitHub Actions, Azure DevOps, Jenkins). • Proven experience in integrating and managing data pipelines for AI, ensuring data quality, versioning, and lineage. • Solid understanding of AI Governance, model explainability, and responsible AI practices. • Proficiency in Python, SQL, and distributed computing frameworks. • Excellent communication and collaboration skills. Nice to Have: • Experience deploying and monitoring Large Language Models (LLMs) and prompt-driven AI workflows. • Familiarity with vector databases, embeddings, and retrieval-augmented generation (RAG) architectures. • Infrastructure-as-Code experience (Terraform, CloudFormation). • Experience working in regulated industries (e.g., finance, retail) with compliance-heavy AI use cases.
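
As an illustration of the MLflow tracking and registry workflow this role references, here is a minimal local sketch; the model name, alias, and SQLite-backed store are assumptions (on Databricks the registry would typically live in Unity Catalog instead).

```python
# Minimal MLflow registry sketch: log a model, register it, and tag the version with an alias.
import mlflow
import mlflow.sklearn
from mlflow import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("sqlite:///mlflow.db")  # registry needs a DB-backed store (assumption)

X, y = make_classification(n_samples=200, random_state=0)
with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.sklearn.log_model(model, "model")

# Register the logged model under a named entry and point an alias at the new version,
# so serving jobs can always load e.g. "models:/demand_forecast@champion".
result = mlflow.register_model(f"runs:/{run.info.run_id}/model", "demand_forecast")
client = MlflowClient()
client.set_registered_model_alias("demand_forecast", "champion", result.version)
```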

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

At Amaris Consulting, we’re on the lookout for bold, versatile, and forward-thinking individuals to join our Data & AI Center of Excellence as Data Consultants. Whether your strength lies in analytics, engineering, or machine learning—your expertise belongs here. What does it mean to be a Data Consultant at Amaris? As a Data Consultant, you’ll be at the heart of strategic and technical projects for top-tier organizations. From building scalable data pipelines to deploying cutting-edge ML models, your work will directly shape how clients turn raw data into real-world impact. You'll collaborate across teams, industries, and geographies—delivering solutions that matter. Who we’re looking for: Data Engineer You don’t just work with data—you build the engines that power data-driven products. You’re fluent in Python and SQL, and you know how to architect clean, scalable pipelines that deliver results. You’ve worked with AI-enabled solutions, integrating pre-trained models, embeddings, and computer vision into production environments. You love solving problems, thrive in fast-paced product teams, and feel right at home in client-facing settings and global, cross-functional collaborations. 🔥 What You’ll Do as a Data Consultant: Work in cross-functional teams with engineers, scientists, analysts, and project managers Build and optimize data pipelines for AI and product development use cases Collaborate with AI teams to operationalize models, including vision and NLP-based pre-trained systems Participate in client discussions to translate technical needs into valuable solutions Ensure code quality and scalability using best practices in Python and SQL Shape and implement technical solutions across cloud, hybrid, or on-prem environments Support product development initiatives by embedding data capabilities into features Contribute to internal R&D and knowledge-sharing efforts within the CoE Our Environment & Tech Stack: We’re tech-agnostic and pragmatic: we adapt our stack to each client’s needs. Some of the most used technologies include: Languages: Python, SQL AI & ML: Pre-trained models, embedding models, computer vision frameworks Cloud platforms: Azure, AWS, GCP Orchestration & Transformation: Airflow, dbt, Kedro Big Data & Storage: Spark, Databricks, Snowflake MLOps & DevOps: MLflow, Docker, Git, CI/CD pipelines Product & API Development: REST APIs, microservices (bonus) 🎯 Your Profile: 4–5 years of experience as a Data Engineer Excellent skills in Python and SQL Experience with pre-trained models, embeddings, and computer vision Exposure to product development and AI integration in live environments Comfortable in client-facing roles and interacting with international teams Strong communicator with the ability to explain complex topics to both technical and business audiences Fluent in English; additional languages are a plus Autonomous, proactive, and a continuous learner 🚀 Why Join our Data & AI Center of Excellence? 
Work with major clients across Europe and globally on impactful projects Join a community of 600+ data professionals in our Center of Excellence Access continuous upskilling, tech exchanges, and mentorship opportunities Grow into technical leadership, architecture, or specialized AI domains 💡 We are an independent company that values: Agility – thrive in a flexible, dynamic, and stimulating environment International scope – engage in daily cross-border collaboration and mobility in 60+ countries Intrapreneurship – contribute to transversal topics or launch your own initiatives Attentive management – benefit from personalized support and career development Amaris Consulting is proud to be an equal-opportunity workplace. We are committed to promoting diversity and creating an inclusive work environment. We welcome applications from all qualified individuals, regardless of gender, orientation, background, or ability.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

KLA Overview: KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies. No laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards and flat panel displays. • Design, implement, and maintain scalable and reliable machine learning infrastructure. • Collaborate with data scientists and machine learning engineers to deploy and manage machine learning models in production. • Develop and maintain CI/CD pipelines for machine learning workflows. • Monitor and optimize the performance of machine learning systems and infrastructure. • Implement and manage automated testing and validation processes for machine learning models. • Ensure the security and compliance of machine learning systems and data. • Troubleshoot and resolve issues related to machine learning infrastructure and workflows. • Document processes, procedures, and best practices for machine learning operations. • Stay up-to-date with the latest developments in MLOps (ClearML, Kubeflow, MLflow) and related technologies. Qualifications: Required: • Bachelor's degree in Computer Science, Engineering, or a related field. • Proven experience as a Site Reliability Engineer (SRE) or in a similar role. • Strong knowledge of machine learning concepts and workflows. • Proficiency in programming languages such as Python, Java, or Go. • Experience with cloud platforms such as AWS, Azure, or Google Cloud. • Familiarity with containerization technologies like Docker and Kubernetes. • Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI. • Strong problem-solving skills and the ability to troubleshoot complex issues. • Excellent communication and collaboration skills. If interested, please share your updated resume via kavitha.jayakumar@kla.com

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps/MLOps Expert
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems (see the MLflow sketch after this listing)
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/Shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with Government & MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.
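
The responsibilities above call for experiment tracking and a model registry. Below is a minimal, hedged sketch of that workflow with MLflow; the tracking URI, experiment name, registered model name, and the toy RandomForest model are illustrative assumptions rather than details from the posting.

# Minimal MLflow experiment-tracking and model-registry sketch.
# Assumes an MLflow tracking server is reachable; URI and names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")  # assumption: local tracking server
mlflow.set_experiment("enterprise-churn-model")   # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Track parameters and metrics so runs stay comparable across experiments.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Log the model artifact and register a new version in the model registry.
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # hypothetical registry entry
    )

Registering the model at log time is what later lets a deployment pipeline promote a specific version through the registry rather than redeploying from ad-hoc artifacts.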
How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

PwC AC is hiring for a Data Scientist. Apply and get a chance to work with one of the Big 4 companies, #PwC AC.

Job Title: Data Scientist
Years of Experience: 3-7 years
Shift Timings: 11 AM - 8 PM
Qualification: Graduate and above (Full time)

About PwC CTIO – AI Engineering
PwC's Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.

Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team's innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments (see the FastAPI sketch after this listing).
Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar. Also develop applications in Streamlit/Chainlit for demo purposes.
Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
Fine-tune and pre-train LLMs (HuggingFace and similar libraries) to align with business objectives.
Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team's work.

Required Skills & Experience
3 to 7 years of experience in Data Science/ML/AI roles.
Bachelor's degree in Computer Science, Engineering, or equivalent technical discipline (BE/BTech/MCA).
Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
Experience with Agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
Strong knowledge of NLP techniques, transformer architectures, and text analysis.
Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture. (Need not develop from scratch.)
Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
Exposure to deploying AI models via APIs and integration into larger data ecosystems.
Strong understanding of model operationalization and lifecycle management.

Good to Have
Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
Experience in full-stack AI applications, including visualization (e.g., PowerBI, D3.js).
Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.

Soft Skills & Team Expectations
Strong written and verbal communication; able to explain complex models to business stakeholders.
Ability to independently document work, manage requirements, and self-drive technical discovery.
Desire to innovate, improve, and automate existing processes and solutions.
Active contributor to team knowledge sharing, technical forums, and innovation drives.
Strong interpersonal skills to build relationships across cross-functional teams.
A mindset of continuous learning and technical curiosity.

Preferred Certifications (at least two are preferred)
Certifications in Machine Learning, Deep Learning, or Natural Language Processing.
Python programming certifications (e.g., PCEP/PCAP).
Cloud certifications (Azure/AWS/GCP) such as Azure AI Engineer, AWS ML Specialty, etc.

Why Join PwC CTIO?
Be part of a mission-driven AI innovation team tackling industry-wide transformation challenges.
Gain exposure to bleeding-edge GenAI research, rapid prototyping, and product development.
Contribute to a diverse portfolio of AI solutions spanning pharma, finance, and core business domains.
Operate in a startup-like environment within the safety and structure of a global enterprise.
Accelerate your career as a deep tech leader in an AI-first future.
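
The pipeline responsibility above mentions operationalizing models behind Flask/FastAPI services. Here is a minimal FastAPI sketch of that pattern, assuming a pre-trained scikit-learn model serialized as model.pkl; the file name, request schema, and endpoint path are illustrative, not part of the role description.

# Minimal FastAPI inference-service sketch; model path and schema are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="scoring-service")
model = joblib.load("model.pkl")  # hypothetical pre-trained model artifact

class Features(BaseModel):
    values: list[float]  # flat feature vector; real schemas are usually richer

@app.post("/predict")
def predict(features: Features) -> dict:
    # Reshape the vector into a single-row batch and return the model's prediction.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)

Keeping the service this thin makes it easy to wrap in a Docker image and put behind whatever CI/CD and monitoring stack the team already uses.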

Posted 2 weeks ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving them when they arise.
6. Ensuring Quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project Management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets (a minimal Airflow sketch follows this listing).
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and Cloud Native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required
Technical Architect with 7-12 years of experience

Salary
22-25 LPA

Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Work Location: In person
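
Item 9 in the skills list above references ETL orchestration with tools such as Airflow. The following is a minimal Airflow 2.x DAG sketch of a daily extract-transform-load flow; the DAG id, schedule, and task bodies are placeholders rather than a prescribed design.

# Minimal Airflow DAG sketch (Airflow 2.x); task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and aggregate the extracted data")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Chain the tasks so each step waits for the previous one to succeed.
    t_extract >> t_transform >> t_load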

Posted 2 weeks ago

Apply

8.0 - 10.0 years

2 - 5 Lacs

Gurgaon

On-site

G+D makes the lives of billions of people around the world more secure. We create trust in the digital age with integrated security technologies in three business areas: Digital Security, Financial Platforms and Currency Technology. We have been a reliable partner for our customers for over 170 years with our innovative solutions for SecurityTech!

We are an international technology group and traditional family business with over 14,000 employees in 40 countries. Creating Confidence is our path to success. Trust is the basis of our co-operation within G+D.

The whole world trusts us when it comes to physical or digital currencies. We increase the security and efficiency of the cash cycle in collaboration with central banks and the entire currency industry. As the market leader in advanced currency management, would you like to join us in shaping the future of payments?

Job Title: Senior MLOps & AI Developer

Job Description
We are seeking a highly accomplished Senior MLOps & AI Developer who combines a strong development background with deep experience in designing and managing AI/ML infrastructure. In this role, you will develop, deploy, and maintain scalable AI solutions focused on high accuracy. Your expertise will not only drive robust MLOps pipelines and hybrid cloud solutions but also extend to full-stack development, ensuring an end-to-end AI application experience. This includes handling database solutions, optimizing data workflows, and developing intuitive front-end applications using Angular and related technologies.

Key Responsibilities
AI/ML Infrastructure Development: Plan, build, and continuously enhance infrastructure for model training, deployment, and live monitoring, ensuring scalability, high availability, and robust security (a minimal drift-monitoring sketch follows this section).
Design End-to-End MLOps Pipelines: Develop and optimize CI/CD pipelines that automate the process of building, testing, and deploying AI models, emphasizing cost-efficiency, performance, and seamless integration.
Scalable AI Solutions with High Accuracy: Engineer and implement AI solutions that focus on driving higher model accuracy through advanced techniques such as chunking and embedding. Work with multiple large language models (LLMs) to create optimal solutions.
Database Management & Indexing: Set up and manage various data storage systems including indexers, PostgreSQL, MySQL, and other non-SQL databases to support dynamic and scalable AI workflows.
Hybrid Cloud & On-Premise Integration: Manage and optimize resources across major cloud platforms (with Azure as the primary focus) along with on-premise systems, addressing the unique challenges of hybrid environments.
Full-Stack Development for AI Applications: Develop responsive, user-friendly front-end applications using Angular and related technologies, ensuring a seamless end-to-end AI application experience that complements the back-end infrastructure.
Collaboration Across Teams: Work closely with AI/ML developers, data scientists, and DevOps teams, driving innovation and troubleshooting challenges to streamline development, deployment, and performance optimization.
Security & Performance Optimization: Implement best practices in networking, security protocols, and performance tuning to maintain a secure and efficient operational environment.
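
The infrastructure responsibility above includes live monitoring of deployed models. One common building block is a Population Stability Index (PSI) check that compares a live feature distribution against the training distribution; the sketch below uses only numpy, and the bin count, sample data, and 0.2 alert threshold are illustrative assumptions.

# Minimal PSI (Population Stability Index) drift-check sketch using numpy only.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin edges come from the reference (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

reference = np.random.normal(0.0, 1.0, size=10_000)  # stand-in for training data
live = np.random.normal(0.3, 1.1, size=2_000)        # stand-in for recent traffic

psi = population_stability_index(reference, live)
print(f"PSI = {psi:.3f}")  # values above roughly 0.2 are a common rule-of-thumb drift alert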
Required Skills & Qualifications
Proven Development Expertise: 8-10 years of hands-on experience in software development, with strong proficiency in programming (e.g., Python) and experience in agile software development practices such as version control, code reviews, and testing.
MLOps & Infrastructure Proficiency: Demonstrable expertise using tools like Kubernetes, Docker, Terraform, and managing cloud resources (with a strong command of Azure). Experience with hybrid cloud architectures and effectively handling on-premise environments is essential.
AI & ML Integration: Deep understanding of AI/ML workflows including model training, deployment, and performance tuning. Familiarity with advanced techniques like chunking and embedding, and proven capability in integrating multiple LLM models to drive higher accuracy (a short chunking sketch follows this listing).
Database & Indexer Expertise: Experience in setting up and managing relational databases such as PostgreSQL and MySQL, as well as non-SQL databases. Ability to design and implement efficient indexing solutions conducive to AI data workloads.
Front-End Development Skills: Proficiency in Angular and related web development technologies (e.g., TypeScript, HTML, CSS) to create integrated, end-to-end AI application interfaces.
Technical Acumen: Strong background in networking, security protocols, and system performance optimization ensuring robust, secure, and high-performing AI/ML systems.

Preferred Qualifications
Hybrid Systems Management: Experience managing complex infrastructures that span on-premise and cloud-based resources, demonstrating a clear grasp of the nuances underlying hybrid environments.
GPU-Based and Advanced AI Deployment: Prior exposure to GPU-based processing and AI model deployment strategies, leveraging advanced hardware capabilities for optimized training and inference.
Machine Learning Frameworks: Familiarity with leading machine learning frameworks (e.g., TensorFlow, PyTorch, MLflow) to better collaborate with data science teams and enhance AI solution development.

A look behind the scenes
Contact: Arvina Mehta, arvina.mehta@gi-de.com

JOB OFFER
Job Details
Job Title: Senior MLOps & AI Developer
Business Sector: Giesecke & Devrient India Private Limited, Plot No. 02, EHTP Sector - 34, Gurugram – 122001
Requisition ID: 25718
Location: Gurugram, Haryana, IN
Career level: Experienced and Graduates
Job Type: Full-time, Permanent
Contact: Arvina Mehta, arvina.mehta@gi-de.com

We are an equal opportunity employer! We promote diversity in all its forms and create an inclusive work environment, free from prejudice, discrimination and harassment, in which all employees feel a sense of belonging. We warmly welcome all applications regardless of gender, age, race or ethnic origin, social and cultural background, religion, disability and sexual orientation.
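
The AI & ML Integration requirement above cites chunking and embedding as levers for higher LLM accuracy. Below is a minimal, dependency-free sketch of overlapping text chunking; the chunk size and overlap are illustrative, and embedding the resulting chunks would be a separate step using whatever model the team standardizes on.

# Minimal overlapping-chunk splitter sketch; sizes are illustrative, not prescribed.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks

document = "..." * 1000  # stand-in for a long source document
pieces = chunk_text(document, chunk_size=500, overlap=100)
print(len(pieces), "chunks; first chunk length:", len(pieces[0]))

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, which is the usual reason this pattern improves retrieval quality.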

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies