4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a skilled MLOps Support Engineer, you will be responsible for monitoring and managing ML model operational pipelines in AzureML and MLflow. Your primary focus will be on automation, integration validation, and CI/CD pipeline management to ensure stability and reliability in model deployment lifecycles.

Your objectives in this role include supporting and monitoring MLOps pipelines in AzureML and MLflow, managing CI/CD pipelines for model deployment and updates, handling model registry processes, performing testing and validation of integrated endpoints, automating monitoring and upkeep of ML pipelines, and troubleshooting and resolving pipeline and integration-related issues.

In your day-to-day responsibilities, you will support production ML pipelines using AzureML and MLflow, configure and manage model versioning and the registry lifecycle, automate alerts, monitoring tasks, and routine pipeline operations, validate REST API endpoints for ML models, implement CI/CD workflows for ML deployments, document and troubleshoot operational issues related to ML services, and collaborate with data scientists and platform teams to ensure delivery continuity.

To excel in this role, you should be proficient in AzureML, MLflow, and Databricks, have a strong command of Python, experience with the Azure CLI and scripting, a good understanding of CI/CD practices in MLOps, knowledge of model registry management and deployment validation, and at least 3-5 years of relevant experience in MLOps environments. While not mandatory, exposure to monitoring tools such as Azure Monitor and Prometheus, experience with REST API testing tools such as Postman, and familiarity with Docker/Kubernetes in ML deployments would be beneficial.
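As a rough illustration of the model versioning and registry work this role describes, the sketch below uses the MLflow client API to promote the latest registered version of a model; the model name, target stage, and tracking configuration are assumptions for the example, not details taken from the posting.

```python
# Minimal sketch of registry housekeeping with the MLflow client API.
# Assumes a tracking/registry URI is configured in the environment and that
# a model named "churn-classifier" (hypothetical) is already registered.
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Find the newest registered version and move it to the Staging stage.
versions = client.search_model_versions("name='churn-classifier'")
latest = max(versions, key=lambda v: int(v.version))
client.transition_model_version_stage(
    name="churn-classifier",
    version=latest.version,
    stage="Staging",
)
print(f"Promoted churn-classifier version {latest.version} to Staging")
```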
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of our team at Proximity, you will take on the role of both a hands-on tech lead and product manager. Your primary responsibility will be to deliver data/ML platforms and pipelines within a Databricks-Azure environment. In this capacity, you will lead a small delivery team and collaborate with enabling teams to drive product, architecture, and data science initiatives. Your ability to translate business requirements into product strategy and technical delivery with a platform-first mindset will be crucial to our success.

To excel in this role, you should possess technical proficiency in Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, and Azure. Additionally, expertise in Agile delivery, discovery cycles, outcome-focused planning, and trunk-based development will be advantageous. You should also be adept at collaborating with engineers, working across cross-functional teams, and fostering self-service platforms. Clear communication skills will be key in articulating decisions, roadmap, and priorities effectively.

Joining our team comes with a host of benefits. You will have the opportunity to engage in Proximity Talks, where you can interact with fellow designers, engineers, and product enthusiasts, and gain insights from industry experts. Working alongside our world-class team will provide you with continuous learning opportunities, allowing you to challenge yourself and acquire new knowledge on a daily basis.

Proximity is a leading technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally, with headquarters in San Francisco and additional offices in Palo Alto, Dubai, Mumbai, and Bangalore. We have a track record of creating high-impact, scalable products used by 370 million daily users, and since our inception in 2019 the collective net worth of our client companies has grown to $45.7 billion.

At Proximity, we are a diverse team of coders, designers, product managers, and experts dedicated to solving complex problems and developing cutting-edge technology at scale. As our team of Proxonauts continues to expand rapidly, your contributions will play a significant role in the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams.

To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and design wing at Studio Proximity, and gain behind-the-scenes access through our Instagram accounts @ProxWrks and @H.Jagda.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing architectures for meta-learning, self-reflective agents, and recursive optimization loops. Your role will involve building simulation frameworks for behavior grounded in Bayesian dynamics, attractor theory, and teleo-dynamics. Additionally, you will develop systems that integrate graph rewriting, knowledge representation, and neurosymbolic reasoning. Conducting research on fractal intelligence structures, swarm-based agent coordination, and autopoietic systems will be part of your responsibilities. You are expected to advance Mobius's knowledge graph with ontologies supporting logic, agency, and emergent semantics. Integration of logic into distributed, policy-scoped decision graphs aligned with business and ethical constraints is crucial. Furthermore, publishing cutting-edge results and mentoring contributors in reflective system design and emergent AI theory will be part of your duties. Lastly, building scalable simulations of multi-agent, goal-directed, and adaptive ecosystems within the Mobius runtime is an essential aspect of the role.

In terms of qualifications, you should have proven expertise in meta-learning, recursive architectures, and AI safety. Proficiency in distributed systems, multi-agent environments, and decentralized coordination is necessary. Strong implementation skills in Python are required, with additional proficiency in C++, functional, or symbolic languages being a plus. A publication record in areas intersecting AI research, complexity science, and/or emergent systems is also desired.

Preferred qualifications include experience with neurosymbolic architectures and hybrid AI systems, fractal modeling, attractor theory, complex adaptive dynamics, topos theory, category theory, logic-based semantics, knowledge ontologies, OWL/RDF, semantic reasoners, autopoiesis, teleo-dynamics, biologically inspired system design, swarm intelligence, self-organizing behavior, emergent coordination, and distributed learning systems.

In terms of technical proficiency, you should be proficient in programming languages such as Python (required) and C++, Haskell, Lisp, or Prolog (preferred for symbolic reasoning); frameworks like PyTorch and TensorFlow; distributed systems including Ray, Apache Spark, Dask, and Kubernetes; knowledge technologies like Neo4j, RDF, OWL, and SPARQL; experiment management tools like MLflow and Weights & Biases; and GPU and HPC systems like CUDA, NCCL, and Slurm. Familiarity with formal modeling tools like Z3, TLA+, Coq, and Isabelle is also beneficial.

Your core research domains will include recursive self-improvement and introspective AI; graph theory, graph rewriting, and knowledge graphs; neurosymbolic systems and ontological reasoning; fractal intelligence and dynamic attractor-based learning; Bayesian reasoning under uncertainty and cognitive dynamics; swarm intelligence and decentralized consensus modeling; topos theory and the abstract structure of logic spaces; autopoietic, self-sustaining system architectures; and teleo-dynamics and goal-driven adaptation in complex systems.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Scientist at Objectways located in Chennai, you will have the opportunity to be part of a team that is driving AI innovation and solving real-world problems by leveraging cutting-edge machine learning and reasoning technologies. Our projects are ambitious and diverse, ranging from agent trajectory prediction to complex reasoning systems, multimodal intelligence, and preference-based learning. We are looking for a talented individual like you who is eager to explore the boundaries of applied AI.

In this role, you will be responsible for designing and developing machine learning models for both structured and unstructured data. You will work on agent trajectory prediction, complex reasoning, preference ranking, and reinforcement learning. Additionally, you will handle multimodal datasets and develop reasoning pipelines across text, image, and audio modalities. Your responsibilities will also include validating and optimizing prompts for large language model performance and translating research into scalable, production-level implementations. Collaboration with cross-functional teams such as Engineering, Product, and Research is essential to ensure the success of our projects.

To qualify for this position, you should have a minimum of 4 years of hands-on experience in Data Science or Machine Learning roles. Proficiency in Python, PyTorch/TensorFlow, scikit-learn, and ML lifecycle tools is required. You should also demonstrate expertise in at least one of the following areas: trajectory modeling, preference ranking, or multimodal systems. Experience with LLM prompt engineering, complex reasoning algorithms, graph-based methods, and causal inference is highly beneficial. Strong problem-solving, analytical thinking, and communication skills are essential for success in this role.

Preferred skills for this position include familiarity with tools like LangChain, Hugging Face, or OpenAI APIs, exposure to RLHF (Reinforcement Learning from Human Feedback) or prompt-tuning, and experience with deploying ML models in production environments using technologies such as Docker and MLflow.

By joining Objectways, you will have the opportunity to work on high-impact, next-generation AI challenges, collaborate with top talent from various domains, and enjoy a competitive salary with benefits and a learning budget.
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist/Senior Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

Department: Data & Analytics
Location: Cyber Hub, Gurugram, Haryana (5 days in office)
Job Type: Permanent, Full-Time (40 Hours)
Reports To: Senior Manager Data Science & Analytics

About The Role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business requirements, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Roles & Responsibilities
Analytics & Strategy
Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
Apply multiple algorithms or architectures and recommend the best model with in-depth description to evangelize data-driven business decisions
Utilize cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data

Operational Excellence
Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems
Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses

Stakeholder Engagement
Work collaboratively across multiple sets of stakeholders – Business functions, Data Engineers, Data Visualization experts – to deliver on project deliverables
Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats

Job Requirements
Education
Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)

Relevant Experience
3 - 4 years for Data Scientist
Relevant working experience in a data science/advanced analytics role

Behavioural Skills
Delivery Excellence
Business disposition
Social intelligence
Innovation and agility

Knowledge
Functional Analytics (Supply chain analytics, Marketing Analytics, Customer Analytics, etc.)
Statistical modelling using Analytical tools (R, Python, KNIME, etc.)
Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and Data Engineering tools
Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
Microsoft Office applications (MS Excel, etc.)
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - (AI App Ops Lead)!

We are looking for a seasoned and hands-on Application Operations Lead to drive the operational excellence of Computer Vision (CV) applications. This individual will lead and mentor a team of ML CV Ops Engineers, ensuring the resilience, scalability, and reliability of AI-powered visual systems deployed in production environments. The ideal candidate will bring a blend of leadership, system operations, ML infrastructure knowledge, and cross-functional collaboration. You will be responsible for orchestrating the delivery, monitoring, and optimization of critical CV models and applications deployed across cloud and edge environments.

Key Responsibilities:
Lead and mentor a team of ML CV Ops Engineers to manage the lifecycle of computer vision applications in production.
Define and implement operational strategies, workflows, and performance goals for the App Ops team.
Foster a DevOps/MLOps culture of automation, ownership, and continuous improvement.
Ensure production CV applications meet high availability, performance, and security standards.
Establish SLAs, monitoring policies, and governance frameworks for mission-critical AI systems.
Own the incident response process, drive root cause analysis, and coordinate remediation with CV engineering and data science teams.
Maintain high observability through monitoring tools (e.g., Prometheus, Grafana, Datadog, AppDynamics).
Partner with Data Scientists, MLOps, Cloud Engineers, and Software Developers to ensure smooth model deployments and robust CI/CD pipelines.
Act as the technical operations bridge between AI model development and enterprise IT.
Champion automation of repetitive support and operational tasks, including model validation, performance regression testing, and retraining triggers.
Drive cost and resource optimization for cloud/GPU infrastructure used in CV workloads.
Ensure operational practices adhere to audit, security, and regulatory compliance requirements.
Maintain operational runbooks, escalation paths, and support documentation for CV systems.
Define, implement, and execute AI App Ops standard work.
Define KPIs for Support Ops performance; monitor and report on them.
Assist in issue analysis and remediation by developing standard work; investigate, troubleshoot, manage, and resolve technical issues.
Develop code and implement proactive alerting mechanisms.
Establish, monitor, and act on observability metrics and thresholds.
Design and develop proactive alerting mechanisms.
Create and implement a feedback loop for model observability data.
Configure and develop scalable pipelines for model integrations.
Implement observability metrics and thresholds.
Govern and support change management processes.
Assist in knowledge transition and developing training materials.
Oversee and assist in investigation and resolution of vulnerabilities.
Transition knowledge from the incumbent partner.

Qualifications we seek in you!
Minimum Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience in IT Operations, Site Reliability Engineering, or DevOps, with 2+ years managing ML/AI systems in production.
Proven experience in leading technical teams, preferably in MLOps or platform engineering contexts.
Strong understanding of cloud infrastructure (AWS/GCP/Azure), containers (Docker), orchestration (Kubernetes), and CI/CD practices.
Experience supporting and optimizing Computer Vision workloads in real-time or batch systems.
Familiarity with MLOps platforms and tools (e.g., MLflow, DVC, TensorFlow Serving, TorchServe, Airflow).

Preferred Qualifications:
Prior experience with model monitoring, drift detection, and retraining automation.
Experience working in industries like energy, industrial equipment, or manufacturing is a strong plus.
Exposure to edge deployment strategies (e.g., NVIDIA Jetson, TensorRT, ONNX optimization).
ITIL or SRE certifications are a bonus.

Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
Make an impact - Drive change for global enterprises and solve business challenges that matter
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
12.0 - 15.0 years
0 Lacs
Greater Chennai Area
On-site
Overview
We are looking for a seasoned Project Manager with 12-15 years of experience in leading end-to-end software development projects. The candidate must have technical exposure to full-stack technologies (React.js, Java, MySQL), deep knowledge of SDLC and Agile methodologies, and experience in delivering enterprise-grade applications. This role requires strong communication with stakeholders, progress tracking, risk mitigation, and leadership reporting.

Responsibilities
Project Planning & Execution
Define project scope, schedule, milestones, and deliverables.
Prepare project charters, plans, and WBS (Work Breakdown Structure).
Create and manage Agile sprint plans and ensure iteration goals are met.

Stakeholder & Team Management
Act as a bridge between business, development, QA, and infrastructure teams.
Manage internal and external stakeholder expectations.
Coordinate with cross-functional teams for on-time and on-budget delivery.

Technical Oversight & Risk Management
Provide technical input and oversight on architecture and build activities.
Track and mitigate technical, resource, and delivery risks proactively.
Drive resolution of blockers, dependencies, and escalations.

Progress Tracking & Communication
Use tools like JIRA or Azure DevOps for project tracking and burndown charts.
Generate daily/weekly status reports, dashboards, and executive summaries.
Present status updates and delivery health reports to senior management.

Quality, Compliance & Governance
Ensure QA, UAT, and release processes are followed.
Drive process improvement initiatives across the team.
Maintain audit trails, change logs, and sign-off documentation.

Requirements
Primary Skills:
Project Management (Agile/Scrum/Waterfall/Hybrid Models)
Software Delivery Lifecycle (SDLC) Ownership
Working Knowledge of Full Stack Development: React.js (Frontend), Java (Backend APIs), MySQL (Database Queries, Data Models)
Agile Planning Tools (JIRA, Azure DevOps, Trello, ClickUp)
CI/CD Implementation Understanding (Jenkins, GitHub Actions, Azure Pipelines)
Resource Planning, Sprint Management, and Backlog Grooming
Risk & Issue Management, Change Requests, RCA Documentation
Project Tracking, Budgeting & Estimation
Stakeholder Communication & Cross-Functional Team Coordination
Status Reporting to Senior Leadership and C-level Executives

Secondary Skills:
Exposure to AI/ML Project Lifecycle & Tools (MLflow, Vertex AI, Azure ML - conceptual level)
Cloud Platform Understanding (Azure, AWS, or GCP)
DevOps Awareness (Version Control, Pipelines, Release Cycles)
Quality Assurance Coordination & Release Sign-off Processes
Team Coaching, Conflict Management & People Leadership
Documentation & Process Improvement
Posted 2 weeks ago
10.0 years
0 Lacs
India
On-site
Role Summary We’re looking for a hands-on AI & Data Platform Architect to design and build a scalable, secure, and modular platform that powers AI research in healthcare. You’ll lead the technical architecture for training and deploying models on complex biomedical data, helping shape the future of AI in life sciences. Key Responsibilities Design and build a scalable AI/ML platform to support data pipelines, model training, and deployment. Work with large, diverse biomedical datasets (clinical, genomic, proteomic, chemical). Build secure, cloud-native infrastructure using containers and APIs. Implement and scale foundation models, knowledge graphs, and embeddings. Ensure compliance with security and privacy standards (e.g., HIPAA, SOC2). Collaborate with cross-functional teams (data scientists, clinicians, engineers). Mentor engineers and set best practices for ML platforms and MLOps. Required Skills & Experience Master’s or PhD in Computer Science, AI/ML, or related field. 10+ years in AI platform or infrastructure roles. Strong experience with Python, ML frameworks (PyTorch, TensorFlow), and cloud platforms (AWS, GCP, Azure). Experience with distributed systems, MLOps, and tools like MLFlow, Kubeflow, Databricks. Familiar with GPUs, performance optimization, and data security practices. Nice to Have Background in life sciences or biomedical data (genomics, proteomics, EHRs). Familiarity with drug discovery workflows and generative AI tools like LangChain or Hugging Face. Knowledge of bioinformatics databases and ontologies. What We Offer Chance to shape a cutting-edge AI platform in a mission-driven startup. Equity and growth opportunities in a fast-moving team. Budget for learning and experimentation with AI/cloud tools.
Posted 2 weeks ago
0 years
1 - 6 Lacs
Noida
On-site
Summary: We are seeking a talented and motivated AI Engineer to join our team and focus on building cutting-edge Generative AI applications. The ideal candidate will possess a strong background in data science, machine learning, and deep learning, with specific experience in developing and fine-tuning Large Language Models (LLMs) and Small Language Models (SLMs). You should be comfortable managing the full lifecycle of AI projects, from initial design and data handling to deployment and production monitoring. A foundational understanding of software engineering principles is also required to collaborate effectively with engineering teams and ensure robust deployments.

Responsibilities:
Design, develop, and implement Generative AI solutions, including applications leveraging Retrieval-Augmented Generation (RAG) techniques.
Fine-tune existing Large Language Models (LLMs) and potentially develop smaller, specialized language models (SLMs) for specific tasks.
Manage the end-to-end lifecycle of AI model development, including data curation, feature extraction, model training, validation, deployment, and monitoring.
Research and experiment with state-of-the-art AI/ML/DL techniques to enhance model performance and capabilities.
Build and maintain scalable production pipelines for AI models.
Collaborate with data engineering and IT teams to define deployment roadmaps and integrate AI solutions into existing systems.
Develop AI-powered tools to solve business problems, such as summarization, chatbots, recommendation systems, or code assistance.
Stay updated with the latest advancements in Generative AI, machine learning, and deep learning.

Qualifications:
Proven experience as a Data Scientist, Machine Learning Engineer, or AI Engineer with a focus on LLMs and Generative AI.
Strong experience with Generative AI techniques and frameworks (e.g., RAG, fine-tuning, LangChain, LlamaIndex, PEFT, LoRA).
Solid foundation in machine learning (e.g., regression, classification, clustering, XGBoost, SVM) and deep learning (e.g., ANN, LSTM, RNN, CNN) concepts and applications.
Proficiency in Python and relevant libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch).
Experience with data science principles, including statistics, hypothesis testing, and A/B testing.
Experience deploying and managing models in production environments (e.g., using platforms like AWS, Databricks, MLflow).
Familiarity with data handling and processing tools (e.g., SQL, Spark/PySpark).
Basic understanding of software engineering practices, including version control (Git) and containerization (Docker).
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field.

Preferred Skills:
Experience building RAG-based chatbots or similar applications.
Experience developing custom SLMs.
Experience with MLOps principles and tools (e.g., MLflow, Airflow).
Experience migrating ML workflows between cloud platforms.
Familiarity with vector databases and indexing techniques.
Experience with Python web frameworks (e.g., Django, Flask).
Experience building and integrating APIs (e.g., RESTful APIs).
Basic experience with front-end development or UI building for showcasing AI applications.

Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
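As a loose illustration of the RAG pattern referenced in the responsibilities above, the sketch below embeds a handful of documents, retrieves the best match for a question by cosine similarity, and passes it as context to a chat model. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model names and documents are examples, not requirements from the posting.

```python
# Minimal retrieval-augmented generation sketch (assumes openai>=1.0 and
# OPENAI_API_KEY set in the environment; model names are illustrative).
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise customers.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
question = "How long do refunds take?"
q_vec = embed([question])[0]

# Cosine similarity picks the most relevant document as context.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```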
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a skilled and motivated Senior Systems Engineer with expertise in Data DevOps/MLOps to join our team. The ideal candidate must possess a strong understanding of data engineering, automation for data pipelines, and operationalizing machine learning models. This role requires a collaborative professional capable of building, deploying, and managing scalable data and ML pipelines that meet business objectives.

Responsibilities
Design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment
Build and maintain infrastructure for data processing and model training using cloud-native tools and services
Automate processes for data validation, transformation, and workflow orchestration
Coordinate with data scientists, software engineers, and product teams to enable seamless integration of ML models into production
Optimize performance and reliability of model serving and monitoring solutions
Manage data versioning, lineage tracking, and reproducibility for ML experiments
Identify opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience
Implement security measures to safeguard data integrity and ensure regulatory compliance
Diagnose and resolve issues throughout the data and ML pipeline lifecycle

Requirements
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
4+ years of experience in Data DevOps, MLOps, or similar roles
Proficiency in cloud platforms like Azure, AWS, or GCP
Competency in using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
Expertise in containerization and orchestration technologies like Docker and Kubernetes
Background in data processing frameworks such as Apache Spark or Databricks
Skills in Python programming, with proficiency in data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch
Familiarity with CI/CD tools, including Jenkins, GitLab CI/CD, or GitHub Actions
Understanding of version control tools like Git and MLOps platforms such as MLflow or Kubeflow
Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana)
Strong problem-solving skills and ability to contribute both independently and within a team
Excellent communication skills and attention to documentation

Nice to have
Knowledge of DataOps practices and tools like Airflow or dbt
Understanding of data governance concepts and platforms such as Collibra
Background in Big Data technologies like Hadoop or Hive
Qualifications in cloud platforms or data engineering
Posted 2 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Change the world. Love your job. Texas Instruments is seeking an experienced Data Scientist to join our team. As the Data Scientist on TI's Demand Analytics team, you will play a pivotal role in shaping and executing our demand planning and inventory buffer strategies for the company. You will work side by side with a team of highly technical professionals consisting of application developers, system architects, data scientists, and data engineers. This role will be responsible for solving complex business problems through innovative solutions that deliver tangible business value. This position requires a technical leader with a strong background in AI/ML and simulation solutions, strategic thinking, and a passion for innovation through data.

This team is responsible for: portfolio management for demand forecasting algorithms, generation of inventory buffer targets, segmentation of TI's products and simulation/validation frameworks, defining specs/reference architectures to best achieve business outcomes, and ensuring security and interoperability between capabilities.

Roles and Duties
Stakeholder engagement:
Work collaboratively and strategically with stakeholder groups to achieve TI business strategy and goals
Communicate complex technical concepts and influence final business outcomes with stakeholders effectively
Partner with cross-functional teams to identify and prioritize actionable, high-impact insights across a variety of core business areas

Technology and platforms:
Build simple, scalable and modular technology stacks using modern technologies and software engineering principles
Simulate real-world scenarios with various models and approaches to determine the best fit of algorithms by varying the inputs across hundreds to thousands of variables
Research, experiment with, and implement new approaches and models that flex with business strategy transformations
Lead data acquisition and engineering efforts
Develop and apply machine learning, AI, and data engineering frameworks
Solution, write, and debug code for complex development projects
Oversee, evaluate, and determine the best modeling techniques for various scenarios, letting the data drive the conversation

Qualifications
Minimum requirements:
MS or PhD in a quantitative field (e.g., Computer Science, Statistics, Engineering, Mathematics) or equivalent practical experience.
8+ years of professional experience in data science or a related role.
5+ years of hands-on experience developing and deploying time series forecasting models in a professional setting.
Demonstrated experience in the supply chain domain (e.g., Semiconductor, Retail, CPG, Pharmaceutical), with a deep understanding of concepts like demand forecasting, S&OP, or inventory management.
Expert-level proficiency in Python and its core data science libraries (e.g., Pandas, NumPy, Scikit-learn, Statsmodels) and forecasting packages (e.g., Prophet, PyTorch Forecasting).
Proven experience taking machine learning models from prototype to production, including knowledge of CI/CD and model monitoring.

Preferred Qualifications
Experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, Airflow, Docker, Kubernetes).
Practical experience with cloud data science platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform).
Familiarity with advanced forecasting techniques such as probabilistic forecasting, causal inference, or using Transformers for time series.
Experience applying NLP to extract features from unstructured text to enhance forecasting models.
Strong SQL skills and experience working with large-scale data warehousing solutions (e.g., Snowflake, BigQuery, Redshift).

About Us
Why TI? Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics. We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI.

Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us.

About Texas Instruments
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.

Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment. If you are interested in this position, please apply to this requisition.

About The Team
TI does not make recruiting or hiring decisions based on citizenship, immigration status or national origin. However, if TI determines that information access or export control restrictions based upon applicable laws and regulations would prohibit you from working in this position without first obtaining an export license, TI expressly reserves the right not to seek such a license for you and either offer you a different position that does not require an export license or decline to move forward with your employment.
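Relating back to the forecasting stack named in the qualifications above (Statsmodels, Prophet, and similar), here is a small, self-contained sketch of fitting a classical seasonal model with statsmodels; the monthly demand series is synthetic and purely illustrative, not data from the posting.

```python
# Illustrative only: fit a Holt-Winters seasonal model on a synthetic
# monthly demand series and forecast the next six months.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

months = pd.date_range("2022-01-01", periods=36, freq="MS")
demand = pd.Series(
    100 + 2.0 * np.arange(36) + 10 * np.sin(np.arange(36) * np.pi / 6),
    index=months,
)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(6).round(1))  # point forecasts for the next six months
```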
Posted 2 weeks ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Role Overview
Intertec Systems is seeking a visionary and execution-focused AI Practice Lead to head our AI initiatives across all lines of business: Digital, Managed Services, Cloud, EAM, Cybersecurity, and Microsoft solutions. This role will own the charter to define, build, and scale innovative AI use cases leveraging Machine Learning (ML), Computer Vision, Agentic AI, and Gen AI applications for:
Client solutions (presales, PoCs, proposals)
Internal business transformation
Product roadmap acceleration

The ideal candidate will be an expert hands-on AI engineer who can lead with both strategy and execution. This individual will lead the AI Center of Excellence (CoE) and work closely with Business Units, Technology Teams, and Clients.

Key Responsibilities
Strategy & Leadership
Define and execute the vision and roadmap for AI adoption at Intertec across functions
Lead the AI CoE and drive cross-functional collaboration with presales, delivery, product, and marketing teams
Evaluate and prioritize AI use cases based on business impact and feasibility

Solution Development & Presales
Partner with sales and account teams to ideate and propose AI solutions in RFPs and client engagements
Build PoCs, reference architectures, and solution accelerators using ML, Computer Vision, NLP, and Generative AI
Identify AI opportunities with business outcomes across verticals like Government, Healthcare, Utilities, BFSI, and Real Estate

Use Case Design & Development
Own delivery of all active and upcoming AI use cases across Intertec's client and internal platforms
Drive implementation of Agentic AI (LLM-based task automation), multi-agent frameworks (e.g., CrewAI, LangGraph), and intelligent workflows

Innovation & R&D
Evaluate emerging AI technologies, LLMs, vector databases, and open-source models for product or customer requirements
Pilot AI use cases internally (finance automation, talent intelligence, project risk prediction, etc.)
Scale successfully piloted AI solutions within the organization
Keep the organization at the forefront of GenAI, multi-modal AI, and enterprise AI trends
Create a catalogue of reusable AI assets to be monetized by the company

Governance & Enablement
Define best practices, MLOps standards, and data governance frameworks
Train and upskill teams in AI tooling and architecture
Ensure responsible AI practices, ethical design, and regulatory compliance

Required Skills & Experience
Must-Have Skills
7+ years in technology or consulting, with 4+ years in AI/ML
Strong experience in AI presales and solution architecture (client-facing)
Expert-level hands-on coding and prototyping experience
Hands-on proficiency in: ML frameworks (TensorFlow, PyTorch, scikit-learn); Computer Vision (YOLO, OpenCV, segmentation models); Agentic AI tools (LangChain, CrewAI, AutoGen, RAG, LlamaIndex)
Proven success building AI/ML PoCs, solutions, or products from scratch
Familiar with vector databases, RAG architecture, prompt engineering, and LLM orchestration
Knowledge of MLOps and deployment pipelines (MLflow, Kubeflow, Azure ML, Sagemaker)
Excellent communication skills with the ability to influence both technical and non-technical stakeholders
Strong understanding of cloud and infrastructure requirements for AI workloads, including: GPU types and capacity planning (e.g., A100, H100, T4); AI-optimized storage and high-throughput systems; deployment environments (Docker, Kubernetes, VM instances); cost/performance analysis across Azure, AWS, GCP; MLOps tools and scalable model deployment architectures

Preferred/Bonus
Experience leading AI practices in mid-sized or regional IT services companies
Exposure to AI compliance and Responsible AI frameworks

Tools & Technologies
ML & CV: PyTorch, TensorFlow, scikit-learn, OpenCV, YOLO, Detectron2
LLMs & Agents: OpenAI, Anthropic, LangChain, CrewAI, LlamaIndex, Haystack
Deployment: Azure AI, AWS Sagemaker, Vertex AI, Docker, Kubernetes
Data & Ops: MLflow, Airflow, Hugging Face, Weights & Biases
Vector DBs: FAISS, Pinecone, Chroma, Qdrant

What You'll Influence
Shape the AI GTM strategy for Intertec
Elevate client conversations from solutioning to innovation
Help Intertec become an AI-first systems integrator
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Key skills: Python, LLM, and RAG (prior work with voice models is a plus)

This is an early-stage startup, so expect a 6-day work week.

Role Overview: We are seeking a highly skilled AI R&D Engineer with expertise in large language models, startup dynamics, and building production-grade AI products. You will play a pivotal role in designing, implementing, and optimizing LLM-driven systems that power our next-generation conversational AI solutions.

Key Responsibilities:
● Design, train, fine-tune, and evaluate large language models for diverse conversational AI use cases.
● Stay updated on the latest advancements in LLMs, NLP, and generative AI.
● Propose and implement innovative methods, tools, and frameworks to enhance model performance, efficiency, and reliability.
● Collaborate with product managers, software engineers, and UX designers to seamlessly integrate AI models into user-facing applications.
● Develop APIs, tools, and infrastructure for deploying models in production environments.
● Establish robust evaluation metrics, benchmarks, and automated testing frameworks for conversational systems.
● Continuously monitor model performance and iterate based on user feedback and data-driven insights.
● Guide and mentor junior engineers in applied AI research, engineering best practices, and MLOps.
● Foster a culture of experimentation, excellence, and continuous learning.

Qualifications & Experience:
Core Requirements:
● At least 2 years of experience in AI/ML R&D, with a proven track record of working with state-of-the-art NLP and LLM technologies (e.g., GPT, LLaMA, Falcon, PaLM).
● Prior experience in fast-paced, high-growth environments with iterative development and evolving priorities.
● Strong programming skills in Python.
● Proficiency with ML frameworks like PyTorch or TensorFlow.
● Experience with cloud platforms (AWS, GCP, Azure), containerization (Docker), and CI/CD pipelines.
● Expertise in fine-tuning LLMs, prompt engineering, and building scalable data pipelines.
● Ability to effectively communicate complex ideas to technical and non-technical stakeholders while thriving in cross-functional teams.

Bonus Points:
● Contributions to open-source AI/ML/NLP projects.
● Experience with vector databases, retrieval-augmented generation (RAG), or multimodal models.
● Familiarity with MLOps tools (e.g., MLflow, Kubeflow) and distributed training techniques
Posted 2 weeks ago
2.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At Trackier, we're revolutionizing the way businesses measure and optimize their marketing performance. As a leading Marketing Analytics & Attribution platform, we empower advertisers, agencies, and ad networks with powerful, real-time insights to drive growth and maximize ROI. In today's complex digital landscape, understanding every touchpoint of the customer journey is paramount. That's where Trackier comes in. Our robust platform provides comprehensive tracking, detailed analytics, and precise attribution models, ensuring you have a clear picture of what's working and why. From performance marketing campaigns to influencer collaborations and beyond, we give you the tools to make data-driven decisions that propel your business forward. We're passionate about transparency, efficiency, and delivering measurable results. Our commitment to innovation means we're constantly evolving our platform to meet the dynamic needs of the industry, helping our clients achieve their growth ambitions with confidence. Position Summary: We are seeking a driven and analytically-minded AI/ML Engineer with 2-3 years of experience to join our growing team. In this role, you will play a crucial part in the end-to-end lifecycle of our AI solutions, from understanding and preparing complex datasets to developing, deploying, and optimizing robust machine learning models. You will leverage your strong data analysis skills to identify patterns, generate insights, and translate them into effective AI strategies that drive business value. You will be working with LLM integrations where necessary to generate insights from data and enhancing chatbots using RAG and similar tech. Key Responsibilities: Data Understanding & Preparation: Collaborate with data stakeholders to understand business problems and data sources. 
Perform data loading, cleaning, and preparation, including handling missing values, data type conversions, and ensuring data integrity for large datasets.
Feature Engineering: Identify, extract, and transform relevant features from raw data to optimize model performance.
Model Development: Design, develop, train, and evaluate machine learning models (including deep learning, natural language processing, etc., as relevant to our domain) for various applications.
System Integration: Integrate AI models into existing production systems and applications, ensuring scalability and reliability.
Performance Optimization: Continuously monitor, analyze, and improve the performance, accuracy, and efficiency of AI models in production.
Insight Generation & Communication: Translate complex analytical findings and model outputs into clear, concise, and actionable business insights and recommendations for end users.
Research & Innovation: Stay abreast of the latest advancements in AI/ML research and actively explore new technologies and methodologies to enhance our capabilities.
Deployment & MLOps: Contribute to the development and implementation of MLOps practices, including model versioning, CI/CD for ML, and model monitoring.
Collaboration: Work closely with cross-functional teams, including product managers and software engineers, to define requirements and deliver high-quality AI solutions.
Documentation: Create clear and comprehensive documentation for models, code, and processes.

Requirements
Experience: 2-3 years of professional experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying ML solutions.
Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, Electrical Engineering, or a related quantitative field.
Programming: Strong proficiency in Python and experience with relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Keras).
Data Manipulation & Analysis: Demonstrated strong skills in data loading, cleaning, manipulation, and preparation using Pandas and NumPy.
EDA & Visualization: Proven ability to conduct exploratory data analysis and create effective visualizations using libraries to communicate insights.
ML Fundamentals: Solid understanding of machine learning principles, algorithms (e.g., supervised, unsupervised, reinforcement learning), and statistical modeling.
Software Engineering: Strong software engineering fundamentals, including experience with version control (Git), testing, and code review practices.
Problem Solving: Excellent analytical and problem-solving skills with a keen attention to detail and the ability to derive actionable insights from data.
Communication: Strong written and verbal communication skills, with the ability to explain complex technical concepts and present data-driven recommendations to both technical and non-technical stakeholders.

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure, GCP) and their AI/ML services.
Familiarity with containerization technologies (Docker, Kubernetes).
Experience with MLOps tools and frameworks (e.g., MLflow, Kubeflow, Sagemaker).
Knowledge of distributed computing frameworks (e.g., Spark).
Contribution to open-source projects or relevant publications.
Experience with agile development methodologies.

Benefits
Medical Insurance
5 days working culture
Best in industry salary structure
Sponsored trips
Posted 2 weeks ago
4.0 years
0 Lacs
Jodhpur, Rajasthan, India
On-site
Job Title: Python Developer
Experience: 2–4 Years
Location: Onsite, Jodhpur, Rajasthan
Type: Full-time
No. of Openings: 4

Role Overview: We're looking for a Python Developer with hands-on experience in ML model development, deployment, and integrating OpenAI APIs. You'll work with cross-functional teams to build and scale intelligent applications.

Key Responsibilities:
Build and deploy ML models using Python.
Integrate OpenAI APIs (ChatGPT, Whisper, DALL·E, etc.) into applications.
Deploy models via Docker, FastAPI, and cloud platforms (AWS/GCP/Azure).
Set up and maintain CI/CD pipelines.
Monitor and troubleshoot deployment and performance issues.

Must-Have Skills:
Strong in Python, Scikit-learn, Pandas, etc.
Experience with OpenAI API integration.
Familiarity with model deployment tools and workflows.
Knowledge of CI/CD, Git, and RESTful APIs.
Solid problem-solving and debugging skills.

Good to Have:
MLOps tools (MLflow, Kubeflow)
Cloud ML services (SageMaker, Vertex AI)
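To illustrate the kind of integration work listed above, here is a minimal sketch of exposing an OpenAI chat call behind a FastAPI endpoint; it assumes the openai (v1+) and fastapi packages, an OPENAI_API_KEY in the environment, and an illustrative model name, none of which are prescribed by the posting.

```python
# Minimal sketch: wrap an OpenAI chat completion in a FastAPI endpoint.
# Assumes `pip install fastapi uvicorn openai` and OPENAI_API_KEY set;
# the model name is an example, not a requirement from the posting.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": req.message}],
    )
    return {"reply": completion.choices[0].message.content}

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```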
Posted 2 weeks ago
5.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients’ toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital Key Responsibilities: Design, develop, and maintain CI/CD pipelines for ML models and data workflows. Collaborate with data science teams to productionize models using tools like MLflow, Kubeflow, or SageMaker. Automate training, validation, testing, and deployment of machine learning models. Monitor model performance, drift, and retraining needs. Ensure version control of datasets, code, and model artifacts. Implement model governance, audit trails, and reproducibility. Optimize model serving infrastructure (REST APIs, batch/streaming inference). Integrate ML solutions with cloud services (AWS, Azure, GCP). Ensure security, compliance, and reliability of ML systems. Required Skills and Qualifications: Bachelor’s or master’s degree in computer science, Engineering, Data Science, or related field. 5+ years of experience in MLOps, DevOps, or ML engineering roles. Strong experience with ML pipeline tools (MLflow, Kubeflow, TFX, SageMaker Pipelines). Proficiency in containerization and orchestration tools (Docker, Kubernetes, Airflow). Strong Python coding skills and familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch). Experience with cloud platforms (AWS, Azure, GCP) and their ML services. Knowledge of CI/CD tools (GitLab CI/CD, Jenkins, GitHub Actions). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Sentry). Understanding of data versioning (DVC, LakeFS) and feature stores (Feast, Tecton). Strong grasp of model testing, validation, and monitoring in production environments. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK.Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. 
You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance.

We also offer:
Group Health Insurance covering family of 4
Term Insurance and Accident Insurance
Paid Holidays & Earned Leaves
Paid Parental Leave
Learning & Career Development
Employee Wellness
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Motadata Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data. Position Overview: We are seeking a Senior Machine Learning Engineer to join our team, focused on enhancing our AIOps and IT Service Management (ITSM) product through the integration of cutting-edge AI/ML features and functionality. As part of our innovative approach to revolutionizing the IT industry, you will play a pivotal role in leveraging data analysis techniques and advanced machine learning algorithms to drive meaningful insights and optimize our product's performance. With a particular emphasis on end-to-end machine learning lifecycle management and MLOps, you will collaborate with cross-functional teams to develop, deploy, and continuously improve AI-driven solutions tailored to our customers' needs. From semantic search and AI chatbots to root cause analysis based on metrics, logs, and traces, you will have the opportunity to tackle diverse challenges and shape the future of intelligent IT operations. Role & Responsibility: • Lead the end-to-end machine learning lifecycle, understand the business problem statement, convert into ML problem statement, data acquisition, exploration, feature engineering, model selection, training, evaluation, deployment, and monitoring (MLOps). • Should be able to lead the team of ML Engineers to solve the business problem and get it implemented in the product, QA validated and improvise based on the feedback from the customer. • Collaborate with product managers to understand business needs and translate them into technical requirements for AI/ML solutions. • Design, develop, and implement machine learning algorithms and models, including but not limited to statistics, regression, classification, clustering, and transformer-based architectures. • Preprocess and analyze large datasets to extract meaningful insights and prepare data for model training. • Build and optimize machine learning pipelines for model training and inference using relevant frameworks. • Fine-tune existing models and/or train custom models to address specific use cases. • Enhance the accuracy and performance of existing AI/ML models through monitoring, iterative refinement and optimization techniques. Collaborate closely with cross-functional teams to integrate AI/ML features seamlessly into our product, ensuring scalability, reliability, and maintainability. • Document your work clearly and concisely for future reference and knowledge sharing within the team. • Stay ahead of latest developments in machine learning research and technology and evaluate their potential applicability to our product roadmap. Skills and Qualifications: • Bachelor's or higher degree in Computer Science, Engineering, Mathematics, or related field. • Minimum 5+ years of experience as a Machine Learning Engineer or similar role. • Proficiency in data analysis techniques and tools to derive actionable insights from complex datasets. 
• Solid understanding and practical experience with machine learning algorithms and techniques, including statistics, regression, classification, clustering, and transformer-based models.
• Hands-on experience with end-to-end machine learning lifecycle management and MLOps practices.
• Proficiency in programming languages such as Python and familiarity with at least one of the following: Java, Golang, .NET, Rust.
• Experience with machine learning frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn) and MLOps tools (e.g., MLflow, Kubeflow).
• Experience with ML.NET and other machine learning frameworks.
• Familiarity with natural language processing (NLP) techniques and tools.
• Excellent communication and teamwork skills, with the ability to effectively convey complex technical concepts to diverse audiences.
• Proven track record of delivering high-quality, scalable machine learning solutions in a production environment.
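For candidates gauging the MLflow-centred lifecycle work this posting describes, the following is a minimal, hedged sketch of logging a training run; the experiment name, synthetic data, and metric are illustrative placeholders rather than Motadata's actual pipeline.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; a real setup would point MLFLOW_TRACKING_URI at a
# shared tracking server instead of the local ./mlruns directory.
mlflow.set_experiment("aiops-anomaly-baseline")

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log hyperparameters, an evaluation metric, and the model artifact so the run
    # can be compared, reproduced, and promoted later.
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```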
Posted 2 weeks ago
3.0 years
20 - 25 Lacs
Bengaluru, Karnataka, India
On-site
What You Need To Succeed
Master’s degree or equivalent experience in Machine Learning. 3+ years of industry experience in ML, software engineering, and data engineering. Proficiency in Python, PyTorch, TensorFlow, and Scikit-learn. Strong programming skills in Python and JavaScript. Hands-on experience with MLOps practices. Ability to work with research and product teams. Excellent problem-solving skills and a track record of innovation. Passion for learning and applying the latest technological advancements.
Position Overview
As an MLE-2, you will design, implement, and optimize AI solutions while ensuring model success. You will lead the ML lifecycle from development to deployment, collaborate with cross-functional teams, and enhance AI capabilities to drive innovation and impact.
Key Responsibilities
Design and implement AI product features. Maintain and optimize existing AI systems. Train, evaluate, deploy, and monitor ML models. Design ML pipelines for experiment, model, and feature management. Implement A/B testing and scalable model inference APIs. Optimize GPU architectures, parallel training, and fine-tune models for improved performance. Deploy LLM solutions tailored to specific use cases. Ensure DevOps and LLMOps best practices using Kubernetes, Docker, and orchestration frameworks.
Technical Requirements
LLM & ML: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLMOps: MLflow, LangChain, LangGraph, LangFlow, Langfuse, LlamaIndex, SageMaker, AWS Bedrock, Azure AI
Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
Cloud: AWS, Azure
DevOps: Kubernetes, Docker
Languages: Python, SQL, JavaScript
Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
What You'll Do
Collaborate with cross-functional teams to design and build scalable ML solutions. Implement state-of-the-art ML techniques, including NLP, Generative AI, RAG, and Transformer architectures. Deploy and monitor ML models for high performance and reliability. Innovate through research, staying ahead of industry trends. Build scalable data pipelines following best practices. Present key insights and drive decision-making.
Skills: scikit-learn,nlp,generative ai,kubernetes,python,ml ops,pytorch,data engineering,tensorflow,docker,aws,azure,machine learning,ml,javascript
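As a rough illustration of the "scalable model inference APIs" this role calls for, here is a minimal FastAPI sketch; the endpoint name, request schema, and model artifact path are assumptions made for the example, not the company's actual service.

```python
from contextlib import asynccontextmanager

import joblib
from fastapi import FastAPI
from pydantic import BaseModel


class PredictRequest(BaseModel):
    features: list[float]  # assumed flat numeric feature vector


class PredictResponse(BaseModel):
    score: float


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the serialized model once at startup rather than per request.
    app.state.model = joblib.load("model.joblib")  # hypothetical artifact path
    yield


app = FastAPI(lifespan=lifespan)


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    proba = app.state.model.predict_proba([req.features])[0][1]
    return PredictResponse(score=float(proba))

# Run locally with: uvicorn inference_api:app --workers 4
# (assumes this file is saved as inference_api.py)
```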
Posted 2 weeks ago
6.0 years
20 - 25 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)
About the Company: A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they’re seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.
Role Overview: As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.
Key Responsibilities:
Design and develop robust AI product features aligned with user and business needs
Maintain and enhance existing ML/AI systems
Build and manage ML pipelines for training, deployment, monitoring, and experimentation
Deploy scalable inference APIs and conduct A/B testing
Optimize GPU architectures and fine-tune transformer/LLM models
Build and deploy LLM applications tailored to real-world use cases
Implement DevOps/MLOps best practices with tools like Docker and Kubernetes
Tech Stack & Tools
Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers; PyTorch, TensorFlow, Scikit-learn
LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse; MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
Cloud & Infrastructure: AWS, Azure; Kubernetes, Docker
Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
Languages: Python, SQL, JavaScript
What You’ll Do
Collaborate with product, research, and engineering teams to build scalable AI solutions
Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
Monitor and optimize model performance and deployment pipelines
Build efficient, scalable data and feature pipelines
Stay updated on industry trends and contribute to internal innovation
Present key insights and ML solutions to technical and business stakeholders
Requirements
Must-Have:
3–6 years of experience in Machine Learning and software/data engineering
Master’s degree (or equivalent) in ML, AI, or related technical fields
Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
Familiarity with MLOps, model deployment, and production pipelines
Experience working with LLMs and modern NLP techniques
Ability to work collaboratively in a fast-paced, product-driven environment
Strong problem-solving and communication skills
Bonus Certifications such as:
AWS Machine Learning Specialty
AWS Solution Architect – Professional
Azure Solutions Architect Expert
Why Apply
Work directly with a high-caliber founding team
Help shape the future of AI in the insurance space
Gain ownership and visibility in a product-focused engineering role
Opportunity to innovate with state-of-the-art AI/LLM tech
Be part of a fast-moving team with real market traction
📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.
Skills: ml ops,software/data engineering,tensorflow,mongodb,llms,docker,machine learning,nlp,computer vision,python,azure,kubernetes,llms and modern nlp techniques,ml, ai,sql,llm technologies,python, pytorch/tensorflow, and scikit-learn,scikit-learn,postgresql,javascript,aws,pytorch
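Since the stack above centres on LLM applications with retrieval over a vector store (Pinecone/ChromaDB), here is a small, hedged sketch of the retrieval step using FAISS and sentence-transformers as stand-ins; the documents, query, and model name are invented for illustration.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy underwriting snippets standing in for a real document corpus.
documents = [
    "Policyholder has two prior auto claims in the last three years.",
    "Commercial property includes a sprinkler system installed in 2021.",
    "Applicant operates a seasonal food-truck business in Texas.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
embeddings = encoder.encode(documents, normalize_embeddings=True)

# Cosine similarity via inner product on normalized vectors.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = "Does the applicant have previous claims?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)

# The retrieved snippets would then be packed into an LLM prompt for grounded answers.
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```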
Posted 2 weeks ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Location: Preferred: Ahmedabad, Gandhinagar, Hybrid (remote can be considered on a case-by-case basis)
Department: COE
Experience: 8+ Years (with hands-on AI/ML architecture experience)
Education: Ph.D. or Master's in Computer Science, Data Science, Artificial Intelligence, or related fields
Job Summary: We are seeking an experienced AI/ML Architect with a strong academic background and industry experience to lead the design and implementation of AI/ML solutions across diverse industry domains. The ideal candidate will act as a trusted advisor to clients, understanding their business problems and crafting scalable AI/ML strategies and solutions aligned to their vision.
Key Responsibilities: Engage with enterprise customers and stakeholders to gather business requirements, problem statements, and aspirations. Translate business challenges into scalable and effective AI/ML-driven solutions and architectures. Develop AI/ML adoption strategies tailored to customer maturity, use cases, and ROI potential. Design end-to-end ML pipelines and architecture (data ingestion, processing, model training, deployment, and monitoring). Collaborate with data engineers, scientists, and business SMEs to build and operationalize AI/ML solutions. Present technical and strategic insights to both technical and non-technical audiences, including executives. Lead POCs, pilots, and full-scale implementations. Stay updated on the latest research, technologies, tools, and trends in AI/ML and integrate them into customer solutions. Contribute to proposal development, technical documentation, and pre-sales engagements.
Required Qualifications: Ph.D. or Master’s degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field. 8+ years of experience in the AI/ML field, with a strong background in solution architecture. Deep knowledge of machine learning algorithms, NLP, computer vision, and deep learning frameworks (TensorFlow, PyTorch, etc.). Experience with cloud AI/ML services (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Strong communication and stakeholder management skills. Proven track record of working directly with clients to understand business needs and deliver AI solutions. Familiarity with MLOps practices and tools (Kubeflow, MLflow, Airflow, etc.).
Preferred Skills: Experience in building GenAI or Agentic AI applications. Knowledge of data governance, ethics in AI, and explainable AI. Ability to lead cross-functional teams and mentor junior data scientists/engineers. Publications or contributions to AI/ML research communities (preferred but not mandatory).
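To make the "end-to-end ML pipelines (data ingestion, processing, model training, deployment, and monitoring)" concrete, the sketch below shows one hedged way such a pipeline might be orchestrated with Airflow, one of the tools the posting lists; the DAG id, schedule, and task bodies are illustrative stubs only (the `schedule` keyword assumes Airflow 2.4+).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():      # pull raw data into the lake / feature store
    ...

def train():       # fit the model on the latest features
    ...

def evaluate():    # compare against the current production model
    ...

def deploy():      # register/promote the model if evaluation passes
    ...


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_eval = PythonOperator(task_id="evaluate", python_callable=evaluate)
    t_deploy = PythonOperator(task_id="deploy", python_callable=deploy)

    # Linear dependency chain: ingestion feeds training, which gates deployment.
    t_ingest >> t_train >> t_eval >> t_deploy
```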
Posted 2 weeks ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems.
The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you’ll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You’ll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the Data Science team you will design and deliver scalable AI applications that drive business transformation. As a Senior Associate you will analyze complex problems, mentor junior team members, and build meaningful client connections while navigating the evolving landscape of AI and machine learning. This role offers the chance to work on innovative technologies, collaborate with cross-functional teams, and contribute to creative solutions that shape the future of the industry.
Responsibilities
Design and implement scalable AI applications to facilitate business transformation
Analyze intricate problems and propose practical solutions
Mentor junior team members to enhance their skills and knowledge
Establish and nurture meaningful relationships with clients
Navigate the dynamic landscape of AI and machine learning
Collaborate with cross-functional teams to drive innovative solutions
Utilize advanced technologies to improve project outcomes
Contribute to the overall strategy of the Data Science team
What You Must Have
Bachelor's Degree in Computer Science, Engineering, or equivalent technical discipline
4-9 years of experience in Data Science/ML/AI roles
Oral and written proficiency in English required
What Sets You Apart
Proficiency in Python and data science libraries
Hands-on experience with Generative AI and prompt engineering
Familiarity with cloud platforms like Azure, AWS, GCP
Understanding of production-level AI systems and CI/CD
Experience with Docker, Kubernetes for ML workloads
Knowledge of MLOps tooling and pipelines
Demonstrated track record of delivering AI-driven solutions
Preferred Knowledge/Skills
Please reference the skill categories below for job description details.
About PwC CTIO – AI Engineering
PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next—enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.
Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team’s innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.
Key Responsibilities
Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar, and develop demo applications in Streamlit/Chainlit.
Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
Fine-tune and pre-train LLMs (using Hugging Face and similar libraries) to align with business objectives.
Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team’s work.
Required Skills & Experience
4 to 9 years of experience in Data Science/ML/AI roles.
Bachelor’s degree in Computer Science, Engineering, or equivalent technical discipline (BE/BTech/MCA).
Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
Experience with Agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
Strong knowledge of NLP techniques, transformer architectures, and text analysis.
Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture (these need not be developed from scratch).
Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
Exposure to deploying AI models via APIs and integration into larger data ecosystems.
Strong understanding of model operationalization and lifecycle management.
Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.
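As a rough illustration of the "LLM fine-tuning" skill the posting lists, here is a compact, hedged sketch using the Hugging Face Trainer API on a small classification checkpoint; the model, dataset, and hyperparameters are placeholders chosen for brevity, not a prescribed setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"           # small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                   # stand-in for a business text corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./ft-out",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    logging_steps=50,
)

# Subsample so the example finishes quickly; a real run would use the full splits.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```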
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a skilled and innovative Machine Learning Engineer with over 5 years of experience to design, develop, and deploy scalable ML solutions. You will work closely with data scientists, software engineers, and product teams to solve real-world problems using state-of-the-art machine learning and deep learning techniques.
Key Responsibilities
Design, build, and optimize machine learning models and pipelines for classification, regression, clustering, recommendation, and forecasting
Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Keras
Collaborate with cross-functional teams to understand business problems and convert them into technical solutions
Develop data preprocessing, feature engineering, and model evaluation strategies
Build and deploy models into production using CI/CD practices and MLOps tools
Monitor model performance and retrain as necessary to ensure accuracy and reliability
Create and maintain technical documentation, and support knowledge sharing within the team
Stay updated on the latest research, tools, and techniques in machine learning and AI
Required Skills & Experience
5+ years of experience in Machine Learning Engineering or Applied Data Science
Proficiency in Python and ML libraries such as scikit-learn, pandas, NumPy, TensorFlow, or PyTorch
Solid understanding of mathematics, statistics, and ML/DL algorithms
Experience with the end-to-end ML lifecycle, from data collection and cleaning to model deployment and monitoring
Strong knowledge of SQL and working with large datasets
Experience deploying ML models on cloud platforms (e.g., AWS, Azure, GCP)
Familiarity with Docker, Kubernetes, MLflow, or other MLOps tools
Good understanding of REST APIs, microservices, and backend integration
Nice To Have
Exposure to NLP, Computer Vision, or Generative AI techniques
Experience with big data technologies like Spark, Hadoop, or Hive
Working knowledge of data labeling, AutoML, or active learning
Experience with feature stores, model registries, or streaming data (Kafka, Flink)
Educational Qualification
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field
Additional certifications in AI/ML are a plus
(ref:hirist.tech)
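To ground the "data preprocessing, feature engineering, and model evaluation strategies" item, below is a minimal, hedged scikit-learn sketch; the column names and toy churn-style data are invented for illustration.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny made-up dataset; a real project would pull this from SQL or a feature store.
df = pd.DataFrame({
    "tenure_months": [3, 24, 48, 7, 60, 12],
    "monthly_spend": [29.0, 79.5, 110.0, 35.0, 99.9, 45.0],
    "plan": ["basic", "pro", "pro", "basic", "enterprise", "basic"],
    "churned": [1, 0, 0, 1, 0, 1],
})

numeric = ["tenure_months", "monthly_spend"]
categorical = ["plan"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", GradientBoostingClassifier(random_state=42)),
])

# Keeping preprocessing inside the pipeline means each CV fold fits its own
# scaler/encoder, which avoids leakage from test folds.
scores = cross_val_score(pipeline, df[numeric + categorical], df["churned"], cv=3, scoring="roc_auc")
print(scores.mean())
```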
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Role: AI Engineer
Experience: 3 to 6 years
Work Mode: WFO / Hybrid / Remote if applicable
Immediate joiners preferred.
Job Description
The ideal candidate will have relevant experience, as we are building an AI-powered workforce intelligence platform that helps businesses optimize talent strategies, enhance decision making, and drive operational efficiency. Our software leverages cutting-edge AI, NLP, and data science to extract meaningful insights from vast amounts of structured and unstructured workforce data. As part of our new AI team, you will have the opportunity to work on real-world AI applications, contribute to innovative NLP solutions, and gain hands-on experience in building AI-driven products from the ground up.
Required Skills & Qualifications
Strong experience in Python programming.
3+ years of experience in Data Science/NLP (freshers with strong NLP projects are welcome).
Proficiency in Python, PyTorch, Scikit-learn, and NLP libraries (NLTK, Hugging Face).
Basic knowledge of cloud platforms (AWS, GCP, or Azure).
Familiarity with MLOps tools like Airflow, MLflow, or similar.
Experience with Big Data processing (Spark, Pandas, or Dask).
Experience with SQL for data manipulation and analysis.
Assist in designing, training, and optimizing ML/NLP models using PyTorch, NLTK, Scikit-learn, and Transformer models (BERT, GPT, etc.).
Experience with GenAI tech stacks, including foundational models (GPT-4, Claude, Gemini), frameworks (LangChain, LlamaIndex), and deployment tools (Hugging Face, AWS Bedrock, Vertex AI, vector DBs like FAISS/Pinecone).
Help deploy AI/ML solutions on AWS, GCP, or Azure.
Collaborate with engineers to integrate AI models into production systems.
Expertise in using SQL and Python to clean, preprocess, and analyze large datasets.
Learn & Innovate: Stay updated with the latest advancements in NLP, AI, and ML frameworks.
Strong analytical and problem-solving skills.
Willingness to learn, experiment, and take ownership in a fast-paced startup environment.
Nice To Have Requirements For The Candidate
Desire to grow within the company.
Team player and quick learner.
Performance-driven.
Strong networking and outreach skills.
Exploring aptitude & killer attitude.
Ability to communicate and collaborate with the team at ease.
Drive to get results and not let anything get in your way.
Critical and analytical thinking skills, with keen attention to detail.
Demonstrate ownership and strive for excellence in everything you do.
Demonstrate a high level of curiosity and keep abreast of the latest technologies and tools.
Ability to pick up new software easily, represent yourself well among peers, and coordinate during meetings with customers.
What We Offer
We offer a market-leading salary along with a comprehensive benefits package to support your well-being. Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal wellbeing. We invest in your career through continuous learning and internal growth opportunities. Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives.
(ref:hirist.tech)
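As a hedged illustration of the transformer-based NLP work described above (extracting insights from unstructured workforce text), here is a small zero-shot classification sketch with the Hugging Face pipeline API; the model choice, candidate labels, and example sentence are assumptions for the example only.

```python
from transformers import pipeline

# Zero-shot classification lets you tag free-text feedback without labeled training data.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "Employee reports feeling stretched across three concurrent client projects."
labels = ["workload", "compensation", "career growth", "team conflict"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:15s} {score:.2f}")
```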
Posted 2 weeks ago
50.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us
At Digilytics, we build and deliver easy-to-use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients' business. Founded by Arindom Basu, the leadership of Digilytics is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics leadership is focused on building a values-first firm that will stand the test of time. We are currently focussed on developing a product, Digilytics RevEL, to revolutionise loan origination for secured lending covering mortgages, motor and business lending. The product leverages the latest AI techniques to process loan applications and loan documents to deliver improved customer and colleague experience, while improving productivity and throughput and reducing processing costs.
About The Role
Digilytics is pioneering the development of intelligent mortgage solutions in international and Indian markets. We are looking for a Data Scientist with strong NLP and computer vision expertise. We are looking for experienced data scientists who have the aspiration and appetite for working in a start-up environment, and the relevant industry experience, to make a significant contribution to our Digilytics™ platform and solutions. The primary focus would be to apply machine learning techniques for data extraction from documents in a variety of formats, including scans and handwritten documents.
Responsibilities
Develop a learning model for high-accuracy extraction and validation of documents, e.g. in the mortgage industry
Work with state-of-the-art language modelling approaches such as transformer-based architectures, while integrating capabilities across NLP, computer vision, and machine learning to build robust multi-modal AI solutions
Understand the Digilytics™ vision and help in creating and maintaining a development roadmap
Interact with clients and other team members to understand client-specific requirements of the platform
Contribute to the platform development team and deliver platform releases in a timely manner
Liaise with multiple stakeholders and coordinate with our onshore and offshore entities
Evaluate and compile the required training datasets from internal and public sources and contribute to the data pre-processing phase
Expected And Desired Skills
Either of the following deep learning frameworks: PyTorch (preferred) or TensorFlow
Good understanding of designing, developing, and optimizing Large Language Models (LLMs), with hands-on experience in leveraging cutting-edge advancements in NLP and generative AI
Skilled in customizing LLMs for domain-specific applications through advanced fine-tuning, prompt engineering, and optimization strategies such as LoRA, quantization, and distillation
Knowledge of model versioning, serving, and monitoring using tools like MLflow, FastAPI, Docker, vLLM
Python used for analytics applications, including data pre-processing, EDA, statistical analysis, machine learning model performance evaluation, and benchmarking
Good scripting and programming skills to integrate with other external applications
Good interpersonal skills and the ability to communicate and explain models
Ability to work in unfamiliar business areas and to use your skills to create solutions
Ability to both work in and lead a team, and to deliver and accept peer review
Flexible approach to working environment and hours
Experience
Between 4-6 years of relevant experience
Hands-on experience with Python and/or R
Machine Learning
Deep Learning (desirable)
End-to-end development of a deep learning based model, covering model selection, data preparation, training, hyper-parameter optimization, evaluation, and performance reporting
Proven experience working in both smaller and larger organisations, with multicultural exposure
Domain and industry experience gained by serving customers in one or more of these industries - Financial Services, Professional Services, other Consumer Industries
Education Background
A Bachelor's degree in a field of study such as Computer Science, Mathematics, Statistics, or Data Science, with strong programming content, from a leading institute
An advanced degree such as a Master's or PhD is an advantage
(ref:hirist.tech)
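Since the skills above single out LoRA-style fine-tuning of LLMs, the following is a minimal, hedged sketch with the PEFT library; the base checkpoint, target modules, and ranks are illustrative assumptions rather than Digilytics' actual configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # small open checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # typically well under 1% of the base model

# Training would then proceed with the usual Trainer/accelerate loop on
# document-extraction supervision data; quantization (e.g., 4-bit loading) can be
# layered on top to cut memory further.
```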
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
The ideal candidate for this position in Ahmedabad should be a graduate with at least 3 years of experience. At Bytes Technolab, we strive to create a cutting-edge workplace infrastructure that empowers our employees and clients. Our focus on utilizing the latest technologies enables our development team to deliver high-quality software solutions for a variety of businesses. You will be responsible for leveraging your 3+ years of experience in Machine Learning and Artificial Intelligence to contribute to our projects. Proficiency in Python programming and relevant libraries such as NumPy, Pandas, and scikit-learn is essential. Hands-on experience with frameworks like PyTorch, TensorFlow, Keras, Facenet, and OpenCV will be key in your role. Your role will involve working with GPU acceleration for deep learning model development using CUDA and cuDNN. A strong understanding of neural networks, computer vision, and other AI technologies will be crucial. Experience with Large Language Models (LLMs) like GPT, BERT, and LLaMA, and familiarity with frameworks such as LangChain, AutoGPT, and BabyAGI are preferred. You should be able to translate business requirements into ML/AI solutions and deploy models on cloud platforms like AWS SageMaker, Azure ML, and Google AI Platform. Proficiency in ETL pipelines, data preprocessing, and feature engineering is required, along with experience in MLOps tools like MLflow, Kubeflow, or TensorFlow Extended (TFX). Expertise in optimizing ML/AI models for performance and scalability across different hardware architectures is necessary. Knowledge of Natural Language Processing (NLP), Reinforcement Learning, and data versioning tools like DVC or Delta Lake is a plus. Skills in containerization tools like Docker and orchestration tools like Kubernetes will be beneficial for scalable deployments. You should have experience in model evaluation, A/B testing, and establishing continuous training pipelines. Experience working in Agile/Scrum environments with cross-functional teams, along with an understanding of ethical AI principles, model fairness, and bias mitigation techniques, is important. Familiarity with CI/CD pipelines for machine learning workflows and the ability to communicate complex concepts to technical and non-technical stakeholders will be valuable.
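As a hedged illustration of the GPU-accelerated PyTorch development mentioned above, here is a tiny training loop; the model, random data, and hyperparameters are toy placeholders, and the code simply falls back to CPU when CUDA is unavailable.

```python
import torch
from torch import nn

# Use the GPU (CUDA/cuDNN) when present, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy binary classifier on random data, standing in for a real vision/NLP model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

X = torch.randn(512, 16, device=device)
y = (X.sum(dim=1, keepdim=True) > 0).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```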
Posted 2 weeks ago