7.0 - 12.0 years
8 - 13 Lacs
Pune
Work from Office
You'll make a difference by: Siemens is seeking a visionary and technically strong Lead AI/ML Engineer to spearhead the development of intelligent systems that power the future of sustainable and connected transportation. This role will lead the design and deployment of AI/ML solutions across domains such as efficiency improvements in the software development process, predictive maintenance, traffic analytics, computer vision for rail safety, and intelligent automation in rolling stock and rail infrastructure.

Key Responsibilities:
- Lead the end-to-end lifecycle of AI/ML projects, from data acquisition and model development to deployment and monitoring, within the context of mobility systems.
- Architect scalable ML pipelines that integrate with Siemens Mobility platforms and other edge/cloud-based systems.
- Collaborate with multi-functional teams including domain experts, software architects, and system engineers to translate mobility use cases into AI-driven solutions.
- Mentor junior engineers and data scientists, and foster a culture of innovation, quality, and continuous improvement.
- Evaluate and integrate innovative research in AI/ML, including generative AI, computer vision, and time-series forecasting, into real-world applications.
- Ensure compliance with Siemens AI ethics, cybersecurity, and data governance standards.

Required Qualifications:
- Bachelor's, Master's, or PhD in Computer Science, Machine Learning, Data Science, or a related field.
- 7+ years of experience in AI/ML engineering, with at least 2 years in a technical leadership role.
- Strong programming skills in Python and experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Proven experience deploying ML models in production, preferably in industrial or mobility environments.
- Familiarity with MLOps tools (e.g., MLflow, Kubeflow) and cloud platforms (Azure, AWS, or GCP).
- Solid understanding of data engineering, model versioning, and CI/CD for ML.
Preferred Qualifications:
- Experience in transportation, automotive, or industrial automation domains.
- Knowledge of edge AI deployment, sensor fusion, or real-time analytics.
- Contributions to open-source AI/ML projects or published research.

What We Offer:
- Opportunity to shape the future of mobility through AI innovation.
- Access to Siemens' global network of experts, labs, and digital platforms, flexible work arrangements, and continuous learning opportunities.
- A mission-driven environment focused on sustainability, safety, and digital transformation.

Desired Skills: 9+ years of experience is required. Strong communication, analytical, and problem-solving skills.
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Gurugram, Haryana
On-site
Position: AI/ML Engineer
Job Type: Full-Time
Location: Gurgaon, Haryana, India
Experience: 2 Years
Industry: Information Technology
Domain: Demand Forecasting in Retail/Manufacturing

Job Summary: We are seeking a skilled Time Series Forecasting Engineer to enhance existing Python microservices into a modular, scalable forecasting engine. The ideal candidate will have a strong statistical background, expertise in handling multi-seasonal and intermittent data, and a passion for model interpretability and real-time insights.

Key Responsibilities:
- Develop and integrate advanced time-series models: MSTL, Croston, TSB, Box-Cox.
- Implement rolling-origin cross-validation and hyperparameter tuning.
- Blend models such as ARIMA, Prophet, and XGBoost for improved accuracy.
- Generate SHAP-based driver insights and deliver them to a React dashboard via GraphQL.
- Monitor forecast performance with Prometheus and Grafana; trigger alerts based on degradation.

Core Technical Skills:
- Languages: Python (pandas, statsmodels, scikit-learn)
- Time Series: ARIMA, MSTL, Croston, Prophet, TSB
- Tools: Docker, REST API, GraphQL, Git-flow, Unit Testing
- Database: PostgreSQL
- Monitoring: Prometheus, Grafana
- Nice-to-Have: MLflow, ONNX, TensorFlow Probability

Soft Skills:
- Strong communication and collaboration skills
- Ability to explain statistical models in layman's terms
- Proactive problem-solving attitude
- Comfort working cross-functionally in iterative development environments

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year

Application Question(s):
- Do you have at least 2 years of hands-on experience in Python-based time series forecasting?
- Have you worked in retail or manufacturing domains where demand forecasting was a core responsibility?
- Are you currently authorized to work in India without sponsorship?
- Have you implemented or used ARIMA, Prophet, or MSTL in any of your projects?
- Have you used Croston or TSB models for forecasting intermittent demand?
- Are you familiar with SHAP for model interpretability?
- Have you containerized a forecasting pipeline using Docker and exposed it through a REST or GraphQL API?
- Have you used Prometheus and Grafana to monitor model performance in production?

Work Location: In person
Application Deadline: 05/06/2025
Expected Start Date: 05/06/2025
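Croston's method, named in the questions above, forecasts intermittent demand by exponentially smoothing two quantities separately: the size of non-zero demands and the interval between them; the forecast is their ratio. A minimal pure-Python sketch (the `alpha` smoothing constant and the series used below are illustrative, not from the posting):

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand.

    Returns one-step-ahead forecasts, one per input period:
    forecasts[i] is the forecast for period i using data up to i-1.
    """
    z = None  # smoothed non-zero demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods elapsed since the last non-zero demand
    forecasts = []
    for d in demand:
        forecasts.append(0.0 if z is None else z / p)
        if d > 0:
            if z is None:            # first observed demand initializes the estimates
                z, p = d, q
            else:                    # smooth size and interval independently
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return forecasts

print(croston([5, 0, 0, 5], alpha=0.1))
```

Unlike plain exponential smoothing, the zero periods do not drag the estimate toward zero; the growing interval estimate dilutes the per-period rate instead.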
Posted 1 month ago
0.0 - 8.0 years
0 Lacs
Delhi
On-site
Job Title: Software Engineer – AI/ML
Location: Delhi
Experience: 4-8 years

About the Role: We are seeking a highly experienced and innovative AI & ML engineer to lead the design, development, and deployment of advanced AI/ML solutions, including Large Language Models (LLMs), for enterprise-grade applications. You will work closely with cross-functional teams to drive AI strategy, define architecture, and ensure scalable and efficient implementation of intelligent systems.

Key Responsibilities:
- Design and architect end-to-end AI/ML solutions including data pipelines, model development, training, and deployment.
- Develop and implement ML models for classification, regression, NLP, computer vision, and recommendation systems.
- Build, fine-tune, and integrate Large Language Models (LLMs) such as GPT, BERT, and LLaMA into enterprise applications.
- Evaluate and select appropriate frameworks, tools, and technologies for AI/ML projects.
- Lead AI experimentation, proof-of-concepts (PoCs), and model performance evaluations.
- Collaborate with data engineers, product managers, and software developers to integrate models into production environments.
- Ensure robust MLOps practices, version control, reproducibility, and model monitoring.
- Stay up to date with advancements in AI/ML, especially in generative AI and LLMs, and apply them innovatively.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- Minimum 4 years of experience in AI/ML.
- Deep understanding of machine learning algorithms, neural networks, and deep learning architectures.
- Proven experience working with LLMs, transformer models, and prompt engineering.
- Hands-on experience with ML frameworks such as TensorFlow, PyTorch, Hugging Face, and LangChain.
- Proficiency in Python and experience with cloud platforms (AWS, Azure, or GCP) for ML workloads.
- Strong knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, etc.) and practices.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and embeddings.
- Exposure to real-time AI systems, streaming data, or edge AI.
- Contributions to AI research, open-source projects, or publications in AI/ML.

Interested candidates, kindly apply here or share your resume at hr@softprodigy.com
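The vector databases listed above (Pinecone, FAISS, Weaviate) all implement the same core operation: rank stored embeddings by similarity to a query embedding. A toy sketch of that operation as brute-force cosine-similarity search over an in-memory dictionary (the vectors and document ids are made up for illustration; real systems use approximate-nearest-neighbour indexes to scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query, store, k=2):
    """Return the ids of the k stored embeddings most similar to the query."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical 3-dimensional embeddings keyed by document id.
store = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(search([1.0, 0.05, 0.0], store, k=2))
```

In a retrieval-augmented LLM pipeline, the query vector would come from the same embedding model used to index the documents.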
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who We Are: Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, enabling more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders.

Who You Are: A passionate and skilled Python AI/ML engineer with 3-5 years of experience. You will work on cutting-edge projects involving Generative AI, machine learning, and scalable systems, helping to build intelligent solutions that deliver real business value. If you thrive in a fast-paced environment and love solving complex problems using data and intelligent algorithms, we'd love to hear from you.

What You'll Do:
- Design, develop, and deploy machine learning models and Generative AI solutions.
- Work on end-to-end ML pipelines, from data ingestion and preprocessing to model deployment and monitoring.
- Collaborate with cross-functional teams to understand requirements and deliver AI-driven features.
- Build robust, scalable, and well-documented Python-based APIs for ML services.
- Optimize database interactions and ensure efficient data storage and retrieval for AI applications.
- Stay updated with the latest trends in AI/ML and integrate innovative approaches into projects.

What You'll Need:
- Python – strong hands-on experience.
- Machine Learning – practical knowledge of supervised, unsupervised, and deep learning techniques.
- Generative AI – experience working with LLMs or similar GenAI technologies.
- API Development – RESTful APIs and integration of ML models into production services.
- Databases – experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Good to Have:
- Cloud Platforms – familiarity with AWS, Azure, or Google Cloud Platform (GCP).
- TypeScript/JavaScript – frontend or full-stack exposure for ML product interfaces.
- Experience with MLOps tools and practices (e.g., MLflow, Kubeflow).
- Exposure to containerization (Docker) and orchestration (Kubernetes).

What's In It For You? At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Senior Machine Learning Engineer at Gojek, you will be at the forefront of applying machine learning to drive strategic and operational improvements. You will lead the development of scalable ML solutions, mentor junior engineers, and collaborate with cross-functional teams to build and deploy models that enhance our service offerings and improve operational efficiency.

What You Will Do:
- Design, develop, and deploy machine learning models to solve complex business problems and predict user behaviour.
- Optimise and scale real-time ML models for production environments, ensuring high availability and low latency.
- Architect and implement scalable and efficient ML pipelines and infrastructure.
- Collaborate with data scientists, engineers, and product teams to integrate ML models into production systems.
- Design and execute experimentation frameworks (A/B testing, multi-armed bandits, etc.) to evaluate model performance and improve decision-making.
- Continuously monitor and improve model performance in production, addressing concept drift, latency, and reliability challenges.
- Research and implement best practices in MLOps, automation, and deployment.
- Develop and deploy Generative AI applications, integrating LLMs and other GenAI models into production workflows.

What You Will Need:
- 4 years of hands-on experience in Machine Learning and MLOps.
- Proven experience in deploying and maintaining real-time machine learning models in high-traffic consumer applications.
- Strong understanding of system design, distributed systems, and cloud-based ML infrastructure.
- Proficiency in Python, TensorFlow/PyTorch, and ML frameworks.
- Experience with experimentation design, A/B testing, and statistical evaluation of ML models.
- Knowledge of feature stores, model monitoring, and automated retraining workflows.
- Familiarity with ML lifecycle management tools (MLflow, Kubeflow, TFX, etc.).
- Strong problem-solving skills and ability to work in a fast-paced environment.
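Multi-armed bandits, mentioned above alongside A/B testing, trade off exploring candidate arms against exploiting the best estimate so far. A minimal epsilon-greedy sketch (the two arms, their payout rates, and the epsilon value are illustrative, not from the posting):

```python
import random

class EpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore a random arm
        # exploit: pick the arm with the highest estimated reward
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean

# Simulate 1000 rounds against two hypothetical arms with 0.8 and 0.2 payout rates.
random.seed(0)
bandit = EpsilonGreedy(n_arms=2, epsilon=0.1)
for _ in range(1000):
    arm = bandit.select()
    reward = 1.0 if random.random() < (0.8 if arm == 0 else 0.2) else 0.0
    bandit.update(arm, reward)
# The higher-payout arm ends up with the vast majority of pulls.
```

Compared with a fixed-split A/B test, the bandit shifts traffic toward the winner during the experiment itself, which is why it suits continuously running ranking or pricing decisions.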
Our Data Science team currently consists of 40+ people based in India, Indonesia, and Singapore who support Southeast Asia's leading Gojek business. We oversee all things data and work to become a thought partner for our business users, product team, and decision makers. It's our job to ensure that they have a structured approach to data-driven problem-solving. Right now, our focus revolves around how to make customers, drivers, and merchants happy and delighted. We have so far created millions of dollars of impact across different journeys of customers, drivers, and merchants. We work hand-in-glove with the Engineering, PM, and strategy functions, whether constructing a new product or brainstorming on problems such as how to reduce wait times for drivers, how to improve assortment, or whether to treat convenience-seeking customers differently from value-seeking customers. As a team, we're concerned not only with the growth of the company, but with each other's personal and professional growth, too. Along with coming from diverse backgrounds, we often have fun sessions to talk about everything and anything, from data insights to our current movie lists.

About GoTo Group: GoTo Group is the largest digital ecosystem in Indonesia, with a mission to "Empower Progress" by offering technological infrastructure and solutions for everyone to access and thrive in the digital economy. The GoTo ecosystem consists of on-demand transportation services, food and grocery delivery, logistics and fulfillment, as well as financial and payment services through the Gojek and GoTo Financial platforms. It is the first platform in Southeast Asia to host these crucial services in a single ecosystem, capturing the majority of Indonesia's vast consumer households.
About Gojek: Gojek is Southeast Asia's leading on-demand platform and pioneer of the multi-service ecosystem, with over 2.5 million driver partners across the region offering a wide range of services such as transportation, food delivery, logistics, and more. With its mission to create impact at scale, Gojek is committed to resolving consumer problems and raising standards of living by connecting consumers to the best providers of goods and services in the market.

About GoTo Financial: GoTo Financial accelerates financial inclusion through its leading financial services and merchant solutions. Its consumer services include GoPay and GoPayLater, and it serves businesses of all sizes through Midtrans, Moka, GoBiz Plus, GoBiz, and Selly. With its trusted and inclusive ecosystem of products, GoTo Financial is open to new growth opportunities and aims to empower everyone to Make It Happen, Make It Together, Make It Last.

GoTo and its business units, including Gojek and GoTo Financial ("GoTo"), only post job opportunities on our official channels on our respective company websites and on LinkedIn. GoTo is not liable for any job postings or job offers that did not originate from us. You should conduct your own due diligence to avoid falling victim to fake job scams that do not originate from GoTo's official recruitment channels.
Posted 1 month ago
8 - 18 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS! TCS is hiring for Data Architect.

Interview Mode: Virtual
Required Experience: 8-18 years
Work Location: Chennai, Kolkata, Hyderabad

Data Architect (Azure/AWS):
- Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, data mesh. Good to have: Spark SQL, Spark Streaming, Kafka.
- Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (data lake), AWS Glue, AWS Redshift/Athena. Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch.

If interested, kindly send your updated CV and the details below via e-mail to srishti.g2@tcs.com:
Name:
E-mail ID:
Contact Number:
Highest qualification:
Preferred Location:
Highest qualification university:
Current organization:
Total years of experience:
Relevant years of experience:
Any gap (mention no. of months/years, career/education):
If any, then reason for gap:
Is it rebegin:
Previous organization name:
Current CTC:
Expected CTC:
Notice Period:
Have you worked with TCS before (Permanent/Contract):
Posted 1 month ago
0 years
0 Lacs
Vapi, Gujarat, India
On-site
Job Title: AI Lead Engineer
Location: Vapi, Gujarat
Experience Required: 5+ Years
Working Days: 6 Days a Week (Monday–Saturday)
Industry Exposure: Manufacturing, retail, finance, healthcare, life sciences, or related fields.

Job Description: We are seeking a highly skilled and hands-on AI Lead to join our team in Vapi. The ideal candidate will have a proven track record of developing and deploying machine learning systems in real-world environments, along with the ability to lead AI projects from concept to production. You will work closely with business and technical stakeholders to drive innovation, optimize operations, and implement intelligent automation solutions.

Key Responsibilities:
- Lead the design, development, and deployment of AI/ML models for business-critical applications.
- Build and implement computer vision systems (e.g., defect detection, image recognition) using frameworks like OpenCV and YOLO.
- Develop predictive analytics models (e.g., predictive maintenance, forecasting) using time series and machine learning algorithms such as XGBoost.
- Build and deploy recommendation engines and optimization models to improve operational efficiency.
- Establish and maintain robust MLOps pipelines using tools such as MLflow, Docker, and Jenkins.
- Collaborate with stakeholders across business and IT to define KPIs and deliver AI solutions aligned with organizational objectives.
- Integrate AI models into existing ERP or production systems using REST APIs and microservices.
- Mentor and guide a team of junior ML engineers and data scientists.
Required Skills & Technologies:
- Programming Languages: Python (advanced), SQL, Bash, Java (basic)
- ML Frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
- DevOps & MLOps Tools: Docker, FastAPI, MLflow, Jenkins, Git
- Data Engineering & Visualization: Pandas, Spark, Airflow, Tableau
- Cloud Platforms: AWS (S3, EC2, SageMaker – basic)
- Specializations: Computer Vision (YOLOv8, OpenCV), NLP (spaCy, Transformers), Time Series Analysis
- Deployment: ONNX, REST APIs, ERP System Integration

Qualifications:
- B.Tech / M.Tech / M.Sc in Computer Science, Data Science, or a related field.
- 6+ years of experience in AI/ML with a strong focus on product-ready deployments.
- Demonstrated experience leading AI/ML teams or projects.
- Strong problem-solving skills and the ability to communicate effectively with cross-functional teams.
- Domain experience in manufacturing, retail, or healthcare preferred.

What We Offer:
- A leadership role in an innovation-driven team
- Exposure to end-to-end AI product development in a dynamic industry environment
- Opportunities to lead, innovate, and mentor
- Competitive salary and benefits package
- A 6-day work culture supporting growth and accountability

This is a startup environment within a reputable, established company. We are looking for someone who can work Monday to Saturday, lead a team, generate new solutions and ideas, and manage projects effectively. Please fill in the form below before applying: https://forms.gle/8b3gdxzvc2JwnYfZ6
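Predictive-maintenance models of the kind described above (time series plus XGBoost) are typically trained on lagged sliding-window features of sensor readings. A small pure-Python sketch of turning a series into supervised (features, target) pairs; the window size and the sample readings are illustrative only:

```python
def make_lag_features(series, n_lags=3):
    """Build supervised (X, y) pairs from a time series: each row of X
    holds the previous n_lags values, and the target y is the next value."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # window of past readings
        y.append(series[i])             # the value to predict
    return X, y

# Hypothetical sensor readings; any tabular learner (e.g., XGBoost)
# could then be fit on X to predict y.
X, y = make_lag_features([10, 12, 11, 13, 14, 13], n_lags=3)
print(X[0], y[0])
```

Rolling-window statistics (means, variances) over the same windows are the usual next step before handing the matrix to a gradient-boosted model.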
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: As a Senior Data and Applied Scientist, you will work with Pattern's Data Science team to curate and analyze data and apply machine learning models and statistical techniques to optimize advertising spend on ecommerce platforms.

What you'll do:
- Design, build, and maintain machine learning and statistical models that optimize advertising campaigns, improving search visibility and conversion rates on ecommerce platforms.
- Continuously improve the quality of our machine learning models, especially for key metrics like search ranking, keyword bidding, CTR, and conversion rate estimation.
- Conduct research to integrate new data sources, innovate in feature engineering, fine-tune algorithms, and enhance data pipelines for robust model performance.
- Analyze large datasets to extract actionable insights that guide advertising decisions.
- Work closely with teams across different regions (US and India), ensuring seamless collaboration and knowledge sharing.
- Dedicate 20% of your time to MLOps for efficient, reliable model deployment and operations.

What we're looking for:
- Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field.
- 3-6 years of industry experience in building and deploying machine learning solutions.
- Strong data manipulation and programming skills in Python and SQL, with hands-on experience with libraries such as Pandas, NumPy, Scikit-learn, and XGBoost.
- Strong problem-solving skills and an ability to analyze complex data.
- In-depth expertise in a range of machine learning and statistical techniques, such as linear and tree-based models, along with an understanding of model evaluation metrics.
- Experience with Git, AWS, Docker, and MLflow is advantageous.

Additional Pluses:
- Portfolio: an active Kaggle or GitHub profile showcasing relevant projects.
- Domain Knowledge: familiarity with advertising and ecommerce concepts, which would help in tailoring models to business needs.

Pattern is an equal opportunity employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees.
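CTR estimation, one of the key metrics called out above, is often stabilized for low-traffic keywords with additive (Beta-prior) smoothing rather than the raw click/impression ratio. A toy sketch; the prior pseudo-counts below are illustrative choices, not values from the posting:

```python
def smoothed_ctr(clicks, impressions, prior_clicks=1.0, prior_impressions=20.0):
    """Posterior-mean CTR under a Beta prior: shrinks low-volume estimates
    toward the prior rate instead of trusting extremes like 2 clicks on
    2 impressions (raw CTR of 100%)."""
    return (clicks + prior_clicks) / (impressions + prior_impressions)

low_volume = smoothed_ctr(2, 2)        # pulled strongly toward the prior rate
high_volume = smoothed_ctr(500, 10000) # barely moved: the data dominates
```

With plenty of impressions the smoothing becomes negligible, so the estimator interpolates between the prior and the empirical rate as evidence accumulates, which keeps keyword-bidding models from overreacting to sparse data.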
Posted 1 month ago
0 years
0 Lacs
India
On-site
Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement, and cloud teams to gain deep insights into cost optimization, compliance, and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier, and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint™ in hybrid environments, from on-premises to SaaS to containers to cloud.

We're transforming the software industry. We're Flexera. With more than 50,000 customers across the world, we're achieving that goal. But we know we can't do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we're consistently recognized by Gartner, Forrester, and IDC as a category leader in the marketplace. Learn more at flexera.com.

Job Summary: We are seeking a skilled and motivated Senior Data Engineer to join our Automation, AI/ML team. In this role, you will work on designing, building, and maintaining data pipelines and infrastructure to support AI/ML initiatives, while contributing to the automation of key processes. This position requires expertise in data engineering, cloud technologies, and database systems, with a strong emphasis on scalability, performance, and innovation.

Key Responsibilities:
- Identify and automate manual processes to improve efficiency and reduce operational overhead.
- Design, develop, and optimize scalable data pipelines to integrate data from multiple sources, including Oracle and SQL Server databases.
- Collaborate with data scientists and AI/ML engineers to ensure efficient access to high-quality data for training and inference models.
- Implement automation solutions for data ingestion, processing, and integration using modern tools and frameworks.
- Monitor, troubleshoot, and enhance data workflows to ensure performance, reliability, and scalability.
- Apply advanced data transformation techniques, including ETL/ELT processes, to prepare data for AI/ML use cases.
- Develop solutions to optimize storage and compute costs while ensuring data security and compliance.

Required Skills and Qualifications:
- Experience in identifying, streamlining, and automating repetitive or manual processes.
- Proven experience as a Data Engineer, working with large-scale database systems (e.g., Oracle, SQL Server) and cloud platforms (AWS, Azure, Google Cloud).
- Expertise in building and maintaining data pipelines using tools like Apache Airflow, Talend, or Azure Data Factory.
- Strong programming skills in Python, Scala, or Java for data processing and automation tasks.
- Experience with data warehousing technologies such as Snowflake, Redshift, or Azure Synapse.
- Proficiency in SQL for data extraction, transformation, and analysis.
- Familiarity with tools such as Databricks, MLflow, or H2O.ai for integrating data engineering with AI/ML workflows.
- Experience with DevOps practices and tools, such as Jenkins, GitLab CI/CD, Docker, and Kubernetes.
- Knowledge of AI/ML concepts and their integration into data workflows.
- Strong problem-solving skills and attention to detail.

Preferred Qualifications:
- Knowledge of security best practices, including data encryption and access control.
- Familiarity with big data technologies like Hadoop, Spark, or Kafka.
- Exposure to Databricks for data engineering and advanced analytics workflows.

Flexera is proud to be an equal opportunity employer.
Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing careers@flexera.com.
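The ETL/ELT responsibilities above reduce to extracting rows from a source, applying transformations, and loading them into a curated table. A minimal stdlib sketch using an in-memory SQLite database as a stand-in for the Oracle/SQL Server sources the posting mentions (the table names and the cleaning rule are illustrative):

```python
import sqlite3

def run_etl(conn):
    cur = conn.cursor()
    # Extract: raw events with inconsistent casing and null amounts
    rows = cur.execute("SELECT customer, amount FROM raw_events").fetchall()
    # Transform: normalize names, drop records with missing amounts
    clean = [(name.strip().lower(), amt) for name, amt in rows if amt is not None]
    # Load: write the cleaned rows into the curated table
    cur.executemany("INSERT INTO curated_events VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (customer TEXT, amount REAL)")
conn.execute("CREATE TABLE curated_events (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(" Alice ", 10.0), ("BOB", None), ("Carol", 5.5)],
)
loaded = run_etl(conn)
```

Production pipelines would run the same extract-transform-load steps as scheduled tasks (e.g., in Airflow) against real warehouses, with the transform pushed into SQL for an ELT variant.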
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
- Work closely with clients to understand their business requirements and design data solutions that meet their needs.
- Develop and implement end-to-end data solutions that include data ingestion, data storage, data processing, and data visualization components.
- Design and implement data architectures that are scalable, secure, and compliant with industry standards.
- Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
- Participate in presales activities, including solution design, proposal creation, and client presentations.
- Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
- Stay up-to-date with industry trends and emerging technologies related to data architecture and engineering.
- Develop and maintain relationships with clients to ensure their ongoing satisfaction and identify opportunities for additional business.
- Understand the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
- Exposure to Gen AI and emerging technologies.
- Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
- Good knowledge of data governance, data warehousing, and data modelling.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
- Strong technical background in data architecture, data engineering, and data management.
- Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
- Experience with presales activities such as solution design, proposal creation, and client presentations.
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
- Experience with Kubernetes and the Gen AI tools and tech stack.
- Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
- Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
- Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.

Tools and Tech Stack:
- Hadoop ecosystem (data architecture and engineering). Preferred: Cloudera Data Platform (CDP) or Data Fabric. Tools: HDFS, Hive, Spark, HBase, Oozie.
- Data warehousing. Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, Azure Databricks. On-premises: Teradata, Vertica.
- Data integration and ETL tools: Apache NiFi, Talend, Informatica, Azure Data Factory, Glue.
- Cloud platforms: Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
- Cloud-native components. Data lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage. Data streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
- HPE platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
- AI and Gen AI technologies. AI lifecycle management / MLOps: MLflow, Kubeflow, Azure ML, SageMaker, Ray. Inference tools: TensorFlow Serving, KServe, Seldon. Generative AI frameworks: Hugging Face Transformers, LangChain. Tools: OpenAI API (e.g., GPT-4).
- Kubernetes orchestration and deployment. Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes. Tools: Helm.
- CI/CD for data pipelines and applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
Posted 1 month ago
0 years
0 Lacs
Chandigarh, India
On-site
Skill Set Required:
- 2-7 years of experience in software engineering and ML development.
- Strong proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, or PyTorch.
- Experience building and evaluating models, along with data preprocessing and feature engineering.
- Proficiency in REST APIs, Docker, Git, and CI/CD tools.
- Solid foundation in software engineering principles, including data structures, algorithms, and design patterns.
- Hands-on experience with MLOps platforms (e.g., MLflow, TFX, Airflow, Kubeflow).
- Exposure to NLP, large language models (LLMs), or computer vision projects.
- Experience with cloud platforms (AWS, GCP, Azure) and managed ML services.
- Contributions to open-source ML libraries or participation in ML competitions (e.g., Kaggle, DrivenData) is a plus.
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Location: Bangalore, Hyderabad, Chennai, Pune, Noida, Trivandrum, Kochi
Experience Level: 4+ years
Employment Type: Full-time

About the Role: We are looking for a passionate and versatile Software Engineer to join our Innovation Team. This role is ideal for someone who thrives in a fast-paced, exploratory environment and is excited about building next-generation solutions using emerging technologies like Generative AI and advanced web frameworks.

Key Responsibilities:
- Design, develop, and maintain scalable front-end applications using React.
- Build and expose RESTful APIs using Python with Flask or FastAPI.
- Integrate back-end logic with SQL databases and ensure data flow efficiency.
- Collaborate on cutting-edge projects involving Generative AI technologies.
- Deploy and manage applications in a Microsoft Azure cloud environment.
- Work closely with cross-functional teams including data scientists, product owners, and UX designers to drive innovation from concept to delivery.

Must-Have Skills:
- Strong proficiency in React.js and modern JavaScript frameworks.
- Hands-on experience with Python, especially using Flask or FastAPI for web development.
- Good understanding of SQL and relational database concepts.
- Exposure to Generative AI frameworks and tools.
- Basic understanding of Microsoft Azure services and deployment processes.

Good-to-Have Skills:
- Knowledge of Machine Learning & AI workflows.
- Experience working with NoSQL databases like MongoDB or Cosmos DB.
- Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow).
- Understanding of CI/CD pipelines using tools like GitHub Actions, Azure DevOps, or Jenkins.

Skills: Python, React, SQL basics, Gen AI
Posted 1 month ago
0 years
0 Lacs
India
On-site
About Us
At Valiance, we are building next-generation AI solutions to solve high-impact business problems. As part of our AI/ML team, you’ll work on deploying cutting-edge Gen AI models, optimizing performance, and enabling scalable experimentation.
Role Overview
We are looking for a skilled MLOps Engineer with hands-on experience in deploying open-source Generative AI models on cloud and on-prem environments. The ideal candidate should be adept at setting up scalable infrastructure, observability, and experimentation stacks while optimizing for performance and cost.
Responsibilities
- Deploy and manage open-source Gen AI models (e.g., LLaMA, Mistral, Stable Diffusion) on cloud and on-prem environments
- Set up and maintain observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) for monitoring Gen AI model health and performance
- Optimize infrastructure for latency, throughput, and cost-efficiency in GPU/CPU-intensive environments
- Build and manage an experimentation stack to enable rapid testing of various open-source Gen AI models
- Work closely with ML scientists and data teams to streamline model deployment pipelines
- Maintain CI/CD workflows and automate key stages of the model lifecycle
- Leverage NVIDIA tools (Triton Inference Server, TensorRT, CUDA, etc.) to improve model serving performance (preferred)
Required Skills & Qualifications
- Strong experience in deploying ML/Gen AI models using Kubernetes, Docker, and CI/CD tools
- Proficiency in Python, Bash scripting, and infrastructure-as-code tools (e.g., Terraform, Helm)
- Experience with ML observability and monitoring stacks
- Familiarity with cloud services (GCP, AWS, or Azure) and/or on-prem environments
- Exposure to model tracking tools like MLflow, Weights & Biases, or similar
- Bachelor’s/Master’s in Computer Science, Engineering, or related field
Nice to Have
- Hands-on experience with NVIDIA ecosystem (Triton, CUDA, TensorRT, NGC)
- Familiarity with serving frameworks like vLLM, DeepSpeed, or Hugging Face Transformers
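As an illustration of the deployment work described above, here is a minimal container sketch using vLLM's OpenAI-compatible server. The base image, package pin, and model ID (`mistralai/Mistral-7B-Instruct-v0.2`) are assumptions for the sketch; versions and flags would need verifying against the vLLM documentation before use.

```dockerfile
# Sketch only: serve an open-source LLM with vLLM's OpenAI-compatible server.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3-pip && rm -rf /var/lib/apt/lists/*
RUN pip3 install vllm  # pin an exact version in a real deployment

EXPOSE 8000
# --model takes a Hugging Face model ID; swap in whichever open model you deploy.
CMD ["python3", "-m", "vllm.entrypoints.openai.api_server", \
     "--model", "mistralai/Mistral-7B-Instruct-v0.2", "--port", "8000"]
```

In practice this image would be pushed by a CI/CD workflow and scheduled on a GPU node pool in Kubernetes, with Prometheus scraping the server's metrics endpoint for the observability stack mentioned above.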
Posted 1 month ago
8 - 10 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 8-10 years
Location: Pune, Mumbai, Bangalore, Noida, Chennai, Coimbatore, Hyderabad
JD: Databricks with Data Scientist experience
- 4 years of relevant work experience as a data scientist
- Minimum 2 years of experience in Azure Cloud using Databricks Services, PySpark, Natural Language API, MLflow
- Experience designing and building statistical forecasting models
- Experience ingesting data from APIs and databases
- Experience in data transformation using PySpark and SQL
- Experience designing and building machine learning models
- Experience designing and building optimization models, including expertise with statistical data analysis
- Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data
- Demonstrated skills in selecting the right statistical tools given a data analysis problem
- Effective written and verbal communication skills
Skillset: Python, PySpark, Databricks, MLflow, ADF
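The statistical-forecasting requirement above can be sketched with simple exponential smoothing (pure Python, hypothetical demand series; on Databricks this would typically run over PySpark DataFrames with the fit logged to MLflow):

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each level blends the newest
    observation with the previous level; the final level is the
    one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical monthly demand series
demand = [100, 110, 105, 115, 120]
forecast = exponential_smoothing(demand, alpha=0.5)  # -> 115.0
```

The smoothing factor `alpha` trades responsiveness against noise suppression: `alpha=1` just echoes the last observation, while small values average over a long history.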
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As an Account Executive your mission will be to help further build our India business, which is one of our fastest growing markets in APJ. The Databricks Sales Team is driving growth through strategic and innovative partnership with our customers, helping businesses thrive by solving the world's toughest problems with our solutions. You will be inspiring and guiding customers on their data journey, making organisations more collaborative and productive than ever before. You will play an important role in the business in India, with the opportunity to strategically build your territory in close partnership with the business leaders. Using your passion with technology and drive to build, you will help businesses all across India reach their full potential, through the power of Databricks Data Intelligence Platform. You know how to sell innovation and change and can guide deals forward to compress decision cycles. You love understanding a product in-depth and are passionate about communicating its value to customers and partners. Always prospecting for new opportunities, you will close new accounts while growing our business in existing accounts. 
The Impact You Will Have
- Prospect for new customers
- Assess your existing customers and develop a strategy to identify and engage all buying centres
- Use a solution approach to selling and creating value for customers
- Identify the most viable use cases in each account to maximise Databricks' impact
- Orchestrate and work with teams to maximise the impact of the Databricks ecosystem on your territory
- Build value with all engagements to promote successful negotiations and close
- Promote the Databricks enterprise cloud data platform
- Be customer-focused by delivering technical and business results using the Databricks Data Intelligence Platform
- Promote teamwork
What We Look For
- You have previously worked in an early-stage company and you know how to navigate and be successful in a fast-growing organisation
- 5+ years of sales experience in SaaS/PaaS or Big Data companies
- Prior customer relationships with CIOs and important decision-makers
- Ability to simply articulate intricate cloud technologies and big data
- 3+ years of experience exceeding sales quotas
- Success closing new accounts while upselling existing accounts
- Bachelor's Degree
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Must Have Skills:
- Experience in programming using Python or Java
- Experience in design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG (Retrieval-Augmented Generation) workflows
- Extensive knowledge of large language models, natural language processing techniques, and prompt engineering
- Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios
- Familiarity with Oracle Cloud Infrastructure or similar cloud platforms
- Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders
- Analyzes problems, identifies solutions, and makes decisions
- Demonstrates a willingness to learn, adapt, and grow professionally
Good to Have Skills:
- Experience in LLM architectures, model evaluation, and fine-tuning techniques
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Proficiency in databases (e.g., Oracle, MySQL), and in developing and executing AI over cloud data platforms, associated data stores, graph stores, vector stores, and pipelines
- Understanding of the security and compliance requirements for ML/GenAI implementations
Career Level - IC1
Responsibilities
- Be a leader, a ‘doer’, with a vision and desire to contribute to a business with a technical project focus and a mentality centred on a superior customer experience
- Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of the delivery of these solutions across EMEA
- Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success
- Take a proactive approach to the task and be a self-learner
- Self-motivated, with a natural drive to learn and pick up new challenges
- Results-oriented: you won’t be satisfied until the job is done with the right quality
- Able to work in (virtual) teams; getting a specific job done often requires working together with many colleagues spread out in different countries
- Comfortable in a collaborative, agile environment
- Able to adapt to change with a positive mindset
- A good communicator, both oral and written: you can not only understand complex technologies but also explain them to others in clear and simple terms, articulate key messages very clearly, and present in front of customers
https://www.oracle.com/in/cloud/cloud-lift/
Qualifications
Career Level - IC1
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
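To ground the RAG workflow the listing above describes, here is a toy retrieval step in pure Python with hypothetical documents. Real systems use embedding models and a vector store (LlamaIndex, LangChain retrievers, etc.), but the core operation is the same nearest-neighbour lookup by cosine similarity; the retrieved text is then spliced into the LLM prompt.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query (the R in RAG)."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

# Hypothetical knowledge-base snippets
docs = [
    "invoices are processed within five business days",
    "the vacation policy grants twenty days per year",
]
best = retrieve("how many vacation days do I get", docs)
```

Swapping the bag-of-words vectors for dense embeddings, and the `max` over a list for an approximate nearest-neighbour index, turns this sketch into the production pattern.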
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Job Summary
We are seeking a highly skilled and motivated Lead DevOps Engineer with a strong background in IIoT environments to join our team. The ideal candidate will possess extensive experience in Azure DevOps, Linux environments, containerization, and virtualization. This role offers the flexibility to fit your unique skillset, and we provide opportunities for continuous learning and professional development.
In This Role, Your Responsibilities Will Be:
- Lead the design, implementation, and maintenance of CI/CD pipelines using Azure DevOps.
- Manage and deploy artefacts and Docker images on Linux gateways and virtual machines (VMs).
- Utilize Proxmox for efficient management and orchestration of VMs.
- Collaborate with development teams to ensure smooth integration of applications (Node.js, Python) into deployment pipelines.
- Monitor and enhance the performance, scalability, and reliability of IIoT systems.
- Maintain and manage infrastructure, e.g. the container registry and MLflow, and set up test systems.
- Automate and streamline operations and processes, building and maintaining tools for deployment, monitoring, and operations.
- Provide technical leadership and mentorship to team members, fostering a culture of continuous improvement and innovation.
- Troubleshoot and resolve complex issues in development, test, and production environments.
- Ensure security best practices are followed across all DevOps activities.
- Stay updated with emerging technologies and industry best practices, and explore how they can be integrated into our processes.
Who You Are:
You take initiative rather than waiting for instructions, and proactively seek opportunities to contribute. You adapt quickly to new situations and apply knowledge effectively. You clearly convey ideas and actively listen to others to complete assigned tasks as planned.
For This Role, You Will Need:
- Proven experience as a DevOps Engineer, preferably in an IIoT environment.
- Proficiency in using Azure DevOps for CI/CD pipeline creation and management.
- Strong expertise in Linux operating systems, including advanced proficiency in Bash scripting.
- Hands-on experience with containerization technologies such as Docker.
- Solid understanding of virtualization platforms, particularly Proxmox.
- Experience with Node.js and Python application deployment and management.
- Familiarity with infrastructure as code (IaC) and configuration management tools.
- Excellent problem-solving skills and the ability to work under pressure.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- A proactive mindset with a strong commitment to learning and continuous improvement.
Preferred Qualifications that Set You Apart:
- Certifications in Azure DevOps or related technologies.
- Experience in other cloud platforms (e.g., AWS, Google Cloud) is a plus.
- Knowledge of additional scripting or programming languages.
- Experience with monitoring tools and frameworks.
- Familiarity with agile methodologies and practices.
Our Culture & Commitment to You
At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together, are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, Employee Assistance Program, employee resource groups, recognition, and much more.
Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave.
About Us: Why Emerson
Our Commitment to Our People
At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together.
Accessibility Assistance or Accommodation
If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com.
About Emerson
Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability.
With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! No calls or agencies please.
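A minimal Azure Pipelines sketch of the CI/CD work described in the listing above: build a Docker image and push it to a container registry. The stage names, repository path, and the service connection `registry-conn` are assumptions for illustration; a real pipeline would reference a service connection that actually exists in the project.

```yaml
# Sketch of an Azure DevOps pipeline: build a Docker image on each push
# to main and push it to a container registry. Names are hypothetical.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push gateway image
    inputs:
      command: buildAndPush
      containerRegistry: registry-conn   # hypothetical service connection
      repository: iiot/gateway-app
      dockerfile: Dockerfile
      tags: |
        $(Build.BuildId)
```

Deployment to the Linux gateways and Proxmox VMs mentioned above would follow as a separate stage, pulling the tagged image onto each target.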
Posted 1 month ago
0 years
0 Lacs
Amritsar, Punjab, India
Remote
Experience: 5.00+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Precanto)
(Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)
What do you need for this opportunity?
Must-have skills: async workflows, MLOps, Ray Tune, Data Engineering, MLflow, Supervised Learning, Time-Series Forecasting, Docker, Machine Learning, NLP, Python, SQL
About the client: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description
Full-time | Team: Data & ML Engineering
We’re looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).
What You Will Do:
- Build and optimize machine learning models — from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform
Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing
Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools
Tech Stack:
- Languages: Python, SQL
- Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
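The async-workflow skill listed above can be sketched with stdlib `asyncio`; the `fetch_record` coroutine is a hypothetical stand-in for I/O-bound work such as S3 reads or API calls:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    """Stand-in for an I/O-bound call (e.g., S3 or a REST API)."""
    await asyncio.sleep(0.01)  # simulated network latency
    return {"id": record_id, "value": record_id * 2}

async def process_batch(ids):
    """Fan out the I/O concurrently instead of awaiting one call at a time."""
    return await asyncio.gather(*(fetch_record(i) for i in ids))

results = asyncio.run(process_batch(range(5)))
```

Because the five simulated calls overlap, the batch completes in roughly one call's latency rather than five; that is the whole point of async workflows for high-throughput data processing.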
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Anyscale
At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We’re commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more have Ray in their tech stacks to accelerate the progress of AI applications out into the real world. With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert. Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date.
About The Role
The ML Development Platform team is responsible for creating the suite of tools and services that enable users to create production-quality applications using Ray. The product is the user’s primary interface into the world of Anyscale, and by building a polished, stable, and well-designed product, we are able to enable a magical developer experience for our users. This team provides the interface for administering Anyscale components including Anyscale workspaces, production and development tools, MLOps tools and integrations, and more. Beyond the user-facing features, engineers help build out critical pieces of infrastructure and architecture needed to power our platform at scale. With a taste for good products, a willingness to work with and understand the user base, and the technical talent to build high-quality software, engineers on this team help create a delightful experience for all our users, from new developers learning to use Ray to businesses powering their products on Anyscale.
As Part Of This Role You Will:
- Develop a next-gen MLOps platform and development tooling centered around Ray
- Build high-quality frameworks for accelerating the AI development lifecycle, from data preparation to training to production serving
- Work with a team of leading distributed systems and machine learning experts
- Communicate your work to a broader audience through talks, tutorials, and blog posts
We'd Love To Hear From You If You Have:
- At least 2 years of backend development experience with a solid background in algorithms, data structures, and system design
- Experience working with modern machine learning tooling, including PyTorch, MLflow, data catalogs, etc.
- Familiarity with technologies such as Python, FastAPI, or SQLAlchemy
- Excitement to build tools that power the next generation of cloud applications!
Bonus Points If You Have:
- Experience in building and maintaining open-source projects
- Experience in building and operating machine learning infrastructure in production
- Experience in building highly available serving systems
A Snapshot Of Projects You Might Work On:
- Full-stack work on Anyscale workspaces, debugging, and dependency management on Anyscale
- Development of new MLOps tooling and capabilities, like dataset management, experiment and lineage tracking, etc.
- Leading the development of the Anyscale SDK, authentication, etc.
Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. Anyscale Inc. is an E-Verify company and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.
Posted 1 month ago
4 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position:
We are conducting an in-person drive for Data Science skills in Pune and Bangalore on 31st May 2025. We are looking for an experienced and talented GenAI Developer to join our growing data competency team. The ideal candidate will have a strong background in GenAI, ML, LangChain, LangGraph, GenAI architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.
Role: Data Science
Location: Pune, Bangalore
Experience: 4-12 years
Job Type: Full Time Employment
What You'll Do:
We are seeking a Machine Learning Engineer skilled in LangChain, ML modeling, and MLOps to build and deploy production-ready AI systems.
- Design and deploy ML models and pipelines
- Build intelligent apps using LangChain & LLMs
- Implement MLOps workflows for training, deployment, and monitoring
- Ensure model reproducibility and performance at scale
Expertise You'll Bring:
- Python, Scikit-learn, TensorFlow/PyTorch
- LangChain, LLM integration
- MLOps tools (MLflow, Kubeflow, DVC)
- Docker, Git, CI/CD
- Cloud (AWS/GCP/Azure)
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry’s best
Let’s unleash your full potential at Persistent. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
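The prompt-engineering skill named in the listing above reduces, at its simplest, to assembling structured prompts from templates. A framework-free sketch with a hypothetical template (LangChain's `PromptTemplate` captures the same idea with extra machinery for chaining):

```python
TEMPLATE = (
    "You are a support assistant. Answer using only the context below.\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "If the context is insufficient, say \"I don't know.\""
)

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble a grounded prompt from retrieved chunks (RAG-style)."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return TEMPLATE.format(context=context, question=question)

# Hypothetical retrieved chunk and user question
prompt = build_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase."],
)
```

Constraining the model to the supplied context, and giving it an explicit escape hatch ("I don't know"), are the two standard prompt-engineering moves for reducing hallucination in grounded applications.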
Posted 1 month ago
0 years
0 Lacs
India
On-site
About Demandbase:
Demandbase helps B2B companies hit their revenue goals using fewer resources. How? By using the power of AI to identify and engage the accounts and buying groups most likely to purchase. Our account-based technology unites sales and marketing teams around insights that you can understand and facilitates quick actions across systems and channels to deliver big wins. As a company, we’re as committed to growing careers as we are to building world-class technology. We invest heavily in people, our culture, and the community around us. We have offices in the San Francisco Bay Area, New York, Seattle, and teams in the UK and India. We are Great Place to Work Certified. We're committed to attracting, developing, retaining, and promoting a diverse workforce. By ensuring that every Demandbase employee is able to bring a diversity of talents to work, we're increasingly capable of achieving our mission to transform the way B2B companies go to market. We encourage people from historically underrepresented backgrounds and all walks of life to apply. Come grow with us at Demandbase!
About the Role:
As a Senior ML Engineer, you’ll have a strategic role in driving data-driven insights and developing production-level machine learning models to solve high-impact, complex business problems. This role is suited for an individual with a strong foundation in both deep learning and traditional machine learning techniques, capable of handling challenges at scale. You will work across teams to create, optimize, and deploy advanced ML models, combining both modern approaches (like deep neural networks and large language models) and proven algorithms to deliver transformative solutions.
Responsibilities:
1. Machine Learning Model Development and Productionization
- Develop, implement, and productionize scalable ML models to address complex business issues, optimizing for both performance and efficiency.
- Create and refine models using deep learning architectures as well as traditional ML techniques.
- Collaborate with ML engineers and data engineers to deploy models at scale in production environments, ensuring model performance remains robust over time.
2. End-to-End Solution Ownership
- Translate high-level business challenges into data science problems, developing solutions that are both technically sound and aligned with strategic goals.
- Own the full model lifecycle, from data exploration and feature engineering through to model deployment, monitoring, and continuous improvement.
- Collaborate with cross-functional teams (product, engineering, analytics & research) to embed data-driven insights into business decisions and product development.
- Take end-to-end ownership and ensure resilience in the production environment.
3. Experimentation, Testing, and Performance Optimization
- Conduct rigorous A/B tests, evaluate model performance, and iterate on solutions based on feedback and performance metrics.
- Employ best practices in machine learning experimentation, validation, and hyperparameter tuning to ensure models achieve optimal accuracy and efficiency.
4. Data Management and Quality Assurance
- Work closely with data engineering teams to ensure high-quality data pipeline design, data integrity, and data processing standards.
- Actively contribute to data governance initiatives to maintain robust data standards and ensure compliance with best practices in data privacy and ethics.
5. Innovation and Research
- Stay at the forefront of machine learning research and innovations, particularly in neural networks, generative AI, and LLMs, bringing insights to the team for potential integration.
- Prototype and experiment with new ML techniques and architectures to improve the capabilities of our data science/ML solutions.
- Support the AI strategy for Demandbase and align business metrics with data science goals.
6. Mentorship and Team Leadership
Mentor junior data scientists/ML engineers and collaborate with peers, fostering a culture of continuous learning, innovation, and excellence. Lead technical discussions, provide guidance on best practices, and contribute to a collaborative, high-performing team environment.

Required Qualifications:

Education: Bachelor’s (B.Tech) or Master’s (M.Tech) degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.

Experience: 8+ years of experience in data science/ML, with a strong emphasis on production-level ML models, including both deep learning and traditional approaches.

Technical Skills: Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras. Proficiency in Python and experience with data science libraries (e.g., scikit-learn, Pandas, NumPy). Strong grasp of algorithms for both deep neural networks and classical ML (e.g., regression, clustering, SVMs, ensemble models). Experience deploying models in production using tools like Docker, Kubernetes, and cloud platforms (AWS, GCP). Knowledge of A/B testing, model evaluation metrics, and experimentation best practices. Proficiency in SQL and experience with data warehousing solutions. Familiarity with distributed computing frameworks (Spark, Dask) for large-scale data processing.

Soft Skills: Exceptional problem-solving skills with a business-driven approach. Strong communication skills to articulate complex ideas and solutions to non-technical stakeholders. Ability to lead projects and mentor team members.

Good-to-Have Skills: Experience with LLMs, transfer learning, or multimodal models for solving advanced business cases. Experience with tools and models such as LLaMA, high-volume recommendation systems, and duplicate detection using ML. Understanding of MLOps practices and tools (e.g., MLflow, Airflow) for streamlined model deployment and monitoring.
Experience in data observability and CI/CD.

What We Offer: Opportunity to work in a cutting-edge environment, solving real-world business problems at scale. Competitive compensation and benefits, including health, wellness, and educational allowances. Professional growth opportunities and support for continuous learning.

This role is ideal for a data science/ML engineer who is passionate about applying advanced machine learning and AI to drive business value in a fast-paced, high-impact environment. If you’re eager to innovate and push boundaries in a collaborative and forward-thinking team, we’d love to meet you!
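The A/B-testing knowledge the posting asks for can be illustrated with a minimal two-proportion z-test, the standard way to compare conversion rates between a control and a model-driven variant. This is a generic sketch, not Demandbase’s methodology; the conversion counts are invented for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF (Phi(x) = 0.5*(1+erf(x/sqrt(2))))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B (new model) vs. control A.
z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the lift is significant at the usual 5% level; in practice a library routine (e.g., a statsmodels proportion test) would replace the hand-rolled CDF.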
Posted 1 month ago
Pune, Maharashtra, India
On-site
What You’ll Do: Eaton Corporation’s Center for Intelligent Power has an opening for a Data Scientist who is passionate about their craft. The Data Science Engineer will be involved in the design and development of ML/AI algorithms to solve power management problems. In addition to developing these algorithms, the Data Science Engineer will also be involved in the successful integration of algorithms into edge or cloud systems using CI/CD and a software release process. The candidate will demonstrate exceptional impact in delivering projects in terms of architecture, technical deliverables, and project delivery throughout the project lifecycle. The candidate is expected to be conversant with Agile methodologies and tools, and to have experience in data analysis, ML/AI model development, and the related development environments that enable ML/AI algorithms for use on various processors or systems.

Work with a team of experts in deep learning, machine learning, distributed systems, program management, and product teams, and work on all aspects of design, development, and delivery of end-to-end pipelines and solutions. Develop technical solutions and implement architectures for projects and products along with data engineering and data science teams. Participate in architecture, design, and development of new intelligent power technology products and production-quality end-to-end systems.

Qualifications: Bachelor’s/Master’s degree in Data Science, or Ph.D.
(ongoing) in Data Science. 2+ years of progressive experience in delivering technology solutions in a production environment. 2+ years of practical data science experience in the application of statistics, machine learning, and analytic approaches, with a proven track record of solving critical business problems and uncovering new business opportunities. 2 years working with customers (internal and external) on developing requirements and working as a solutions architect to deliver. Master’s in Data Science, or pursuing a Ph.D. in Data Science, or equivalent.

Skills: Good statistical background, e.g., Bayesian networks, hypothesis testing. Hands-on development of deep learning and machine learning models for engineering applications such as electrical/electronic systems, energy systems, data centers, and mechanical systems. Hands-on experience with ML/DL models for time-series modeling, anomaly detection, root cause analysis, diagnostics, prognostics, pattern detection, data mining, etc. Knowledge of data visualization tools and techniques. Programming knowledge: Python, R, MATLAB, C/C++, Java, PySpark, SparkR. Azure ML Pipelines, Databricks, MLflow. Hands-on optimization techniques such as dynamic programming and particle swarm optimization. Software development lifecycle processes and tools. Agile development methodologies and concepts, including hands-on experience with Jira, Bitbucket, and Confluence.
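The time-series anomaly detection work this posting describes can be sketched with a simple rolling z-score detector: flag any sample that deviates too far from the statistics of the preceding window. This is a generic illustration (not Eaton’s method), and the sensor trace below is synthetic.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic load-current trace: a steady periodic signal with one fault injected.
trace = [10.0 + 0.1 * ((i * 7) % 5) for i in range(60)]
trace[45] = 14.0  # injected spike
print(rolling_zscore_anomalies(trace))
```

Production detectors for prognostics would add seasonality handling and robust statistics (median/MAD), but the windowed-deviation idea is the same.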
Posted 1 month ago
Mumbai, Maharashtra, India
On-site
Key Responsibilities: Development and training of foundational models across modalities. End-to-end lifecycle management of foundational model development, from data curation to model deployment, in collaboration with the core team members. Conduct research to advance model accuracy and efficiency. Implement state-of-the-art AI techniques in text/speech and language processing. Collaborate with cross-functional teams to build robust AI stacks and integrate them seamlessly into production pipelines. Develop pipelines for debugging, CI/CD, and observability of the development process. Demonstrated ability to lead projects and provide innovative solutions. Document technical processes, model architectures, and experimental results, and maintain clear and organized code repositories.

Education: Bachelor’s or Master’s in any related field, with 2 to 5 years of industry experience in applied AI/ML.

Minimum Requirements: Proficiency in Python programming and familiarity with 3-4 of the tools listed below: Foundational model libraries and frameworks (TensorFlow, PyTorch, HF Transformers, NeMo, etc.). Experience with distributed training (SLURM, Ray, PyTorch DDP, DeepSpeed, NCCL, etc.). Inference servers (vLLM). Version control and observability systems (Git, DVC, MLflow, W&B, Kubeflow). Data analysis and curation tools (Dask, Milvus, Apache Spark, NumPy). Text-to-speech tools (Whisper, Voicebox, VALL-E (X), HuBERT/UnitSpeech). LLMOps tools, Docker, etc. Hands-on experience with AI application libraries and frameworks (DSPy, LangGraph, LangChain, LlamaIndex, etc.).
Posted 1 month ago
5 years
Ahmedabad, Gujarat, India
On-site
Job Description: We’re Hiring: MLOps Engineer (Azure)
🔹 Location: Ahmedabad, Gujarat
🔹 Experience: 3–5 Years
Immediate joiners preferred.

Job Summary: We are seeking a skilled and proactive MLOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities:

MLOps:
● Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
● Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
● Manage model versioning, performance tracking, and rollback strategies.
● Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.

DataOps:
● Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
● Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
● Monitor and optimize data workflows for performance and cost efficiency.
● Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.

Required Skills:
● Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
● Proficiency in Python, Bash, and scripting for automation.
● Experience with Docker, Kubernetes, and containerized deployments in Azure.
● Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
● Familiarity with monitoring, logging, and alerting in cloud environments.
● Knowledge of data modeling, data warehousing, and SQL.
Preferred Qualifications:
● Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
● Experience with Databricks, Delta Lake, or Apache Spark on Azure.
● Exposure to security best practices in ML and data environments (e.g., identity management, network security).

Soft Skills:
● Strong problem-solving and communication skills.
● Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
● Passion for automation, optimization, and driving operational excellence.
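The CI/CD responsibility above (retrain, validate, register on every push) can be sketched as a minimal Azure Pipelines definition. This is a hedged illustration, not this employer’s setup: the script names (`train.py`, `evaluate.py`), model name, workspace, and resource group are all placeholders.

```yaml
# azure-pipelines.yml -- hypothetical ML retraining pipeline; all names are placeholders
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'

  - script: pip install -r requirements.txt
    displayName: Install dependencies

  - script: python train.py --output model.pkl
    displayName: Train model

  - script: python evaluate.py --model model.pkl --fail-below 0.85
    displayName: Quality gate (fail the build if the metric regresses)

  - script: >
      az ml model create --name demand-model --path model.pkl
      --workspace-name my-ml-workspace --resource-group my-rg
    displayName: Register model in Azure ML
```

The quality-gate step is the key MLOps idea: deployment is blocked unless the freshly trained model clears an evaluation threshold, which also gives a natural rollback point.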
Posted 1 month ago
Mohali, Punjab
On-site
Senior Data Engineer (6-7 years' experience minimum)
Location: Mohali, Punjab (Full-Time, Onsite)
Company: Data Couch Pvt. Ltd.

About Data Couch Pvt. Ltd.: Data Couch Pvt. Ltd. is a premier consulting and enterprise training company specializing in Data Engineering, Big Data, Cloud Technologies, DevOps, and AI/ML. With a strong presence across India and global client partnerships, we deliver impactful solutions and upskill teams across industries. Our expert consultants and trainers work with the latest technologies to empower digital transformation and data-driven decision-making for businesses.

Technologies We Work With: At Data Couch, you’ll gain exposure to a wide range of modern tools and technologies, including:
Big Data: Apache Spark, Hadoop, Hive, HBase, Pig
Cloud Platforms: AWS, GCP, Microsoft Azure
Programming: Python, Scala, SQL, PySpark
DevOps & Orchestration: Kubernetes, Docker, Jenkins, Terraform
Data Engineering Tools: Apache Airflow, Kafka, Flink, NiFi
Data Warehousing: Snowflake, Amazon Redshift, Google BigQuery
Analytics & Visualization: Power BI, Tableau
Machine Learning & MLOps: MLflow, Databricks, TensorFlow, PyTorch
Version Control & CI/CD: Git, GitLab CI/CD, CircleCI

Key Responsibilities:
Design, build, and maintain robust and scalable data pipelines using PySpark
Leverage the Hadoop ecosystem (HDFS, Hive, etc.) for big data processing
Develop and deploy data workflows in cloud environments (AWS, GCP, or Azure)
Use Kubernetes to manage and orchestrate containerized data services
Collaborate with cross-functional teams to develop integrated data solutions
Monitor and optimize data workflows for performance, reliability, and security
Follow best practices for data governance, compliance, and documentation

Must-Have Skills:
Proficiency in PySpark for ETL and data transformation tasks
Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
Strong grasp of Hadoop ecosystem tools such as HDFS, Hive, etc.
Practical experience with Kubernetes for service orchestration
Proficiency in Python and SQL
Experience working with large-scale, distributed data systems
Familiarity with tools like Apache Airflow, Kafka, or Databricks
Experience working with data warehouses like Snowflake, Redshift, or BigQuery
Exposure to MLOps or the integration of AI/ML pipelines
Understanding of CI/CD pipelines and DevOps practices for data workflows

What We Offer:
Opportunity to work on cutting-edge data projects with global clients
A collaborative, innovation-driven work culture
Continuous learning via internal training, certifications, and mentorship
Competitive compensation and growth opportunities

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹15,000,000.00 per year
Benefits: Health insurance, leave encashment, paid sick time, paid time off
Schedule: Day shift
Work Location: In person
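The ETL responsibility this posting centers on follows the classic extract-transform-load pattern. A minimal pure-Python sketch of that pattern is below; in the role itself the same stages would be expressed as distributed PySpark transformations. The records and field names are invented for illustration.

```python
import json

# Hypothetical raw event records, as an ETL job might read them from object storage.
RAW = [
    '{"user": "a", "amount": "120.5", "country": "IN"}',
    '{"user": "b", "amount": "oops", "country": "IN"}',  # malformed amount
    '{"user": "c", "amount": "80.0", "country": "US"}',
]

def extract(lines):
    """Parse raw JSON lines, skipping records that fail to decode."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    """Keep records with a numeric amount and normalize field types."""
    for r in records:
        try:
            yield {"user": r["user"], "amount": float(r["amount"]), "country": r["country"]}
        except (KeyError, ValueError):
            continue

def load(records):
    """Aggregate revenue per country (a stand-in for writing to a warehouse)."""
    totals = {}
    for r in records:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(load(transform(extract(RAW))))
```

Each stage tolerates bad input and passes clean records downstream; in PySpark the same shape appears as `spark.read.json(...)` followed by `filter`/`withColumn` transformations and a `groupBy(...).sum(...)` write.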
Posted 1 month ago