0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities
- Work closely with clients to understand their business requirements and design data solutions that meet their needs.
- Develop and implement end-to-end data solutions covering data ingestion, storage, processing, and visualization.
- Design and implement data architectures that are scalable, secure, and compliant with industry standards.
- Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
- Participate in presales activities, including solution design, proposal creation, and client presentations.
- Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
- Stay up to date with industry trends and emerging technologies related to data architecture and engineering.
- Develop and maintain client relationships to ensure ongoing satisfaction and identify opportunities for additional business.
- Understand the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
- Exposure to emerging Gen AI technologies.
- Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
- Good knowledge of data governance, data warehousing, and data modelling.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
- Strong technical background in data architecture, data engineering, and data management.
- Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
- Experience with presales activities such as solution design, proposal creation, and client presentations.
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
- Experience with Kubernetes and Gen AI tools and tech stack.
- Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
- Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
- Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.

Tools and Tech Stack
- Hadoop ecosystem (data architecture and engineering). Preferred: Cloudera Data Platform (CDP) or Data Fabric. Tools: HDFS, Hive, Spark, HBase, Oozie.
- Data warehousing. Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, Azure Databricks. On-premises: Teradata, Vertica.
- Data integration and ETL tools: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue.
- Cloud platforms: Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
- Cloud-native components. Data lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage. Data streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
- HPE platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
- AI and Gen AI technologies. MLOps: MLflow, Kubeflow, Azure ML, SageMaker, Ray. Inference tools: TensorFlow Serving, KServe, Seldon. Generative AI frameworks: Hugging Face Transformers, LangChain. Tools: OpenAI API (e.g., GPT-4).
- Kubernetes orchestration and deployment. Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes. Tools: Helm.
- CI/CD for data pipelines and applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
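The ingestion, processing, and storage stages this role describes can be sketched at toy scale with the standard library alone; the CSV payload, table name, and validation rule below are invented for illustration, and a real pipeline would use Spark, NiFi, or a warehouse rather than sqlite3.

```python
import csv
import io
import sqlite3

# Toy pipeline: ingest CSV text, transform rows, load into a SQLite "warehouse".
# All names (raw_csv, sensor_readings) are made up for this sketch.
raw_csv = "device,reading\ngw-01,21.5\ngw-02,bad\ngw-01,23.0\n"

def ingest(text):
    """Parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast types and drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append((r["device"], float(r["reading"])))
        except ValueError:
            continue  # a real pipeline would quarantine bad records
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sensor_readings (device TEXT, reading REAL)")
    conn.executemany("INSERT INTO sensor_readings VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(ingest(raw_csv)), conn)
avg = conn.execute("SELECT device, AVG(reading) FROM sensor_readings GROUP BY device").fetchall()
print(avg)  # the unparseable gw-02 row was filtered out
```

The same ingest/transform/load split is what the listed tools industrialize: NiFi or Data Factory for ingest, Spark for transform, a warehouse or lake for load.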
Posted 4 weeks ago
0 years
0 Lacs
Chandigarh, India
On-site
Skill Set Required:
- 2–7 years of experience in software engineering and ML development.
- Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, or PyTorch.
- Experience building and evaluating models, along with data preprocessing and feature engineering.
- Proficiency in REST APIs, Docker, Git, and CI/CD tools.
- Solid foundation in software engineering principles, including data structures, algorithms, and design patterns.
- Hands-on experience with MLOps platforms (e.g., MLflow, TFX, Airflow, Kubeflow).
- Exposure to NLP, large language models (LLMs), or computer vision projects.
- Experience with cloud platforms (AWS, GCP, Azure) and managed ML services.
- Contributions to open-source ML libraries or participation in ML competitions (e.g., Kaggle, DrivenData) are a plus.
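The "building and evaluating models" loop this posting asks for can be illustrated end to end with the standard library: split data, fit, then score on the held-out set. The synthetic 2-D data and the nearest-centroid classifier are stand-ins chosen for brevity; in practice scikit-learn or PyTorch would supply the model.

```python
import math
import random

# Minimal build/evaluate sketch: a nearest-centroid classifier on invented
# 2-D data with a train/test split, standard library only.
random.seed(0)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)] + \
       [((random.gauss(4, 1), random.gauss(4, 1)), 1) for _ in range(50)]
random.shuffle(data)
train, test = data[:80], data[80:]

def fit(samples):
    """One centroid per class: the mean of each feature."""
    centroids = {}
    for label in {y for _, y in samples}:
        pts = [x for x, y in samples if y == label]
        centroids[label] = tuple(sum(c) / len(pts) for c in zip(*pts))
    return centroids

def predict(centroids, x):
    return min(centroids, key=lambda lb: math.dist(x, centroids[lb]))

model = fit(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

Holding out the test set before fitting is the part that carries over unchanged to any real framework.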
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description
Location: Bangalore, Hyderabad, Chennai, Pune, Noida, Trivandrum, Kochi
Experience Level: 4+ years
Employment Type: Full-time

About the Role: We are looking for a passionate and versatile Software Engineer to join our Innovation Team. This role is ideal for someone who thrives in a fast-paced, exploratory environment and is excited about building next-generation solutions using emerging technologies like Generative AI and advanced web frameworks.

Key Responsibilities:
- Design, develop, and maintain scalable front-end applications using React.
- Build and expose RESTful APIs using Python with Flask or FastAPI.
- Integrate back-end logic with SQL databases and ensure data flow efficiency.
- Collaborate on cutting-edge projects involving Generative AI technologies.
- Deploy and manage applications in a Microsoft Azure cloud environment.
- Work closely with cross-functional teams including data scientists, product owners, and UX designers to drive innovation from concept to delivery.

Must-Have Skills:
- Strong proficiency in React.js and modern JavaScript frameworks.
- Hands-on experience with Python, especially using Flask or FastAPI for web development.
- Good understanding of SQL and relational database concepts.
- Exposure to Generative AI frameworks and tools.
- Basic understanding of Microsoft Azure services and deployment processes.

Good-to-Have Skills:
- Knowledge of machine learning and AI workflows.
- Experience working with NoSQL databases like MongoDB or Cosmos DB.
- Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow).
- Understanding of CI/CD pipelines using tools like GitHub Actions, Azure DevOps, or Jenkins.

Skills: Python, React, SQL basics, Gen AI
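The "build and expose RESTful APIs" responsibility boils down to serving JSON over HTTP. A framework-agnostic sketch with only the standard library is below; in this role you would write the same endpoint in Flask or FastAPI, and the `/health` route and payload here are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    payload = json.loads(resp.read())
server.shutdown()
print(payload)  # {'status': 'ok'}
```

FastAPI reduces the handler above to a decorated function plus automatic validation and docs, which is why the posting prefers it for production work.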
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
About Us
At Valiance, we are building next-generation AI solutions to solve high-impact business problems. As part of our AI/ML team, you’ll work on deploying cutting-edge Gen AI models, optimizing performance, and enabling scalable experimentation.

Role Overview
We are looking for a skilled MLOps Engineer with hands-on experience in deploying open-source Generative AI models in cloud and on-prem environments. The ideal candidate should be adept at setting up scalable infrastructure, observability, and experimentation stacks while optimizing for performance and cost.

Responsibilities
- Deploy and manage open-source Gen AI models (e.g., LLaMA, Mistral, Stable Diffusion) in cloud and on-prem environments.
- Set up and maintain observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) for monitoring Gen AI model health and performance.
- Optimize infrastructure for latency, throughput, and cost-efficiency in GPU/CPU-intensive environments.
- Build and manage an experimentation stack to enable rapid testing of various open-source Gen AI models.
- Work closely with ML scientists and data teams to streamline model deployment pipelines.
- Maintain CI/CD workflows and automate key stages of the model lifecycle.
- Leverage NVIDIA tools (Triton Inference Server, TensorRT, CUDA, etc.)
to improve model serving performance (preferred).

Required Skills & Qualifications
- Strong experience in deploying ML/Gen AI models using Kubernetes, Docker, and CI/CD tools.
- Proficiency in Python, Bash scripting, and infrastructure-as-code tools (e.g., Terraform, Helm).
- Experience with ML observability and monitoring stacks.
- Familiarity with cloud services (GCP, AWS, or Azure) and/or on-prem environments.
- Exposure to model tracking tools like MLflow, Weights & Biases, or similar.
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.

Nice to Have
- Hands-on experience with the NVIDIA ecosystem (Triton, CUDA, TensorRT, NGC).
- Familiarity with serving frameworks like vLLM, DeepSpeed, or Hugging Face Transformers.
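The observability work described above ultimately reduces to recording per-request latencies and deriving the quantiles you would export to Prometheus and chart in Grafana. A toy version with simulated latencies (the lognormal parameters are invented) is below.

```python
import math
import random

# Simulate per-request inference latencies; a real stack would scrape these
# from the serving layer rather than generate them.
random.seed(42)
latencies_ms = [random.lognormvariate(3.0, 0.5) for _ in range(1000)]

def percentile(values, q):
    """Nearest-rank percentile, 0 < q <= 100."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(f"p50={p50:.1f}ms p95={p95:.1f}ms")
```

Tracking p95/p99 rather than the mean is what surfaces the GPU queuing and cold-start spikes that cost-vs-latency tuning targets.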
Posted 4 weeks ago
8 - 10 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 8–10 years
Location: Pune, Mumbai, Bangalore, Noida, Chennai, Coimbatore, Hyderabad

JD: Databricks with Data Scientist experience
- 4 years of relevant work experience as a data scientist.
- Minimum 2 years of experience in Azure Cloud using Databricks services, PySpark, Natural Language API, and MLflow.
- Experience designing and building statistical forecasting models.
- Experience ingesting data from APIs and databases.
- Experience in data transformation using PySpark and SQL.
- Experience designing and building machine learning models.
- Experience designing and building optimization models, including expertise with statistical data analysis.
- Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data.
- Demonstrated skill in selecting the right statistical tools for a given data analysis problem.
- Effective written and verbal communication skills.
- Skill set: Python, PySpark, Databricks, MLflow, ADF
Posted 4 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As an Account Executive, your mission will be to help further build our India business, which is one of our fastest growing markets in APJ. The Databricks Sales Team is driving growth through strategic and innovative partnership with our customers, helping businesses thrive by solving the world's toughest problems with our solutions. You will be inspiring and guiding customers on their data journey, making organisations more collaborative and productive than ever before. You will play an important role in the business in India, with the opportunity to strategically build your territory in close partnership with the business leaders. Using your passion for technology and drive to build, you will help businesses all across India reach their full potential, through the power of the Databricks Data Intelligence Platform. You know how to sell innovation and change and can guide deals forward to compress decision cycles. You love understanding a product in-depth and are passionate about communicating its value to customers and partners. Always prospecting for new opportunities, you will close new accounts while growing our business in existing accounts.
The Impact You Will Have
- Prospect for new customers.
- Assess your existing customers and develop a strategy to identify and engage all buying centres.
- Use a solution approach to selling and creating value for customers.
- Identify the most viable use cases in each account to maximise Databricks' impact.
- Orchestrate and work with teams to maximise the impact of the Databricks ecosystem on your territory.
- Build value with all engagements to promote successful negotiations and close.
- Promote the Databricks enterprise cloud data platform.
- Be customer-focused by delivering technical and business results using the Databricks Data Intelligence Platform.
- Promote teamwork.

What We Look For
- You have previously worked in an early-stage company and you know how to navigate and be successful in a fast-growing organisation.
- 5+ years of sales experience in SaaS/PaaS or Big Data companies.
- Prior customer relationships with CIOs and important decision-makers.
- Ability to simply articulate intricate cloud technologies and big data.
- 3+ years of experience exceeding sales quotas.
- Success closing new accounts while upselling existing accounts.
- Bachelor's degree.

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Must Have Skills:
- Experience in programming using Python or Java.
- Experience in the design, build, and deployment of end-to-end AI solutions, with a focus on LLMs and Retrieval-Augmented Generation (RAG) workflows.
- Extensive knowledge of large language models, natural language processing techniques, and prompt engineering.
- Experience in testing and validation processes to ensure models' accuracy and efficiency in real-world scenarios.
- Familiarity with Oracle Cloud Infrastructure or similar cloud platforms.
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
- Ability to analyze problems, identify solutions, and make decisions.
- Willingness to learn, adapt, and grow professionally.

Good to Have Skills:
- Experience in LLM architectures, model evaluation, and fine-tuning techniques.
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, vector stores and retrievers, LLM caches, LLMOps (MLflow), LMQL, Guidance, etc.
- Proficiency in databases (e.g., Oracle, MySQL), and in developing and executing AI over cloud data platforms and their associated data stores, graph stores, vector stores, and pipelines.
- Understanding of the security and compliance requirements for ML/Gen AI implementations.

Career Level - IC1

Responsibilities
- Be a leader and a 'doer', with a vision and desire to contribute to a business with a technical project focus and a mentality centered on a superior customer experience.
- Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of solution delivery across EMEA.
- Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success.
- Take a proactive approach to the task; be a self-learner.
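The retrieval half of the RAG workflows named above can be sketched without any LLM framework: rank documents against the query and hand the best match to the prompt. The snippets and query below are invented, and raw term counts stand in for the embeddings and vector store a production system would use.

```python
import math
from collections import Counter

# Invented knowledge-base snippets.
docs = [
    "Invoices are archived nightly to object storage.",
    "Password resets require approval from the security team.",
    "The model registry stores every approved model version.",
]

def vectorize(text):
    """Bag-of-words term counts as a stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, documents):
    qv = vectorize(query)
    return max(documents, key=lambda d: cosine(qv, vectorize(d)))

context = retrieve("where are model versions stored", docs)
print(context)  # the registry snippet wins on term overlap
```

In a real stack, LangChain or LlamaIndex wires this retrieve step to a vector store and prepends `context` to the LLM prompt, which is the "augmented" part of RAG.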
- Self-motivated, with a natural drive to learn and pick up new challenges.
- Results-oriented: you won't be satisfied until the job is done with the right quality.
- Ability to work in (virtual) teams. Getting a specific job done often requires working together with many colleagues spread out in different countries.
- Comfortable in a collaborative, agile environment.
- Ability to adapt to change with a positive mindset.
- Good communication skills, both oral and written: able not only to understand complex technologies but also to explain them to others in clear and simple terms, articulate key messages clearly, and present in front of customers.

https://www.oracle.com/in/cloud/cloud-lift/

Qualifications
Career Level - IC1

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Job Summary
We are seeking a highly skilled and motivated Lead DevOps Engineer with a strong background in IIOT environments to join our team. The ideal candidate will possess extensive experience in Azure DevOps, Linux environments, containerization, and virtualization. This role offers the flexibility to fit your unique skillset, and we provide opportunities for continuous learning and professional development.

In This Role, Your Responsibilities Will Be:
- Lead the design, implementation, and maintenance of CI/CD pipelines using Azure DevOps.
- Manage and deploy artefacts and Docker images on Linux gateways and virtual machines (VMs).
- Utilize Proxmox for efficient management and orchestration of VMs.
- Collaborate with development teams to ensure smooth integration of applications (Node.js, Python) into deployment pipelines.
- Monitor and enhance the performance, scalability, and reliability of IIOT systems.
- Maintain and manage infrastructure (e.g., the container registry and MLflow) and set up test systems.
- Automate and streamline operations and processes, building and maintaining tools for deployment, monitoring, and operations.
- Provide technical leadership and mentorship to team members, fostering a culture of continuous improvement and innovation.
- Troubleshoot and resolve complex issues in development, test, and production environments.
- Ensure security best practices are followed across all DevOps activities.
- Stay updated with emerging technologies and industry best practices, and explore how they can be integrated into our processes.

Who You Are:
- You take initiative, don't wait for instructions, and proactively seek opportunities to contribute.
- You adapt quickly to new situations and apply knowledge effectively.
- You clearly convey ideas and actively listen to others to complete assigned tasks as planned.

For This Role, You Will Need:
- Proven experience as a DevOps Engineer, preferably in an IIOT environment.
Proficiency in using Azure DevOps for CI/CD pipeline creation and management. Strong expertise in Linux operating systems, including advanced proficiency in Bash scripting. Hands-on experience with containerization technologies such as Docker. Solid understanding of virtualization platforms, particularly Proxmox. Experience with Node.js and Python application deployment and management. Familiarity with infrastructure as code (IaC) and configuration management tools. Excellent problem-solving skills and the ability to work under pressure. Strong communication skills, with the ability to collaborate effectively with cross-functional teams. A proactive mindset with a strong commitment to learning and continuous improvement. Preferred Qualifications that Set You Apart: Certifications in Azure DevOps or related technologies. Experience in other cloud platforms (e.g., AWS, Google Cloud) is a plus. Knowledge of additional scripting or programming languages. Experience with monitoring tools and frameworks. Familiarity with agile methodologies and practices. Our Culture & Commitment to You At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, Employee Assistance Program, employee resource groups, recognition, and much more. 
Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave. About Us WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together. Accessibility Assistance or Accommodation If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com . About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. 
With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! No calls or agencies please.
Posted 4 weeks ago
0 years
0 Lacs
Amritsar, Punjab, India
Remote
Experience: 5+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Precanto)
(Note: This is a requirement for one of Uplers' clients: a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity?
Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description
Full-time | Team: Data & ML Engineering

We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do
- Build and optimize machine learning models, from regression to time-series forecasting.
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker.
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn.
- Design and deploy LLM-powered features and workflows.
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions.
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform.

Basic Skills
- Proven ability to work creatively and analytically in a problem-solving environment.
- Excellent communication (written and oral) and interpersonal skills.
- Strong understanding of supervised learning and time-series modeling.
- Experience deploying ML models and building automated training/inference pipelines.
- Ability to work cross-functionally in a collaborative and fast-paced environment.
- Comfortable wearing many hats and owning projects end-to-end.
- Write clean, tested, and scalable Python and SQL code.
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing.

Advanced Skills
- Familiarity with MLOps best practices.
- Prior experience with LLM-based features or production-level NLP.
- Experience with LLMs, vector stores, or prompt engineering.
- Contributions to open-source ML or data tools.

Tech Stack
- Languages: Python, SQL
- Frameworks & tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
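The "train, tune, and evaluate" workflow this role centers on can be shown in miniature as a hand-rolled grid search, which is the pattern Ray Tune automates and distributes. The search space and the quadratic stand-in objective below are invented; a real run would train a model per config and log each trial to MLflow.

```python
import itertools

# Invented search space.
grid = {
    "learning_rate": [0.01, 0.1, 0.3],
    "batch_size": [16, 32, 64],
}

def evaluate(config):
    """Stand-in objective: pretend lr=0.1, batch=32 is optimal (score 0 at the optimum)."""
    return -((config["learning_rate"] - 0.1) ** 2 + ((config["batch_size"] - 32) / 32) ** 2)

def grid_search(space):
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best, score = grid_search(grid)
print(best)  # {'learning_rate': 0.1, 'batch_size': 32}
```

Ray Tune replaces the inner loop with parallel trials, early stopping, and smarter search strategies than exhaustive enumeration, but the config-in, score-out contract is the same.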
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Anyscale At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We’re commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more, have Ray in their tech stacks to accelerate the progress of AI applications out into the real world. With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert. Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date. About The Role The ML Development Platform team is responsible for creating the suite of tools and services that enable users to create production quality applications using Ray. The product is the user’s primary interface into the world of Anyscale and by building a polished, stable, and well-designed product, we are able to enable a magical developer experience for our users. This team provides the interface for administering Anyscale components including Anyscale workspaces, production and development tools, ML Ops tools and integrations, and more. Beyond the user-facing features, engineers help build out critical pieces of infrastructure and architecture needed to power our platform at scale. With a taste for good products, a willingness to work with and understand the user base, and technical talent to build high quality software, the engineers can help build a delightful experience for our users from new developers learning to use Ray to businesses powering their products on Anyscale. 
As Part Of This Role You Will
- Develop a next-gen ML Ops platform and development tooling centered around Ray.
- Build high-quality frameworks for accelerating the AI development lifecycle, from data preparation to training to production serving.
- Work with a team of leading distributed systems and machine learning experts.
- Communicate your work to a broader audience through talks, tutorials, and blog posts.

We'd Love To Hear From You If You Have
- At least 2 years of backend development experience, with a solid background in algorithms, data structures, and system design.
- Experience working with modern machine learning tooling, including PyTorch, MLflow, data catalogs, etc.
- Familiarity with technologies such as Python, FastAPI, or SQLAlchemy.
- Excitement to build tools to power the next generation of cloud applications!

Bonus Points If You Have
- Experience in building and maintaining open-source projects.
- Experience in building and operating machine learning infrastructure in production.
- Experience in building highly available serving systems.

A Snapshot Of Projects You Might Work On
- Full-stack work on Anyscale workspaces, debugging, and dependency management on Anyscale.
- Development of new ML Ops tooling and capabilities, like dataset management, experiment and lineage tracking, etc.
- Leading the development of the Anyscale SDK, authentication, etc.

Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. Anyscale Inc. is an E-Verify company and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.
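The scaling idea behind Ray that this posting describes (take an ordinary function, fan it out over workers, gather results) has a single-machine analogue in the standard library. The `featurize` workload below is invented; Ray's `@ray.remote` generalizes this same fan-out/gather pattern from one process pool to a whole cluster.

```python
from concurrent.futures import ThreadPoolExecutor

def featurize(record):
    """Pretend preprocessing step: square the value."""
    return record * record

def run_batch(records):
    # Fan the function out over a worker pool and gather results in order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(featurize, records))

print(run_batch(range(5)))  # [0, 1, 4, 9, 16]
```

What Anyscale adds on top of this pattern is the part the executor cannot do: provisioning the cluster, shipping dependencies, and scheduling across machines.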
Posted 4 weeks ago
4 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position: We are conducting an in-person drive for Data Science skills at our Pune and Bangalore locations on 31st May 2025. We are looking for an experienced and talented GenAI Developer to join our growing data competency team. The ideal candidate will have a strong background in GenAI, ML, LangChain, LangGraph, GenAI architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.

Role: Data Science
Location: Pune, Bangalore
Experience: 4-12 Years
Job Type: Full Time Employment

What You'll Do:
We are seeking a Machine Learning Engineer skilled in LangChain, ML modeling, and MLOps to build and deploy production-ready AI systems.
- Design and deploy ML models and pipelines
- Build intelligent apps using LangChain and LLMs
- Implement MLOps workflows for training, deployment, and monitoring
- Ensure model reproducibility and performance at scale

Expertise You'll Bring:
- Python, scikit-learn, TensorFlow/PyTorch
- LangChain, LLM integration
- MLOps tools (MLflow, Kubeflow, DVC)
- Docker, Git, CI/CD
- Cloud (AWS/GCP/Azure)

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
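To illustrate the "build intelligent apps using LangChain and LLMs" pattern this role describes, here is a minimal pure-Python sketch of the prompt-template → model → output-parser chaining idea that LangChain formalizes. The function names and the deterministic `fake_llm` stand-in are hypothetical, not part of any real library.

```python
def prompt_template(product: str) -> str:
    """Fill a fixed prompt template with user input (the first chain step)."""
    return f"Suggest one name for a company that makes {product}."

def fake_llm(prompt: str) -> str:
    # Hypothetical deterministic stand-in for a real LLM API call.
    return "ANSWER: BrightSocks" if "socks" in prompt else "ANSWER: Acme"

def parse_answer(raw: str) -> str:
    """Strip the ANSWER: prefix the model was instructed to emit."""
    return raw.removeprefix("ANSWER: ").strip()

def chain(product: str) -> str:
    # Compose the three steps, as a chain object in a real framework would.
    return parse_answer(fake_llm(prompt_template(product)))

print(chain("socks"))
```

In a real LangChain app, the template, model client, and parser would each be library objects composed into a runnable chain; the composition-of-steps structure is the same.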
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
About Demandbase: Demandbase helps B2B companies hit their revenue goals using fewer resources. How? By using the power of AI to identify and engage the accounts and buying groups most likely to purchase. Our account-based technology unites sales and marketing teams around insights that you can understand and facilitates quick actions across systems and channels to deliver big wins.

As a company, we're as committed to growing careers as we are to building world-class technology. We invest heavily in people, our culture, and the community around us. We have offices in the San Francisco Bay Area, New York, Seattle, and teams in the UK and India. We are Great Place to Work Certified. We're committed to attracting, developing, retaining, and promoting a diverse workforce. By ensuring that every Demandbase employee is able to bring a diversity of talents to work, we're increasingly capable of achieving our mission to transform the way B2B companies go to market. We encourage people from historically underrepresented backgrounds and all walks of life to apply. Come grow with us at Demandbase!

About the Role: As a Senior ML Engineer, you'll have a strategic role in driving data-driven insights and developing production-level machine learning models to solve high-impact, complex business problems. This role is suited for an individual with a strong foundation in both deep learning and traditional machine learning techniques, capable of handling challenges at scale. You will work across teams to create, optimize, and deploy advanced ML models, combining both modern approaches (like deep neural networks and large language models) and proven algorithms to deliver transformative solutions.

Responsibilities:
1. Machine Learning Model Development and Productionization
- Develop, implement, and productionize scalable ML models to address complex business issues, optimizing for both performance and efficiency.
- Create and refine models using deep learning architectures as well as traditional ML techniques.
- Collaborate with ML engineers and data engineers to deploy models at scale in production environments, ensuring model performance remains robust over time.
2. End-to-End Solution Ownership
- Translate high-level business challenges into data science problems, developing solutions that are both technically sound and aligned with strategic goals.
- Own the full model lifecycle, from data exploration and feature engineering through to model deployment, monitoring, and continuous improvement.
- Collaborate with cross-functional teams (product, engineering, analytics & research) to embed data-driven insights into business decisions and product development.
- Take end-to-end ownership of solutions and ensure their resilience in the production environment.
3. Experimentation, Testing, and Performance Optimization
- Conduct rigorous A/B tests, evaluate model performance, and iterate on solutions based on feedback and performance metrics.
- Employ best practices in machine learning experimentation, validation, and hyperparameter tuning to ensure models achieve optimal accuracy and efficiency.
4. Data Management and Quality Assurance
- Work closely with data engineering teams to ensure high-quality data pipeline design, data integrity, and data processing standards.
- Actively contribute to data governance initiatives to maintain robust data standards and ensure compliance with best practices in data privacy and ethics.
5. Innovation and Research
- Stay at the forefront of machine learning research and innovations, particularly in neural networks, generative AI, and LLMs, bringing insights to the team for potential integration.
- Prototype and experiment with new ML techniques and architectures to improve the capabilities of our data science/ML solutions.
- Support the AI strategy for Demandbase and align business metrics with data science goals.
6. Mentorship and Team Leadership
- Mentor junior data scientists/ML engineers and collaborate with peers, fostering a culture of continuous learning, innovation, and excellence.
- Lead technical discussions, provide guidance on best practices, and contribute to a collaborative and high-performing team environment.

Required Qualifications:
Education: B.Tech/M.Tech in Computer Science or Data Science; a bachelor's degree in computer science, statistics, maths, or science; or a master's degree in data science, computer science, or a related field.
Experience: 8+ years of experience in data science/ML, with a strong emphasis on production-level ML models, including both deep learning and traditional approaches.
Technical Skills:
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Proficiency in Python and experience with data science libraries (e.g., scikit-learn, Pandas, NumPy).
- Strong grasp of algorithms for both deep neural networks and classical ML (e.g., regression, clustering, SVMs, ensemble models).
- Experience deploying models in production, using tools like Docker, Kubernetes, and cloud platforms (AWS, GCP).
- Knowledge of A/B testing, model evaluation metrics, and experimentation best practices.
- Proficiency in SQL and experience with data warehousing solutions.
- Familiarity with distributed computing frameworks (Spark, Dask) for large-scale data processing.
Soft Skills:
- Exceptional problem-solving skills with a business-driven approach.
- Strong communication skills to articulate complex ideas and solutions to non-technical stakeholders.
- Ability to lead projects and mentor team members.
Good-to-Have Skills:
- Experience with LLMs, transfer learning, or multimodal models for solving advanced business cases.
- Experience with tools and models such as LLaMA, high-volume recommendation systems, and duplicate detection using ML.
- Understanding of MLOps practices and tools (e.g., MLflow, Airflow) for streamlined model deployment and monitoring.
- Experience in data observability and CI/CD.

What We Offer
- Opportunity to work in a cutting-edge environment, solving real-world business problems at scale.
- Competitive compensation and benefits, including health, wellness, and educational allowances.
- Professional growth opportunities and support for continuous learning.

This role is ideal for a data science/ML engineer who is passionate about applying advanced machine learning and AI to drive business value in a fast-paced, high-impact environment. If you're eager to innovate and push boundaries in a collaborative and forward-thinking team, we'd love to meet you!
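The "rigorous A/B tests" this role calls for typically reduce to a significance test on two conversion rates. A minimal pure-Python sketch of a two-sided two-proportion z-test, using only the standard library (the sample counts below are made-up illustration data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B conversion experiment.
    Returns the z statistic and its p-value under the pooled-variance null."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120/1000 conversions; variant B: 150/1000 (hypothetical numbers).
z, p = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 3), round(p, 4))
```

In practice a team would use scipy or statsmodels rather than hand-rolling the CDF, but the computation is the same.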
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do
Eaton Corporation's Center for Intelligent Power has an opening for a Data Scientist who is passionate about his or her craft. The Data Science Engineer will be involved in the design and development of ML/AI algorithms to solve power management problems. In addition to developing these algorithms, the Data Science Engineer will also be involved in the successful integration of algorithms into edge or cloud systems using CI/CD and the software release process. The candidate will demonstrate exceptional impact in delivering projects in terms of architecture, technical deliverables, and project delivery throughout the project lifecycle. The candidate is expected to be conversant with Agile methodologies and tools.
- Experience in data analysis, ML/AI model development, and related development environments that enable ML/AI algorithms for use on various processors or systems
- Work with a team of experts in deep learning, machine learning, distributed systems, program management, and product teams, on all aspects of design, development, and delivery of end-to-end pipelines and solutions
- Develop technical solutions and implement architectures for projects and products, along with data engineering and data science teams
- Participate in architecture, design, and development of new intelligent power technology products and production-quality end-to-end systems

Qualifications
- Bachelor's/Master's degree in Data Science, or a Ph.D. (ongoing) in Data Science
- 2+ years of progressive experience in delivering technology solutions in a production environment
- 2+ years of practical data science experience in the application of statistics, machine learning, and analytic approaches, with a proven track record of solving critical business problems and uncovering new business opportunities
- 2 years working with customers (internal and external) on developing requirements and working as a solutions architect to deliver
- Master's in Data Science, or pursuing a PhD in Data Science, or equivalent

Skills
- Good statistical background, including Bayesian networks, hypothesis testing, etc.
- Hands-on development of deep learning and machine learning models for engineering applications such as electrical/electronic systems, energy systems, data centers, and mechanical systems
- Hands-on experience with ML/DL models such as time-series modeling, anomaly detection, root cause analysis, diagnostics, prognostics, pattern detection, data mining, etc.
- Knowledge of data visualization tools and techniques
- Programming knowledge: Python, R, MATLAB, C/C++, Java, PySpark, SparkR
- Azure ML Pipelines, Databricks, MLflow
- Hands-on experience with optimization techniques like dynamic programming, particle swarm optimization, etc.
- Software development life-cycle processes and tools
- Agile development methodologies and concepts, including hands-on experience with Jira, Bitbucket, and Confluence
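A simple baseline for the time-series anomaly detection this role mentions is z-score thresholding: flag points that deviate more than a set number of standard deviations from the mean. A pure-Python sketch (the sensor readings below are hypothetical):

```python
import math

def zscore_anomalies(series, threshold=3.0):
    """Return the indices of values more than `threshold` standard
    deviations from the series mean -- a minimal anomaly-detection baseline."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return [i for i, x in enumerate(series) if abs(x - mean) > threshold * std]

# Hypothetical power-sensor readings with one obvious spike at index 5.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0, 10.1, 10.0]
print(zscore_anomalies(readings, threshold=2.0))
```

Production systems would use rolling windows or model-based residuals rather than a global mean, but the thresholding idea is the same.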
Posted 4 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Key Responsibilities:
- Development and training of foundational models across modalities
- End-to-end lifecycle management of foundational model development, from data curation to model deployment, in collaboration with the core team members
- Conduct research to advance model accuracy and efficiency
- Implement state-of-the-art AI techniques in text/speech and language processing
- Collaborate with cross-functional teams to build robust AI stacks and integrate them seamlessly into production pipelines
- Develop pipelines for debugging, CI/CD, and observability of the development process
- Demonstrated ability to lead projects and provide innovative solutions
- Document technical processes, model architectures, and experimental results; maintain clear and organized code repositories

Education: Bachelor's or Master's in any related field, with 2 to 5 years of industry experience in applied AI/ML.

Minimum Requirements: Proficiency in Python programming and familiarity with 3-4 of the tools listed below:
- Foundational model libraries and frameworks (TensorFlow, PyTorch, HF Transformers, NeMo, etc.)
- Distributed training (SLURM, Ray, PyTorch DDP, DeepSpeed, NCCL, etc.)
- Inference servers (vLLM)
- Version control systems and observability (Git, DVC, MLflow, W&B, Kubeflow)
- Data analysis and curation tools (Dask, Milvus, Apache Spark, NumPy)
- Text-to-speech tools (Whisper, Voicebox, VALL-E (X), HuBERT/UnitSpeech)
- LLMOps tools, Docker, etc.
- Hands-on experience with AI application libraries and frameworks (DSPy, LangGraph, LangChain, LlamaIndex, etc.)
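The distributed-training tools listed above (PyTorch DDP, DeepSpeed, NCCL) all rest on one core idea: each worker computes gradients on its own data shard, and an all-reduce averages them before every parameter update. A pure-Python sketch of that idea with one scalar parameter and a squared-error loss (the shard data and learning rate are illustrative):

```python
def local_gradient(w, shard):
    """Gradient of mean squared error 0.5*(w*x - y)^2 over one data shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Stand-in for the NCCL-style collective: average the worker gradients."""
    return sum(grads) / len(grads)

# Two workers, each holding a shard of noiseless y = 2x data.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w, lr = 0.0, 0.05
for _ in range(200):
    grads = [local_gradient(w, s) for s in shards]   # computed in parallel in DDP
    w -= lr * all_reduce_mean(grads)                 # identical update on every worker
print(round(w, 3))
```

Because every worker applies the same averaged gradient, all replicas stay in sync, which is why DDP needs no parameter server.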
Posted 4 weeks ago
5 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
We're Hiring: MLOps Engineer (Azure)
🔹 Location: Ahmedabad, Gujarat
🔹 Experience: 3–5 Years
Immediate joiners will be preferred.

Job Summary: We are seeking a skilled and proactive MLOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities:
MLOps:
● Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
● Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
● Manage model versioning, performance tracking, and rollback strategies.
● Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.
DataOps:
● Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
● Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
● Monitor and optimize data workflows for performance and cost efficiency.
● Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.
Required Skills:
● Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
● Proficiency in Python, Bash, and scripting for automation.
● Experience with Docker, Kubernetes, and containerized deployments in Azure.
● Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
● Familiarity with monitoring, logging, and alerting in cloud environments.
● Knowledge of data modeling, data warehousing, and SQL.
Preferred Qualifications:
● Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
● Experience with Databricks, Delta Lake, or Apache Spark on Azure.
● Exposure to security best practices in ML and data environments (e.g., identity management, network security).
Soft Skills:
● Strong problem-solving and communication skills.
● Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
● Passion for automation, optimization, and driving operational excellence.
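The "model versioning, performance tracking, and rollback strategies" responsibility above is, at its core, registry bookkeeping. A minimal in-memory sketch of the versioning/rollback logic that managed services like Azure ML's model registry or MLflow provide (class and method names are hypothetical, not any real API):

```python
class ModelRegistry:
    """Toy model registry: register artifacts, promote a version to
    production, and roll back to the previous version on failure."""

    def __init__(self):
        self._versions = []       # artifact per version, in registration order
        self._production = None   # 0-based index of the production version

    def register(self, artifact):
        self._versions.append(artifact)
        return len(self._versions)          # 1-based version number

    def promote(self, version):
        self._production = version - 1      # mark this version as live

    def rollback(self):
        # Step production back one version, if an earlier one exists.
        if self._production is not None and self._production > 0:
            self._production -= 1

    def production_model(self):
        return None if self._production is None else self._versions[self._production]

reg = ModelRegistry()
reg.register("model-v1.pkl")
reg.register("model-v2.pkl")
reg.promote(2)       # v2 goes live
reg.rollback()       # incident: revert to v1
print(reg.production_model())
```

Real registries add stages, metadata, and artifact storage, but the promote/rollback state machine is the same.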
Posted 4 weeks ago
0.0 years
0 Lacs
Mohali, Punjab
On-site
Senior Data Engineer (minimum 6-7 years of experience)
Location: Mohali, Punjab (Full-Time, Onsite)
Company: Data Couch Pvt. Ltd.

About Data Couch Pvt. Ltd.
Data Couch Pvt. Ltd. is a premier consulting and enterprise training company specializing in Data Engineering, Big Data, Cloud Technologies, DevOps, and AI/ML. With a strong presence across India and global client partnerships, we deliver impactful solutions and upskill teams across industries. Our expert consultants and trainers work with the latest technologies to empower digital transformation and data-driven decision-making for businesses.

Technologies We Work With
At Data Couch, you'll gain exposure to a wide range of modern tools and technologies, including:
- Big Data: Apache Spark, Hadoop, Hive, HBase, Pig
- Cloud Platforms: AWS, GCP, Microsoft Azure
- Programming: Python, Scala, SQL, PySpark
- DevOps & Orchestration: Kubernetes, Docker, Jenkins, Terraform
- Data Engineering Tools: Apache Airflow, Kafka, Flink, NiFi
- Data Warehousing: Snowflake, Amazon Redshift, Google BigQuery
- Analytics & Visualization: Power BI, Tableau
- Machine Learning & MLOps: MLflow, Databricks, TensorFlow, PyTorch
- Version Control & CI/CD: Git, GitLab CI/CD, CircleCI

Key Responsibilities
- Design, build, and maintain robust and scalable data pipelines using PySpark
- Leverage the Hadoop ecosystem (HDFS, Hive, etc.) for big data processing
- Develop and deploy data workflows in cloud environments (AWS, GCP, or Azure)
- Use Kubernetes to manage and orchestrate containerized data services
- Collaborate with cross-functional teams to develop integrated data solutions
- Monitor and optimize data workflows for performance, reliability, and security
- Follow best practices for data governance, compliance, and documentation

Must-Have Skills
- Proficiency in PySpark for ETL and data transformation tasks
- Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
- Strong grasp of Hadoop ecosystem tools such as HDFS, Hive, etc.
- Practical experience with Kubernetes for service orchestration
- Proficiency in Python and SQL
- Experience working with large-scale, distributed data systems
- Familiarity with tools like Apache Airflow, Kafka, or Databricks
- Experience working with data warehouses like Snowflake, Redshift, or BigQuery
- Exposure to MLOps or integration of AI/ML pipelines
- Understanding of CI/CD pipelines and DevOps practices for data workflows

What We Offer
- Opportunity to work on cutting-edge data projects with global clients
- A collaborative, innovation-driven work culture
- Continuous learning via internal training, certifications, and mentorship
- Competitive compensation and growth opportunities

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹15,000,000.00 per year
Benefits:
- Health insurance
- Leave encashment
- Paid sick time
- Paid time off
Schedule: Day shift
Work Location: In person
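The ETL pipelines this role centers on follow an extract → transform → load structure. A minimal generator-based sketch in pure Python (the field names and the 18% tax transform are illustrative; PySpark or Airflow would distribute and schedule the same stages):

```python
def extract(rows):
    # In a real pipeline this would read from a source system.
    for row in rows:
        yield row

def transform(rows):
    for row in rows:
        if row["amount"] is not None:                 # drop bad records
            # Illustrative transform: add 18% tax, rounded to 2 decimals.
            yield {**row, "amount": round(row["amount"] * 1.18, 2)}

def load(rows):
    # In a real pipeline this would write to a warehouse; here we collect.
    sink = []
    for row in rows:
        sink.append(row)
    return sink

raw = [{"id": 1, "amount": 100.0},
       {"id": 2, "amount": None},    # malformed record, filtered out
       {"id": 3, "amount": 50.0}]
result = load(transform(extract(raw)))
print(result)
```

Because each stage is a generator, records stream through one at a time, which is the same composability that makes Spark transformations lazy.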
Posted 4 weeks ago
2 - 8 years
0 Lacs
Bengaluru, Karnataka
Work from Office
Experience: 8-10 years
Location: Pune, Mumbai, Bangalore, Noida, Chennai, Coimbatore, Hyderabad

JD:
· 4 years of relevant work experience as a data scientist
· Minimum 2 years of experience in Azure Cloud using Databricks services, PySpark, Natural Language API, MLflow
· Experience designing and building statistical forecasting models
· Experience ingesting data from APIs and databases
· Experience in data transformation using PySpark and SQL
· Experience designing and building machine learning models
· Experience designing and building optimization models, including expertise in statistical data analysis
· Experience articulating and translating business questions, and using statistical techniques to arrive at an answer with the available data
· Demonstrated skill in selecting the right statistical tools for a given data analysis problem
· Effective written and verbal communication skills
· Skillset: Python, PySpark, Databricks, MLflow, ADF

Job Types: Full-time, Permanent
Pay: From ₹1,500,000.00 per year
Schedule: Fixed shift

Application Question(s):
How soon can you join?
What is your current CTC?
What is your expected CTC?
What is your current location?
How many years of experience do you have in Databricks?
How many years of experience do you have in Python and PySpark?
How many years of experience do you have as a Data Scientist?

Experience: total: 8 years (Required)
Work Location: In person
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent communication: ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-centric approach: strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem ownership and accountability: proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth mindset: willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational excellence: experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site reliability and automation: understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-functional collaboration: ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
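The "self-healing and auto-remediation" pattern this role describes boils down to a loop that probes a health check and triggers an automated fix with backoff until the check passes. A minimal pure-Python sketch (the function names and the two-restarts-to-recover scenario are hypothetical; real systems would wire this to Azure Functions or Logic Apps):

```python
import time

def self_heal(check, remediate, max_attempts=3, base_delay=0.01):
    """Probe `check`; on failure run `remediate` and retry with
    exponential backoff. Returns the final health state."""
    for attempt in range(max_attempts):
        if check():
            return True
        remediate()
        time.sleep(base_delay * (2 ** attempt))   # exponential backoff
    return check()

# Simulated service that recovers after its second automated restart.
state = {"healthy": False, "restarts": 0}

def check():
    return state["healthy"]

def remediate():
    state["restarts"] += 1
    if state["restarts"] >= 2:
        state["healthy"] = True

ok = self_heal(check, remediate)
print(ok, state["restarts"])
```

The backoff keeps repeated remediations from hammering a struggling service, which is the same reasoning behind retry policies in SRE practice.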
Posted 4 weeks ago
0 - 4 years
0 Lacs
Bengaluru, Karnataka
Work from Office
General Information
Req #: WD00082748
Career area: Artificial Intelligence
Country/Region: India
State: Karnataka
City: Bangalore
Date: Monday, May 19, 2025
Working time: Full-time
Additional Locations: India - Karnātaka - Bangalore

Why Work at Lenovo
We are Lenovo. We do what we say. We own what we do. We WOW our customers. Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world's largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo's continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). This transformation, together with Lenovo's world-changing innovation, is building a more inclusive, trustworthy, and smarter future for everyone, everywhere. To find out more visit www.lenovo.com, and read about the latest news via our StoryHub.

Description and Requirements
Key Responsibilities:
- Lead end-to-end transitions of AI PoCs into production environments, managing the entire process from testing to final deployment.
- Configure, install, and validate AI systems using key platforms, including VMware ESXi and vSphere for server virtualization, Linux (Ubuntu/RHEL) and Windows Server for operating system integration, and Docker and Kubernetes for containerization and orchestration of AI workloads.
- Conduct comprehensive performance benchmarking and AI inferencing tests to validate system performance in production.
- Optimize deployed AI models for accuracy, performance, and scalability to ensure they meet production-level requirements and customer expectations.
- Serve as the primary technical lead/SME for AI PoC deployment in enterprise environments, focusing on AI solutions powered by Nvidia GPUs.
- Work hands-on with Nvidia AI Enterprise and GPU-accelerated workloads, ensuring efficient deployment and model performance using frameworks such as PyTorch and TensorFlow.
- Lead technical optimizations aimed at resource efficiency, ensuring that models are deployed effectively within the customer's infrastructure.
- Ensure the readiness of customer environments to handle, maintain, and scale AI solutions post-deployment.
- Take ownership of AI project deployments, overseeing all phases from planning to final deployment and ensuring that timelines and deliverables are met.
- Collaborate with stakeholders, including cross-functional teams (e.g., Lenovo AI Application, solution architects), customers, and internal resources to coordinate deployments and deliver results on schedule.
- Implement risk management strategies and develop contingency plans to mitigate potential issues such as hardware failures, network bottlenecks, and software incompatibilities.
- Maintain ongoing, transparent communication with all relevant stakeholders, providing updates on project status and addressing any issues or changes in scope.

Experience:
- Overall experience of 7-10 years
- Relevant experience of 2-4 years in deploying AI/ML models and AI solutions using Nvidia GPUs in enterprise production environments
- Demonstrated success in leading and managing complex AI infrastructure projects, including PoC transitions to production at scale
Technical Expertise:
- Experience in Retrieval Augmented Generation (RAG), NVIDIA AI Enterprise, NVIDIA Inference Microservices (NIMs), model management, and Kubernetes
- Extensive experience with Nvidia AI Enterprise, GPU-accelerated workloads, and AI/ML frameworks such as PyTorch and TensorFlow
- Proficiency in deploying AI solutions across enterprise platforms, including VMware ESXi, Docker, Kubernetes, Linux (Ubuntu/RHEL), and Windows Server environments
- MLOps proficiency, with hands-on experience using tools such as Kubeflow, MLflow, or AWS SageMaker for managing the AI model lifecycle in production
- Strong understanding of virtualization and containerization technologies to ensure robust and scalable deployments

Additional Locations: India - Karnātaka - Bangalore

NOTICE FOR PUBLIC
At Lenovo, we follow strict policies and legal compliance for our recruitment process, which includes role alignment, employment terms discussion, final selection and offer approval, and recording transactions in our internal system. Interviews may be conducted via audio, video, or in person depending on the role, and you will always meet with an official Lenovo representative. Please beware of fraudulent recruiters posing as Lenovo representatives. They may request cash deposits or personal information. Always apply through official Lenovo channels and never share sensitive information. Lenovo does not solicit money or sensitive information from applicants and will not request payments for training or equipment. Kindly verify job offers through the official Lenovo careers page or contact IndiaTA@lenovo.com. Stay informed and cautious to protect yourself from recruitment fraud. Report any suspicious activity to local authorities.
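The "R" in the Retrieval Augmented Generation (RAG) expertise above is a similarity search over a document store. A minimal pure-Python sketch using bag-of-words cosine similarity (the documents and query are made up; production RAG stacks use learned embeddings and a vector database instead of raw token counts):

```python
import math

def bow(text):
    """Bag-of-words term counts for a text."""
    counts = {}
    for tok in text.lower().split():
        counts[tok] = counts.get(tok, 0) + 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query -- the retrieval step
    whose result would be stuffed into the LLM prompt for generation."""
    return max(docs, key=lambda d: cosine(bow(query), bow(d)))

docs = [
    "GPU clusters accelerate deep learning training",
    "Quarterly sales grew in the retail segment",
]
best = retrieve("how do GPU clusters help training", docs)
print(best)
```

Swapping `bow` for an embedding model and `max` for an approximate nearest-neighbor index turns this sketch into the architecture NIMs-based RAG deployments use.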
Posted 4 weeks ago
Bengaluru, Karnataka, India
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As part of the BCG X A&A team, you will work closely with consulting teams on a diverse range of advanced analytics topics. You will have the opportunity to leverage analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domains) by providing analytics subject matter expertise and accelerated execution support. You will collaborate with case teams to gather requirements and to specify, design, develop, deliver, and support analytic solutions serving client needs. You will provide technical support through a deeper understanding of relevant data analytics solutions and processes to build high-quality, efficient analytic solutions.
YOU'RE GOOD AT
- Working with case (and proposal) teams
- Acquiring deep expertise in at least one analytics topic and an understanding of all analytics capabilities
- Defining and explaining the expected analytics outcome; defining approach selection
- Delivering original analysis and insights to BCG teams, typically owning all or part of an analytics module and integrating with case teams
- Establishing credibility by thought-partnering with case teams on analytics topics; drawing conclusions on a range of external and internal issues related to their module
- Communicating analytical insights through sophisticated synthesis and packaging of results (including PowerPoint presentations, documents, dashboards, and charts); collecting, synthesizing, and analyzing case team learnings and feeding them into new best practices and methodologies
- Building a collateral of documents for enhancing core capabilities and supporting reference for internal documents; sanitizing confidential documents and maintaining a repository
- Leading workstreams and modules independently or with minimal supervision
- Supporting business development activities (proposals, vignettes, etc.) and building sales collateral to generate leads

Team requirements:
- Guides juniors on analytical methodologies and platforms, and helps with quality checks
- Contributes to the team's content and IP development
- Imparts technical trainings to team members and the consulting cohort

Technical Skills:
- Strong proficiency in statistics (concepts and methodologies like hypothesis testing, sampling, etc.) and its application and interpretation
- Hands-on data mining and predictive modeling experience: linear regression, clustering (K-means, DBSCAN, etc.), classification (logistic regression, decision trees/random forest/boosted trees), time series (SARIMAX/Prophet), etc.
- Strong experience with at least one of the prominent cloud providers (Azure, AWS, GCP) and working knowledge of AutoML solutions (SageMaker, Azure ML, etc.)
- At least one tool in each category:
  - Programming languages: Python (must have), R or SAS or PySpark, SQL (must have)
  - Data visualization: Tableau, QlikView, Power BI, Streamlit
  - Data management: Alteryx, MS Access, or any RDBMS
  - ML deployment tools: Airflow, MLflow, Luigi, Docker, etc.
  - Big data technologies: Hadoop ecosystem, Spark
  - Data warehouse solutions: Teradata, Azure SQL DW/Synapse, Redshift, BigQuery, etc.
  - Version control: Git/GitHub/GitLab
  - MS Office: Excel, PowerPoint, Word
  - Coding IDE: VS Code/PyCharm
  - GenAI tools: OpenAI, Google PaLM/BERT, Hugging Face, etc.

Functional Skills:
- Expertise in building analytical solutions and delivering tangible business value for clients (similar to the use cases below)
- Price optimization, promotion effectiveness, product assortment optimization, sales force effectiveness, personalization/loyalty programs, labor optimization
- CLM and revenue enhancement (segmentation, cross-sell/up-sell, next product to buy, offer recommendation, loyalty, LTV maximization, and churn prevention)
- Communicating with confidence and ease: you will be a clear and confident communicator, able to deliver messages concisely with strong and effective written and verbal communication

What You'll Bring
- Bachelor's/Master's degree in a field linked to business analytics, statistics or economics, operations research, applied mathematics, computer science, engineering, or a related field required; advanced degree preferred
- At least 2-4 years of relevant industry work experience providing analytics solutions in a commercial setting
- Prior work experience in a global organization, preferably in a data analytics role in a professional services organization
- Demonstrated depth in one or more industries, including but not limited to Retail, CPG, Healthcare, and Telco
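Several of the techniques listed above are simple enough to sketch without libraries. In practice a candidate would reach for scikit-learn, but a plain-Python version of K-means on 1-D data (an illustrative sketch, with hypothetical toy values) shows the assign-then-update loop the algorithm consists of:

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Plain-Python K-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old centroid to avoid division by zero.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data))  # two centroids: one near 1.0, one near 10.1
```

With two well-separated groups the loop converges in a couple of iterations regardless of the random initialization; the same assign/update structure generalizes to higher dimensions by swapping the absolute difference for Euclidean distance.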
Who You'll Work With
Our data analytics and artificial intelligence professionals mix deep domain expertise with advanced analytical methods and techniques to develop innovative solutions that help our clients tackle their most pressing issues. We design algorithms and build complex models out of large amounts of data.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 4 weeks ago
India
Remote
Experience: 5.00+ years | Salary: INR 5000000.00/year (based on experience) | Expected Notice Period: 15 Days | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Precanto)

(*Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity? Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

The client is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description - Full-time | Team: Data & ML Engineering

We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do:
- Build and optimize machine learning models, from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

Tech Stack:
- Languages: Python, SQL
- Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
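The time-series forecasting work described above is typically evaluated with a walk-forward backtest rather than a random train/test split: the model only ever sees data from before the point it predicts. A minimal, dependency-free sketch (a naive last-value forecaster and hypothetical sales figures stand in for a real Prophet or SARIMAX model):

```python
def walk_forward_mae(series, forecaster, min_train=3):
    """Walk-forward backtest: at each step, use all points seen so far to make
    a one-step-ahead forecast, and score it against the next actual value."""
    errors = []
    for t in range(min_train, len(series)):
        history, actual = series[:t], series[t]
        errors.append(abs(forecaster(history) - actual))
    return sum(errors) / len(errors)

def naive_last_value(history):
    # Naive baseline: predict the most recent observation.
    return history[-1]

sales = [100, 102, 101, 105, 107, 110, 112]
mae = walk_forward_mae(sales, naive_last_value)
print(round(mae, 2))  # → 2.75
```

Any real model can be dropped in by wrapping its fit-and-predict step in a function with the same `forecaster(history)` signature; beating the naive baseline's error is the usual first sanity check.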
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
Trivandrum, Kerala, India
On-site
Job Description

Must Have Skills:
- Experience in design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG (Retrieval-Augmented Generation) workflows
- Extensive knowledge of large language models, natural language processing techniques, and prompt engineering
- Experience with Oracle Fusion Cloud Applications (ERP and HCM) and their AI capabilities
- Proficiency in developing AI chatbots using ODA (Oracle Digital Assistant), Fusion APIs, OIC (Oracle Integration Cloud), and GenAI services
- Experience in testing and validation processes to ensure models' accuracy and efficiency in real-world scenarios
- Familiarity with Oracle Cloud Infrastructure or similar cloud platforms
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders
- Analyzes problems, identifies solutions, and makes decisions
- Demonstrates a willingness to learn, adapt, and grow professionally

Good to Have Skills:
- Experience in LLM architectures, model evaluation, and fine-tuning techniques
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Proficiency in programming languages (e.g., Java, Python) and databases (e.g., Oracle, MySQL), and in developing and executing AI over any of the cloud data platforms, associated data stores, graph stores, vector stores, and pipelines
- Knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference
- Understanding of the security and compliance requirements for ML/GenAI implementations

Career Level - IC3

Responsibilities
- Be a leader, a 'doer', with a vision and desire to contribute to a business with a technical project focus and a mentality centered on a superior customer experience.
- Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of the delivery of these solutions across EMEA.
- Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success.
- Take a proactive approach to tasks; be a self-learner. Self-motivated, you have the natural drive to learn and pick up new challenges.
- Results orientation: you won't be satisfied until the job is done with the right quality.
- Ability to work in (virtual) teams. Getting a specific job done often requires working together with many colleagues spread out in different countries.
- Comfortable in a collaborative, agile environment, with the ability to adapt to change with a positive mindset.
- Good communication skills, both oral and written. You are able not only to understand complex technologies but also to explain them to others in clear and simple terms, articulating key messages very clearly. Able to present in front of customers.

https://www.oracle.com/in/cloud/cloud-lift/

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process.
If you require accessibility assistance or an accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
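The RAG workflows this listing centers on begin with a retrieval step: rank a corpus of documents by similarity to the user's query, then feed the top hits to the LLM as context. A dependency-free sketch of that step, where a bag-of-words term-frequency vector is a deliberately crude stand-in for a real embedding model, and the document texts are hypothetical examples:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A production system would use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, top_k=1):
    """Retrieval step of RAG: rank documents by similarity to the query.
    The top hits would then be placed into the LLM prompt as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "invoice approval workflow in Oracle Fusion ERP",
    "employee onboarding steps in HCM",
    "quarterly travel expense policy",
]
print(retrieve("how do I approve an invoice", docs))  # the ERP invoice document ranks first
```

Swapping the toy `embed` for a real embedding model and the linear scan for a vector store (as the LangChain/LlamaIndex tooling named above provides) turns this sketch into the standard RAG retrieval pipeline.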
Posted 1 month ago
Trivandrum, Kerala, India
On-site
Job Description

Must Have Skills:
- Experience in programming using Python or Java
- Experience in design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG (Retrieval-Augmented Generation) workflows
- Extensive knowledge of large language models, natural language processing techniques, and prompt engineering
- Experience in testing and validation processes to ensure models' accuracy and efficiency in real-world scenarios
- Familiarity with Oracle Cloud Infrastructure or similar cloud platforms
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders
- Analyzes problems, identifies solutions, and makes decisions
- Demonstrates a willingness to learn, adapt, and grow professionally

Good to Have Skills:
- Experience in LLM architectures, model evaluation, and fine-tuning techniques
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Proficiency in databases (e.g., Oracle, MySQL), and in developing and executing AI over any of the cloud data platforms, associated data stores, graph stores, vector stores, and pipelines
- Understanding of the security and compliance requirements for ML/GenAI implementations

Career Level - IC1

Responsibilities
- Be a leader, a 'doer', with a vision and desire to contribute to a business with a technical project focus and a mentality centered on a superior customer experience.
- Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of the delivery of these solutions across EMEA.
- Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success.
- Take a proactive approach to tasks; be a self-learner.
- Self-motivated, you have the natural drive to learn and pick up new challenges.
- Results orientation: you won't be satisfied until the job is done with the right quality.
- Ability to work in (virtual) teams. Getting a specific job done often requires working together with many colleagues spread out in different countries.
- Comfortable in a collaborative, agile environment, with the ability to adapt to change with a positive mindset.
- Good communication skills, both oral and written. You are able not only to understand complex technologies but also to explain them to others in clear and simple terms, articulating key messages very clearly. Able to present in front of customers.

https://www.oracle.com/in/cloud/cloud-lift/

Qualifications
Career Level - IC1

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or an accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 month ago
Greater Bhopal Area
Remote
Experience: 5.00+ years | Salary: INR 5000000.00/year (based on experience) | Expected Notice Period: 15 Days | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Precanto)

(*Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity? Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

The client is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description - Full-time | Team: Data & ML Engineering

We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do:
- Build and optimize machine learning models, from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

Tech Stack:
- Languages: Python, SQL
- Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
Pune, Maharashtra, India
On-site
About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees, representing 95+ nationalities in 27 markets, foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing our customers' ability to experience the world.

Our Purpose - Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know our Team
In Agoda's Back-End Engineering department, we build scalable, fault-tolerant systems and APIs that host our core business logic. Our systems cover all major areas of our business: inventory and pricing, product information, customer data, communications, partner data, booking systems, payments, and more. We employ state-of-the-art CI/CD and testing techniques to ensure everything works without downtime. Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala, data technologies such as Kafka, Spark, MLflow, Kubeflow, VastStorage, and StarRocks, and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful.

The Opportunity
The Agoda Platform team is looking for developers to work on mission-critical systems that serve millions of users daily.
You will have the chance to work on innovative projects, using cutting-edge technologies, and make a significant impact on our business and the travel industry.

What You'll Need To Succeed
- 5+ years of experience developing performance-critical applications in a production environment using Scala, Java, Kotlin, C#, Go, or other relevant modern programming languages
- Strong RDBMS knowledge (SQL Server, Oracle, MySQL, or other)
- Ability to design and implement scalable architectures
- Deep, hands-on involvement in coding, developing, and maintaining high-quality software solutions on a daily basis
- Good command of the English language
- Engagement in cross-team collaboration to ensure cohesive solutions
- Implementation of advanced CI/CD pipelines and robust testing strategies to ensure seamless integration, deployment, and high code quality
- Passion for software development and continuous improvement of your knowledge and skills

It's Great if You Have
- Knowledge of NoSQL, queueing systems (Kafka, RabbitMQ, ActiveMQ, MSMQ), and the Play framework

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations.
We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy. To all recruitment agencies: Agoda does not accept third-party resumes. Please do not send resumes to our jobs alias, Agoda employees, or any other organization location. Agoda is not responsible for any fees related to unsolicited resumes.
Posted 1 month ago
The MLflow job market in India is growing rapidly as companies across industries adopt machine learning and data science technologies. MLflow, an open-source platform for managing the machine learning lifecycle, is in high demand in the Indian job market, and job seekers with MLflow expertise have plenty of opportunities to build a rewarding career in this field.

These cities are known for their thriving tech industries and have a high demand for MLflow professionals.

The average salary range for MLflow professionals in India varies by experience:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

A typical career path in MLflow may include roles such as:
1. Junior Machine Learning Engineer
2. Machine Learning Engineer
3. Senior Machine Learning Engineer
4. Tech Lead
5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in machine learning.

In addition to MLflow, professionals in this field are often expected to have skills in:
- Python programming
- Data visualization
- Statistical modeling
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Cloud computing platforms (e.g., AWS, Azure)

A strong foundation in these related skills can further enhance a candidate's profile and career prospects.

As you explore the MLflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!