3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: In-Person (sftwtrs.ai Lab) Experience Level: Early Career / 1–3 years About sftwtrs.ai sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains. Role Overview As a Research Engineer I , you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you’ll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills. Key Responsibilities Research & Prototyping Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts). Rapidly prototype novel model architectures, training schemes, and evaluation pipelines. Design experiments, run benchmarks, and analyze results to validate research hypotheses. Software Development & Integration Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes). Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.). Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving. Collaboration & Documentation Work closely with Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements. Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications. Participate in regular code reviews, design discussions, and sprint planning sessions. Model Deployment & Monitoring Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or custom MLOps stack). Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics. Troubleshoot deployment issues, optimize inference pipelines for latency and throughput. Continuous Learning & Contribution Stay current with AI/ML trends—present findings to the team and propose opportunities for new research directions. Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit). Mentor interns or junior engineers on machine learning best practices and coding standards. Qualifications Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a closely related field. Research Experience: 1–3 years of hands-on experience in AI/ML research or equivalent internships. Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training). Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus. Development Skills: Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch). 
Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant). Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.
Software Engineering Practices: Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker). Comfortable writing unit tests (pytest or unittest) and conducting code reviews. Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.
Analytical & Collaborative Skills: Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines. Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences. Demonstrated ability to collaborate effectively in a small, agile team.
Preferred Skills (Not Mandatory): Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended). Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings). Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices. Exposure to Rust or Go for high-performance inference code. Contributions to open-source AI or security automation projects.
Why Join Us? Cutting-Edge Research & Production Impact: Work on adversarial ML and security-automation projects that go from concept to real-world deployment. Hands-On Mentorship: Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering. Innovative Environment: Join a lean, highly specialized team where your contributions are immediately visible and valued. Professional Growth: Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development. Competitive Compensation & Benefits: Attractive salary, health insurance, and opportunities for performance-based bonuses.
How to Apply: Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line “RE: Research Engineer I Application.” sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
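Since the role above centres on adversarial ML (evasion attacks, adversarial training), here is a minimal, self-contained sketch of an FGSM-style evasion attack in PyTorch. The toy linear model, random inputs, and epsilon value are illustrative assumptions, not anything from sftwtrs.ai's codebase.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Craft an evasion example by stepping in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a placeholder linear classifier on flattened 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)        # pretend batch of images in [0, 1]
y = torch.randint(0, 10, (4,))      # pretend labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())      # perturbation is bounded by eps
```

Adversarial training, also named in the posting, would simply mix such perturbed batches back into the training loop.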
Posted 1 week ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
About Agoda Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more . Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world. Our Purpose – Bridging the World Through Travel We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone. Get to Know our Team In Agoda’s Back-End Engineering department, we build scalable, fault-tolerant systems and APIs that host our core business logic. Our systems cover all major areas of our business: inventory and pricing, product information, customer data, communications, partner data, booking systems, payments, and more. We employ state-of-the-art CI/CD and testing techniques to ensure everything works without downtime. Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala, Data technologies Kafka, Spark, MLflow, Kubeflow, VastStorage, StarRocks and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful. The Opportunity Agoda Platform team is looking for developers to work on mission-critical systems that serve millions of users daily. You will have the chance to work on innovative projects, using cutting-edge technologies, and make a significant impact on our business and the travel industry. What You’ll Need To Succeed 5+ years of experience developing performance-critical applications in a production environment using Scala, Java, Kotlin, C#, Go or relevant modern programming languages. Strong RDBMS knowledge (SQL Server, Oracle, MySQL, or other). Ability to design and implement scalable architectures Deeply involved in hands-on coding, developing and maintaining high-quality software solutions on a daily basis Good command of the English language. Engages in cross-team collaborations to ensure cohesive solutions Implement advanced CI/CD pipelines and robust testing strategies to ensure seamless integration, deployment, and high code quality. Passion for software development and continuous improvement of your knowledge and skills. It’s Great if You Have Knowledge in NoSQL, Queueing systems (Kafka, RabbitMQ, ActiveMQ, MSMQ), and Play framework. 
Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person’s merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies and you can always ask to have your details removed from the file. For more details please read our privacy policy.
Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency’s representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
Posted 1 week ago
3.0 - 4.0 years
12 - 15 Lacs
Pune
Work from Office
Roles and Responsibilities
Design and develop data science solutions using Python.
Experience in machine learning and deep learning frameworks (PyTorch, TensorFlow).
Develop machine learning models using TensorFlow or PyTorch and deploy them using Kubernetes, MLflow, and KServe.
Familiarity with machine learning deployment and MLOps tools.
Experience working with model repositories and fine-tuning frameworks like Hugging Face.
Experience with frameworks like LangGraph/LangChain.
Design and implement AI agent workflows: develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks, and create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed.
Build and fine-tune generative models: develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data, and deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
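The responsibilities above call for graph-based or state-machine architectures for AI agents (LangGraph/LangChain style). The sketch below shows that pattern with no framework at all: nodes are plain functions over a shared state dict, and a router picks the next node. The node names and the `call_llm` stub are hypothetical stand-ins, not a LangGraph API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. an OpenAI or Gemini client).
    return f"LLM answer to: {prompt}"

def plan(state: dict) -> dict:
    state["plan"] = call_llm(f"Break into steps: {state['task']}")
    return state

def execute(state: dict) -> dict:
    state["result"] = call_llm(f"Carry out: {state['plan']}")
    return state

def review(state: dict) -> dict:
    state["approved"] = "yes"  # a real node might ask the LLM to critique the result
    return state

# Graph: node name -> (node function, router choosing the next node from the new state)
GRAPH = {
    "plan":    (plan,    lambda s: "execute"),
    "execute": (execute, lambda s: "review"),
    "review":  (review,  lambda s: "END" if s["approved"] == "yes" else "plan"),
}

def run(state: dict, entry: str = "plan") -> dict:
    node = entry
    while node != "END":
        fn, router = GRAPH[node]
        state = fn(state)
        node = router(state)
    return state

print(run({"task": "summarise last quarter's support tickets"}))
```

A library such as LangGraph provides the same idea with persistence, streaming, and tool bindings layered on top.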
Posted 1 week ago
12.0 - 20.0 years
35 - 75 Lacs
Chennai, Coimbatore
Work from Office
Job Description:
Designs and implements AI architectures based on client needs.
Strategically utilizes the in-house data center as a platform for customer AI workloads.
Collaborates closely with international teams of data scientists, engineers, and infrastructure specialists.
Aligns AI solutions with compute, storage, and network capacities across private, hybrid, and public infrastructures.
Makes strategic choices in AI tools and technologies with a focus on open source and scalability.
Supports MLOps implementations, CI/CD, data pipelines, and lifecycle management.
Contributes to AI governance, reliability, and solution scalability.
Keeps up with the latest developments in AI, hardware acceleration, and infrastructure.
Assists clients in leveraging AI at an enterprise level.
Requirements:
At least 10 years of experience as an AI Architect or in a comparable technical AI role.
Experience developing AI solutions for clients in an enterprise environment.
Experience working in an international context with cross-functional teams.
Experience with AI hardware and solutions from NVIDIA, IBM, and HPE is a plus.
In-depth knowledge of open-source AI frameworks such as TensorFlow, PyTorch, Hugging Face, MLflow, Kubeflow, etc.
Familiarity with containers and orchestration tools like Docker and Kubernetes.
Understanding of data governance, security, and compliance in AI projects.
Strong communication skills and a client-oriented mindset.
Excellent command of the English language, both spoken and written.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Key Responsibilities
Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.
Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, JavaScript
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
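One of the responsibilities above is A/B testing of models behind an inference API. Below is a minimal, framework-free sketch of deterministic traffic splitting between two model variants; `model_a`, `model_b`, and the 10% challenger share are assumptions for illustration only.

```python
import hashlib

def model_a(features: dict) -> float:
    return 0.1  # placeholder scorer (the incumbent model)

def model_b(features: dict) -> float:
    return 0.9  # placeholder scorer (the challenger model)

def assign_variant(user_id: str, challenger_share: float = 0.10) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "B" if bucket < challenger_share * 10_000 else "A"

def predict(user_id: str, features: dict) -> tuple[str, float]:
    variant = assign_variant(user_id)
    score = model_b(features) if variant == "B" else model_a(features)
    # In production you would also log (user_id, variant, score) for offline analysis.
    return variant, score

print(predict("user-42", {"recency_days": 3}))
```

Hash-based bucketing keeps assignment stable across requests without any shared session state, which matters when the API is horizontally scaled.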
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description
GlobalLogic has been engaged in exploring opportunities to implement ML/AI/GenAI-powered applications for a multinational industrial conglomerate since 2023, aiming to enhance business efficiency. After conducting a Discovery phase and completing 3 pilot projects covering use cases in the Customer Requirements, Tender Response and Project Design areas, GL proposes to enhance the pilots with additional capabilities, identified and confirmed with the client’s experts throughout pilot development, taking the applications to the next MVP (Minimum Viable Product) stage.
Requirements and job responsibilities
In terms of the PoP MLOps tech stack we’re using (all open-source):
Kubernetes (orchestration) & Docker (containerisation)
Kubeflow (the MLOps platform is based on this – used for notebooks and pipelines)
MLflow (for experiment tracking)
Evidently (for model monitoring)
KServe (for deployment of internally-hosted open-source models)
LiteLLM (for routing requests to externally-provided models, e.g. Gemini / Azure OpenAI GPT models)
Tyk (used as our MLOps Model Service; an API gateway that routes requests to the right service and also provides parameter validation of requests)
GitHub Actions for CI
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization.
We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
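The stack listed above uses MLflow for experiment tracking. For illustration, here is a minimal tracking sketch, assuming an MLflow 2.x-style API and a local file store; the experiment name, model choice, and metric are placeholders, not the client project's configuration.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("file:./mlruns")    # assumption: local file store for the sketch
mlflow.set_experiment("pilot-classifier")   # assumption: illustrative experiment name

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    mlflow.log_params(params)                                   # hyperparameters for the run
    mlflow.log_metric("test_accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")                    # model stored as a run artifact
```

In the stack described above, a run like this would typically execute inside a Kubeflow pipeline step, with Evidently watching the deployed model afterwards.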
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 12+ years of hands on experience Position: Senior Manager Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Deep expertise in AI/ML solution design, including supervised and unsupervised learning, deep learning, NLP, and optimization. Strong hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, scikit-learn, H2O, and XGBoost. Solid programming skills in Python, PySpark, and SQL, with a strong foundation in software engineering principles. Proven track record of building end-to-end AI pipelines, including data ingestion, model training, testing, and production deployment. Experience with MLOps tools such as MLflow, Airflow, DVC, and Kubeflow for model tracking, versioning, and monitoring. Understanding of big data technologies like Apache Spark, Hive, and Delta Lake for scalable model development. Expertise in AI solution deployment across cloud platforms like GCP, AWS, and Azure using services like Vertex AI, SageMaker, and Azure ML. Experience in REST API development, NoSQL database design, and RDBMS design and optimizations. Familiarity with API-based AI integration and containerization technologies like Docker and Kubernetes. Proficiency in data storytelling and visualization tools such as Tableau, Power BI, Looker, and Streamlit. Programming skills in Python and either Scala or R, with experience using Flask and FastAPI. Experience with software engineering practices, including use of GitHub, CI/CD, code testing, and analysis. Proficient in using AI/ML frameworks such as TensorFlow, PyTorch, and SciKit-Learn. Skilled in using Apache Spark, including PySpark and Databricks, for big data processing. Strong understanding of foundational data science concepts, including statistics, linear algebra, and machine learning principles. Knowledgeable in integrating DevOps, MLOps, and DataOps practices to enhance operational efficiency and model deployment. Experience with cloud infrastructure services like Azure and GCP. Proficiency in containerization technologies such as Docker and Kubernetes. Familiarity with observability and monitoring tools like Prometheus and the ELK stack, adhering to SRE principles and techniques. Cloud or Data Engineering certifications or specialization certifications (e.g. Google Professional Machine Learning Engineer, Microsoft Certified: Azure AI Engineer Associate – Exam AI-102, AWS Certified Machine Learning – Specialty (MLS-C01), Databricks Certified Machine Learning) Nice To Have Experience implementing generative AI, LLMs, or advanced NLP use cases Exposure to real-time AI systems, edge deployment, or federated learning Strong executive presence and experience communicating with senior leadership or CXO-level clients Roles And Responsibilities Lead and oversee complex AI/ML programs, ensuring alignment with business strategy and delivering measurable outcomes. 
Serve as a strategic advisor to clients on AI adoption, architecture decisions, and responsible AI practices.
Design and review scalable AI architectures, ensuring performance, security, and compliance.
Supervise the development of machine learning pipelines, enabling model training, retraining, monitoring, and automation.
Present technical solutions and business value to executive stakeholders through impactful storytelling and data visualization.
Build, mentor, and lead high-performing teams of data scientists, ML engineers, and analysts.
Drive innovation and capability development in areas such as generative AI, optimization, and real-time analytics.
Contribute to business development efforts, including proposal creation, thought leadership, and client engagements.
Partner effectively with cross-functional teams to develop, operationalize, integrate, and scale new algorithmic products.
Develop code, CI/CD, and MLOps pipelines, including automated tests, and deploy models to cloud compute endpoints.
Manage cloud resources and build accelerators to enable other engineers, with experience in working across two hyperscale clouds.
Demonstrate effective communication skills, coaching and leading junior engineers, with a successful track record of building production-grade AI products for large organizations.
Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
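The posting above lists Airflow among the MLOps tools used for training, retraining, and monitoring pipelines. Below is a minimal retraining-DAG sketch, assuming Airflow 2.x; the DAG id, schedule, and the three placeholder task functions are illustrative, not an actual PwC pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull and validate training data")               # placeholder step

def train_model():
    print("fit the model and log it to the registry")      # placeholder step

def evaluate_and_promote():
    print("compare against the champion, promote if better")  # placeholder step

with DAG(
    dag_id="weekly_model_retrain",   # assumption: illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",              # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    promote = PythonOperator(task_id="evaluate_and_promote", python_callable=evaluate_and_promote)
    extract >> train >> promote
```

Each placeholder task would, in practice, call out to the training code and a registry such as MLflow rather than print.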
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the team: The AI Core team is a small and nimble team focused on AI innovation, experimental modelling, and scientific validation. Its mission is to build highly efficient, adaptable and efficacious large models for multiple general-purpose applications in chemical metrology. To stay aligned with the mission, the team follows advancements in AI, including next-generation deep learning architectures, autonomous agents, large-scale optimization algorithms, reinforcement learning methodologies, and innovative data simulation techniques.
Machine Learning Scientist
The Machine Learning Scientist is critical to the AI Core team, creating AI models from the ground up and steering model development through every stage, leading to the fulfilment of the research objective.
Responsibilities:
Experiment with advanced architectures like CNNs, RNNs, transformers, autoencoders, etc., appropriate for the research objective
Develop training strategies (e.g., self-supervised learning, few-shot learning)
Optimize loss functions and metrics for performance, and manage hyperparameter tuning
Optimize training pipelines and debug training failures
Develop reproducible training/evaluation pipelines
Skills:
Expertise in PyTorch/TensorFlow or other frameworks
Strong Python skills (NumPy, SciPy, scikit-learn) and GPU acceleration (CUDA, cuDNN)
Experience with ML experiment tracking (W&B, MLflow, etc.)
Experience with RL frameworks (Stable Baselines3, Ray RLlib)
Passion: Building AI agents with superior capabilities
Qualifications:
Bachelor's or Master's degree in Data Science, Computer Science, or a related technical field
4+ years of experience in machine learning and deep learning
Prior roles within AI research teams
Background in chemometrics, spectroscopy, or analytical chemistry is desirable
Advantage: Networks involving very large matrix operations
About Picarro: We are the world's leader in timely, trusted, and actionable data using enhanced optical spectroscopy. Our solutions are used in a wide variety of applications, including natural gas leak detection, ethylene oxide emissions monitoring, semiconductor fabrication, pharmaceutical, petrochemical, atmospheric science, air quality, greenhouse gas measurements, food safety, hydrology, ecology, and more. Our software and hardware are designed and manufactured in Santa Clara, California, are used in over 90 countries worldwide, are based on over 65 patents related to cavity ring-down spectroscopy (CRDS) technology, and are unparalleled in their precision, ease of use, and reliability.
All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, sexual orientation, or disability. Posted positions are not open to third party recruiters/agencies and unsolicited resume submissions will be considered free referrals.
If you are an individual with a disability and require reasonable accommodation to complete any part of the application process or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact Picarro, Inc. at disabilityassistance@picarro.com for assistance.
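The responsibilities above include building reproducible training/evaluation pipelines in PyTorch. A minimal sketch of a seeded training loop on toy regression data follows; the model, data, and hyperparameters are placeholders, and in a real pipeline the printed loss would be logged to W&B or MLflow.

```python
import random

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def set_seed(seed: int = 0) -> None:
    """Seed every RNG the pipeline touches so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(0)

# Toy regression data standing in for spectra-like inputs.
X = torch.randn(512, 64)
y = X @ torch.randn(64, 1) + 0.1 * torch.randn(512, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    running = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        running += loss.item() * len(xb)
    print(f"epoch {epoch}: mse={running / len(X):.4f}")  # log to W&B/MLflow in a real run
```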
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities (ML engineer):
3-5 years of experience as an AI/ML engineer or in a similar role.
Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
Hands-on experience with model development and deployment processes.
Proficiency in programming languages such as Python.
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes).
Familiarity with version control systems (e.g., GitHub).
Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas.
Good to have: knowledge of deep learning; MLOps tools such as Kubeflow, MLflow, Nextflow; knowledge of text analytics, NLP, and Gen AI.
Mandatory Skill Sets: Gen AI, Python
Preferred Skill Sets: Gen AI, Python
Years Of Experience Required: 3 - 5
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Degrees/Field Of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Generative AI
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
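The role above asks for data preprocessing, feature engineering, and model evaluation with scikit-learn, NumPy, and Pandas. A minimal sketch combining those steps in a single leakage-safe pipeline is below; the toy DataFrame and column names are invented for illustration.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame standing in for client data.
df = pd.DataFrame({
    "amount": [120.0, 35.5, 870.2, 15.0, 230.9, 99.0, 410.0, 55.2, 310.7],
    "channel": ["web", "app", "web", "branch", "app", "web", "app", "branch", "web"],
    "churned": [0, 0, 1, 0, 1, 0, 1, 1, 0],
})
X, y = df.drop(columns="churned"), df["churned"]

# Preprocessing: scale the numeric column, one-hot encode the categorical one.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["amount"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
])
pipeline = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier(random_state=0))])

# Cross-validated evaluation keeps preprocessing inside each fold (no leakage).
print(cross_val_score(pipeline, X, y, cv=3, scoring="accuracy"))
```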
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
CSQ326R223 Mission The Machine Learning (ML) Practice team is a highly specialized, collaborative customer-facing ML team at Databricks. We deliver professional services engagements to help our customers build, scale, and productionize the most cutting-edge ML and GenAI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We are investing heavily in continuing to grow our global team, and looking for a world-class Manager to lead our ML Practice in India. You will report directly to our Global Delivery Center (GDC) Leader and dotted line to our ML Professional Services Global Leader. This role can be remote, with a preference for candidates located in Bangalore. The Impact You Will Have Manage hiring, onboarding, and growing the ML Practice team in India, roughly doubling the team size by the end of the year. Develop relationships with key customers and partners, scope customer engagements, and manage escalations Align with Professional Services and Sales leaders both in the Global Delivery Center and globally Collaborate cross-functionally with the R&D team to define priorities and influence the product roadmap Lead strategic PS initiatives, practice development, and processes Own OKRs for revenue and utilization, with a focus on driving Databricks consumption Lead Databricks cultural values by example and champion Databricks brand What We Look For 3+ years experience managing, hiring, and building a team of motivated data scientists/ML engineers, including establishing programs and processes Deep hands-on technical understanding of data science, ML, GenAI and the latest trends While managers do not deliver customer engagements directly, we expect that individuals are able to get hands-on in the Databricks product to demo to customers, dogfood product features to provide input to R&D, etc. Experience building production-grade machine learning deployments on AWS, Azure, or GCP Passion for collaboration, life-long learning, and driving business value through ML Company first focus and collaborative individuals - we work better when we work together Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience [Preferred] Experience working with Databricks and Apache Spark [Preferred] Experience working in a customer-facing role About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. 
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 week ago
5.0 years
3 - 9 Lacs
Hyderābād
On-site
Job Description: Job Purpose As an AI R&D Engineer, you will collaborate with NYSE’s central AI team to build and deploy cutting-edge generative AI applications. You’ll lead applied research and development efforts—working with large language models (LLMs), vector databases, and agentic frameworks—to enhance NYSE’s internal analytics and decision systems. We encourage and expect regular use of AI-assisted development tools (e.g. ChatGPT, GitHub Copilot) to accelerate iteration and improve team productivity. Responsibilities Design and deploy RAG pipelines using LangChain and vector databases Prototype and evaluate GenAI applications on NYSE data using commercial and open models Build microservices to serve LLMs via APIs; optimize for performance and cost Use AI assistants for code generation, testing, and documentation—refining outputs through expert review Track experiments via MLflow and deploy using CI/CD, Docker, and Kubernetes Define metrics for generative AI performance (e.g. latency, accuracy, user satisfaction) Share findings and lead technical discussions with NYC stakeholders Knowledge and Experience 5+ years in software development, 2+ with GenAI or NLP Proficiency in Python (FastAPI, asyncio); strong with Hugging Face, OpenAI APIs, LangChain Experience with vector databases (e.g. Pinecone, Weaviate) and MLflow Strong judgment in using GenAI tools like ChatGPT and Copilot productively Excellent written and oral English communication Familiarity with financial-market data (e.g. order books, trading events) Preferred Knowledge and Experience M.S. or Ph.D. in ML, AI, or related field Peer-reviewed publications in related fields
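The role above designs RAG pipelines with LangChain and vector databases. To keep the sketch runnable without any external service, the retrieval step below uses a toy in-memory index and a deterministic hash-based embedding stand-in; in a real pipeline the embeddings would come from a model and the store would be Pinecone, Weaviate, or similar.

```python
import hashlib

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in for a real embedding model."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

DOCS = [
    "Order book depth snapshots are published every second.",
    "Trading halts are announced via the events feed.",
    "Settlement reports are generated at end of day.",
]
INDEX = np.stack([embed(d) for d in DOCS])   # toy in-memory vector store

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = INDEX @ embed(query)            # cosine similarity (vectors are unit length)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

context = retrieve("when do trading halts get announced?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)                                # this prompt would then be sent to the LLM
```

The generation half of RAG is simply the constructed prompt passed to whichever commercial or open model is under evaluation.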
Posted 1 week ago
10.0 years
10 Lacs
Hyderābād
On-site
Job Description: Job Purpose As an AI R&D Engineer, you will collaborate with NYSE’s central AI team to build and deploy cutting-edge generative AI applications. You’ll lead applied research and development efforts—working with large language models (LLMs), vector databases, and agentic frameworks—to enhance NYSE’s internal analytics and decision systems. We encourage and expect regular use of AI-assisted development tools (e.g. ChatGPT, GitHub Copilot) to accelerate iteration and improve team productivity. Responsibilities Design and deploy RAG pipelines using LangChain and vector databases Prototype and evaluate GenAI applications on NYSE data using commercial and open models Build microservices to serve LLMs via APIs; optimize for performance and cost Use AI assistants for code generation, testing, and documentation—refining outputs through expert review Track experiments via MLflow and deploy using CI/CD, Docker, and Kubernetes Define metrics for generative AI performance (e.g. latency, accuracy, user satisfaction) Share findings and lead technical discussions with NYC stakeholders Knowledge and Experience 10+ years in software development, 3+ with GenAI or NLP Proficiency in Python (FastAPI, asyncio); strong with Hugging Face, OpenAI APIs, LangChain Experience with vector databases (e.g. Pinecone, Weaviate) and MLflow Strong judgment in using GenAI tools like ChatGPT and Copilot productively Excellent written and oral English communication Familiarity with financial-market data (e.g. order books, trading events) Desired Knowledge and Experience M.S. or Ph.D. in ML, AI, or related field Peer-reviewed publications in related fields
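The senior variant of this role also builds Python microservices (FastAPI, asyncio) that serve LLMs and tracks latency as a generative-AI metric. A minimal serving sketch follows; the route, schema fields, and the `call_model` stub are assumptions, not NYSE's actual service.

```python
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="llm-gateway")   # assumption: illustrative service name

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    text: str
    latency_ms: float

async def call_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for an async call to a hosted or self-served LLM.
    return f"echo: {prompt[:max_tokens]}"

@app.post("/v1/generate", response_model=GenerateResponse)
async def generate(req: GenerateRequest) -> GenerateResponse:
    start = time.perf_counter()
    text = await call_model(req.prompt, req.max_tokens)
    latency_ms = (time.perf_counter() - start) * 1000.0
    # Per-request latency is one of the generative-AI metrics the role calls out.
    return GenerateResponse(text=text, latency_ms=latency_ms)

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```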
Posted 1 week ago
5.0 years
50 Lacs
Dehradun, Uttarakhand, India
Remote
Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
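The posting above emphasises supervised learning and time-series forecasting with scikit-learn. A minimal sketch of lag-feature forecasting on a toy monthly series is below; the series, lag choices, and holdout length are illustrative assumptions (a Prophet or Ray Tune setup would slot into the same place).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Toy monthly spend series standing in for real financial-planning data.
rng = np.random.default_rng(0)
y = pd.Series(100 + np.arange(48) * 2.0 + rng.normal(0, 5, 48), name="spend")

# Feature engineering: previous values as predictors (lag features).
frame = pd.DataFrame({f"lag_{k}": y.shift(k) for k in (1, 2, 3, 12)})
frame["target"] = y
frame = frame.dropna()

train, test = frame.iloc[:-6], frame.iloc[-6:]   # hold out the last 6 months
model = GradientBoostingRegressor(random_state=0).fit(
    train.drop(columns="target"), train["target"]
)
pred = model.predict(test.drop(columns="target"))
print("MAE over holdout:", mean_absolute_error(test["target"], pred))
```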
Posted 1 week ago
5.0 years
3 Lacs
Bengaluru
On-site
This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 5 years
Location: Bengaluru, Karnataka, India
Job Type: full-time
We are seeking a Big Data Engineer with deep technical expertise to join our fast-paced, data-driven team. In this role, you will be responsible for designing and building robust, scalable, and high-performance data pipelines that fuel real-time analytics, business intelligence, and machine learning applications across the organization. If you thrive on working with large datasets, cutting-edge technologies, and solving complex data engineering challenges, this is the opportunity for you.
What You’ll Do
Design & Build Pipelines: Develop efficient, reliable, and scalable data pipelines that process large volumes of structured and unstructured data using big data tools.
Distributed Data Processing: Leverage the Hadoop ecosystem (HDFS, Hive, MapReduce) to manage and transform massive datasets.
Starburst (Trino) Integration: Design and optimize federated queries using Starburst, enabling seamless access across diverse data platforms.
Databricks Lakehouse Development: Utilize Spark, Delta Lake, and MLflow on the Databricks Lakehouse Platform to enable unified analytics and AI workloads.
Data Modeling & Architecture: Work with stakeholders to translate business requirements into flexible, scalable data models and architecture.
Performance & Optimization: Monitor, troubleshoot, and fine-tune pipeline performance to ensure efficiency, reliability, and data integrity.
Security & Compliance: Implement and enforce best practices for data privacy, security, and compliance with global regulations like GDPR and CCPA.
Collaboration: Partner with data scientists, product teams, and business users to deliver impactful data solutions and improve decision-making.
What You Bring
Must-Have Skills
5+ years of hands-on experience in big data engineering, data platform development, or similar roles.
Strong experience with Hadoop, including HDFS, Hive, HBase, and MapReduce.
Deep understanding and practical use of Starburst (Trino) or Presto for large-scale querying.
Hands-on experience with the Databricks Lakehouse Platform, Spark, and Delta Lake.
Proficient in SQL and programming languages like Python or Scala.
Strong knowledge of data warehousing, ETL/ELT workflows, and schema design.
Familiarity with CI/CD tools, version control (Git), and workflow orchestration tools (Airflow or similar).
Nice-to-Have Skills
Experience with cloud environments such as AWS, Azure, or GCP.
Exposure to Docker, Kubernetes, or infrastructure-as-code tools.
Understanding of data governance and metadata management platforms.
Experience supporting AI/ML initiatives with curated datasets and pipelines.
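The role above builds Spark and Delta Lake pipelines on the Databricks Lakehouse. A minimal PySpark aggregation-to-Delta sketch follows; the S3 paths and column names are hypothetical, and it assumes the delta-spark package is available (as it is on Databricks, or locally via `pip install delta-spark` plus the matching Spark configuration).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Hypothetical raw input path and schema for illustration only.
events = spark.read.parquet("s3://example-bucket/raw/events/")

daily = (
    events
    .filter(F.col("event_type").isNotNull())          # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("s3://example-bucket/curated/daily_event_counts/"))  # hypothetical output path
```

On the query side, the same curated Delta table could then be exposed through Starburst/Trino for federated access.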
Posted 1 week ago
5.0 years
50 Lacs
Cuttack, Odisha, India
Remote
Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
5.0 years
50 Lacs
Kolkata, West Bengal, India
Remote
Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
5.0 years
50 Lacs
Guwahati, Assam, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Ranchi, Jharkhand, India
Remote
Posted 1 week ago
7.0 - 10.0 years
20 - 30 Lacs
Bangalore Rural, Bengaluru
Work from Office
"We're Hiring For Generative Ai Engineer Role at Bangalore Location" Position: Generative Ai Engineer Experience: 7+ Years Location: Bangalore Responsibilities Develop and deploy scalable big data and AI solutions using Databricks and Azure. Implement RAG pipelines and integrate GenAI APIs for document and image retrieval. Perform finetuning of LLM models and manage prompt engineering workflows. Ensure the end-to-end solution is optimized for performance and scalability. Collaborate with software and ML teams for integration of AI capabilities. Deploy GenAI projects in production in Azure and Databricks. (Must) Required Qualifications Bachelors or Masters degree in Computer Science or related field. 7+ years of experience in big data and AI development. Strong experience with Databricks, Pyspark, and Python. Experience in deploying GenAI projects in production. Preferred Qualifications Familiarity with RAG architecture and document-based retrieval systems. Experience with Azure OpenAI, LangChain, Pinecone or similar tools. Experience with deploying LLM, VLM Models in Cloud Experience with MLOps tools such as MLFlow or similar tools. Technologies Used Python, Pyspark, Databricks, Azure ML, LangChain, OpenAI API, Pinecone, MLFlow More information: +91 73597 10155 | rushit@tekpillar.com
Posted 1 week ago
5.0 years
50 Lacs
Kochi, Kerala, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Visakhapatnam, Andhra Pradesh, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Indore, Madhya Pradesh, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Posted 1 week ago
5.0 years
50 Lacs
Noida, Uttar Pradesh, India
Remote
Posted 1 week ago
The MLflow job market in India is growing rapidly as companies across industries adopt machine learning and data science technologies. MLflow, an open-source platform for managing the machine learning lifecycle, is in high demand, and job seekers with MLflow expertise have plenty of opportunities to build a rewarding career in this field.
India's major tech hubs are known for their thriving technology industries and have a strong demand for MLflow professionals.
The average salary range for MLflow professionals in India varies with experience:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
Salaries may vary based on factors such as location, company size, and specific job requirements.
A typical career path for MLflow professionals may include roles such as:
1. Junior Machine Learning Engineer
2. Machine Learning Engineer
3. Senior Machine Learning Engineer
4. Tech Lead
5. Machine Learning Manager
With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.
In addition to MLflow, professionals in this field are often expected to have skills in:
- Python programming
- Data visualization
- Statistical modeling
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Cloud computing platforms (e.g., AWS, Azure)
Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.
As you explore opportunities in the MLflow job market in India, continue to upskill, stay current with the latest trends in machine learning, and present your expertise confidently in interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!