Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us
We are surrounded by the world's leading consumer companies led by technology - Amazon for retail, Airbnb for hospitality, Uber for mobility, Netflix and Spotify for entertainment, etc. Food & Beverage is the only consumer sector where the large players are still traditional restaurant companies. At Rebel Foods, we are challenging this status quo as we build the world's most valuable restaurant company on the internet, superfast. The opportunity for us is immense due to the exponential growth of the food delivery business worldwide, which has helped us build 'The World's Largest Internet Restaurant Company' in the last few years. Rebel Foods is currently present in India, Indonesia, UAE, UK & KSA, with close to 50 brands and 4500+ internet restaurants built on The Rebel Operating System. While for us it is still Day 1, we know we are in the middle of a revolution towards creating never-seen-before customer-first experiences. We bring you a once-in-a-lifetime opportunity to disrupt a 500-year-old industry with technology at its core.

Job Description
We are looking for a Data Scientist-II to join our Data Science & Analytics (DSA) Team. You will work with other data scientists, analysts, data engineers, PMs and business teams on some of the toughest and most interesting problems in our industry. Our work spans inventory forecasting, order prediction, marketing spend optimization, delivery time prediction, personalization, customer insights, product recommendations, supply chain planning, capacity optimization, and more. The role involves understanding business problems, designing relevant ML solutions, building & deploying new models, maintaining & enhancing existing models, and automating performance measurement of the same. We are looking for someone who is excited to work in a fast-growing industry, has a spirit of ownership & collaboration, and loves seeing the impact of what they have built at scale!
What our team owns
At Rebel, the Data Science & Analytics team builds and maintains all ML models, rule-based intelligence, automated reports and dashboards used by the organization. We are on a mission to drive business impact via ML and Data Science and to grow adoption of these areas at Rebel. We also want to enable the organization to better understand and effectively use ML and DS. We interact with teams across every business area, such as Supply, Operations, Demand Generation, D2C, Finance, Brands, Customer Delight, Central Planning and many more. We are never short of interesting challenges, and we balance delivering continuous impact and great CX while building strong foundations to drive sustainable ML returns over time.

Specific Qualifications
Strong working knowledge of a variety of machine learning techniques (regression, classification, clustering, decision trees, probabilistic networks, neural networks, Bayesian models, etc.)
Proficient in Python, SQL, and machine learning libraries (e.g., TensorFlow, scikit-learn)
Experience working with varied data types (structured and unstructured) and databases
Experience with machine learning and related technologies such as Python, TensorFlow, Amazon SageMaker
Excellent problem-solving skills with a demonstrated ability to think analytically and strategically

Good-to-have
Experience solving NLP-based problems or working with LLMs / prompt engineering
Experience working with non-tech stakeholders and explaining ML performance in simple language
A love of food (Yes, we have regular tasting sessions at the office!)
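As a flavor of the forecasting work the posting describes, the sketch below fits a gradient-boosted regressor to synthetic daily order counts with weekly seasonality and a growth trend, then forecasts the next week. This is a minimal illustration, not Rebel Foods' actual pipeline; the features and demand pattern are invented for the example.

```python
# Illustrative demand-forecasting sketch (synthetic data, not a real pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
days = np.arange(120)
dow = days % 7
# Synthetic demand: weekend bump, mild growth trend, and noise.
orders = 200 + 30 * (dow >= 5) + 0.5 * days + rng.normal(0, 5, size=days.size)

X = np.column_stack([days, dow])
model = GradientBoostingRegressor(random_state=0).fit(X, orders)

# Forecast the next 7 days from the same two features.
future_days = np.arange(120, 127)
future_X = np.column_stack([future_days, future_days % 7])
forecast = model.predict(future_X)
print([round(float(f)) for f in forecast])
```

In a real setting the feature set would include promotions, weather, and holidays, and the model would be backtested on a rolling horizon rather than a single split.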
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: AI/ML Data Scientist
Location: Kochi
Mode of Interview: In person
Date: 14th June 2025 (Saturday)

Job Description
Be a hands-on problem solver with a consultative approach who can apply Machine Learning & Deep Learning algorithms to solve business challenges
Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
Improve model accuracy to deliver greater business impact
Estimate the business impact of deploying a model
Work with the domain/customer teams to understand business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
Work with tools and scripts for pre-processing the data and feature engineering for model development - Python / R / SQL / cloud data pipelines
Design, develop & deploy Deep Learning models using TensorFlow / PyTorch
Experience in using Deep Learning models with text, speech, image and video data
Design & develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
Design and develop image recognition & video analysis models using Deep Learning algorithms and open-source tools like OpenCV
Knowledge of state-of-the-art Deep Learning algorithms
Optimize and tune Deep Learning models for the best possible accuracy
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., Power BI / Tableau
Work with application teams to deploy models on the cloud as a service or on-prem
Deploy models in a test/control framework for tracking
Build CI/CD pipelines for ML model deployment
Integrate AI/ML models with other applications using REST APIs and other connector technologies
Constantly upskill and stay updated with the latest techniques and
best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology/Subject Matter Expertise
Sufficient expertise in machine learning and the mathematical and statistical sciences
Use of versioning & collaboration tools like Git / GitHub
Good understanding of the landscape of AI solutions - cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Develop prototype-level ideas into a solution that can scale to industrial-grade strength
Ability to quantify & estimate the impact of ML models

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground
Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs of the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience
Keen contributor to open-source communities, and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development & application of Reinforcement Learning
Knowledge of optimization/genetic algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy
Experience with AI & cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker and Google Cloud will be a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. will be a big plus
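The text-classification work this posting lists can be sketched in a few lines with classical tooling: TF-IDF features plus logistic regression. This is a hedged stand-in for the deep-learning NLP stack the role actually calls for (spaCy, TensorFlow, PyTorch); the tiny corpus below is invented for illustration.

```python
# Minimal text-classification sketch: TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "refund my order immediately", "where is my package",
    "great product, very happy", "excellent service, thank you",
    "terrible delay, want my money back", "love it, will buy again",
]
labels = ["complaint", "complaint", "praise", "praise", "complaint", "praise"]

# A pipeline keeps vectorization and the classifier fit together.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["my package is delayed, refund please"]))
```

A production version would swap the vectorizer for transformer embeddings and evaluate on a held-out set rather than the training sentences.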
Posted 3 weeks ago
3.0 - 8.0 years
10 - 18 Lacs
Kolkata, Hyderabad, Pune
Work from Office
JD is below:
Design, develop, and deploy generative AI based applications using AWS Bedrock
Proficiency in prompt engineering and RAG pipelines
Experience in building agentic Generative AI applications
Fine-tune and optimize foundation models from AWS Bedrock for various use cases
Integrate generative AI capabilities into enterprise applications and workflows
Collaborate with cross-functional teams, including data scientists, ML engineers, and software developers, to implement AI-powered solutions
Utilize AWS services (S3, Lambda, SageMaker, etc.) to build scalable AI solutions
Develop APIs and interfaces to enable seamless interaction with AI models
Monitor model performance, conduct A/B testing, and enhance AI-driven products
Ensure compliance with AI ethics, governance, and security best practices
Stay up-to-date with advancements in generative AI and AWS cloud technologies

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field
3+ years of experience in AI/ML development, with a focus on generative AI
Hands-on experience with AWS Bedrock and foundation models
Proficiency in Python and ML frameworks
Experience with AWS services such as SageMaker, Lambda, API Gateway, DynamoDB, and S3
Experience with prompt engineering, model fine-tuning, and inference optimization
Familiarity with MLOps practices and CI/CD pipelines for AI deployment
Ability to work with large-scale datasets and optimize AI models for performance
Excellent problem-solving skills and ability to work in an agile environment

Preferred Qualifications:
AWS Certified Machine Learning - Specialty or equivalent certification
Experience in LLMOps and model lifecycle management
Knowledge of multi-modal AI models (text, image, video generation)
Hands-on experience with other cloud AI platforms (Google Vertex AI, Azure OpenAI)
Strong understanding of ethical AI principles and bias mitigation techniques
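The "RAG pipeline" this posting asks for has two halves: retrieve the most relevant documents for a query, then assemble them into a prompt for a foundation model (e.g. via Amazon Bedrock). The sketch below shows only the retrieval-and-prompt-assembly half with a toy bag-of-words scorer standing in for a real embedding model; the documents and the final model call are assumptions, not a working Bedrock integration.

```python
# Toy retrieval step of a RAG pipeline (bag-of-words cosine similarity
# stands in for an embedding model; the prompt would go to an LLM).
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Bedrock provides managed access to foundation models",
    "S3 stores objects durably across availability zones",
    "Lambda runs code without provisioning servers",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

context = retrieve("which service gives access to foundation models")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: which service gives access to foundation models?"
print(context)
```

In practice the scorer would be a vector store over real embeddings, and `prompt` would be sent to a Bedrock model through the `bedrock-runtime` API.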
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior AI/ML Engineer
Experience: 5+ Years
Location: Remote
Job Type: Contractual

Job Summary:
We are seeking a highly motivated and experienced Senior AI/ML Engineer to join our remote team. In this role, you will design, develop, and deploy cutting-edge AI and machine learning solutions across cloud platforms. The ideal candidate has a passion for innovation, stays up to date with the latest developments in AI, and has hands-on experience with modern AI/ML tools, libraries, and agentic workflows. You will work closely with cross-functional teams to solve complex business problems and drive intelligent automation and decision-making.

Key Responsibilities:
AI/ML Model Development: Design, build, and deploy robust, scalable machine learning and deep learning models using Python, SQL, NoSQL, and standard ML libraries (e.g., Pandas, scikit-learn, PyTorch, TensorFlow).
Cloud Integration: Leverage cloud-based platforms such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, and OpenAI to develop and manage end-to-end AI/ML solutions.
Model Deployment & Lifecycle Management: Automate deployment pipelines and monitor ML models in production for performance, drift, and retraining needs.
Fine-Tuning & Customization: Fine-tune foundation models and LLMs for domain-specific applications using techniques like transfer learning, LoRA, and prompt engineering.
Agentic AI Workflows: Implement and optimize agentic workflows involving task-oriented agents and multi-step reasoning pipelines.
Architecture & Optimization: Architect data-driven solutions by applying advanced ML, statistical modeling, and deep learning techniques to solve real-world business challenges.
Research & Innovation: Stay current with the latest advancements in AI, including GenAI, multi-modal models, and RLHF, and experiment with new frameworks and technologies.
Cross-Functional Collaboration: Work closely with data scientists, data engineers, product managers, and business stakeholders to translate business requirements into technical solutions.

Required Skills & Qualifications:
Programming: Expert-level proficiency in Python and SQL; experience working with NoSQL databases.
ML/DL Libraries: Strong hands-on experience with scikit-learn, Pandas, NumPy, TensorFlow, PyTorch.
Cloud Platforms: Proven experience with at least one major cloud provider (AWS/GCP/Azure), especially with ML tools like SageMaker, Vertex AI, Bedrock, Azure AI Studio, or OpenAI.
Model Deployment: Experience building, deploying, and monitoring models in production environments.
LLMs & Fine-Tuning: Practical experience fine-tuning and integrating large language models (LLMs) using Hugging Face, LangChain, etc.
Agentic AI: Experience with autonomous agents, LangChain Agents, AutoGPT, or similar agentic frameworks.
Data Engineering: Ability to work with structured and unstructured data, data preprocessing, feature engineering, and pipeline creation.
Soft Skills: Strong communication, collaboration, and problem-solving skills; ability to thrive in a fast-paced remote environment.

Preferred Qualifications:
Experience with RESTful APIs, Docker, and MLOps tools.
Familiarity with generative AI tools and multimodal systems.
Understanding of CI/CD for ML pipelines and model monitoring.
Contributions to open-source AI/ML projects or published research.

Why Join Us?
Remote-first culture and flexible work hours
Opportunity to work with the latest AI/ML technologies
Challenging projects with global clients
Competitive compensation and benefits
Learning-centric environment with access to cloud credits, courses, and conferences
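The "monitor ML models in production for drift" responsibility above is commonly implemented with the Population Stability Index (PSI), which compares a model's training-time score distribution against live traffic. The sketch below is one conventional formulation under assumed synthetic data, not this team's actual monitoring code; a rule of thumb treats PSI above roughly 0.25 as significant drift.

```python
# Population Stability Index (PSI) sketch for score-distribution drift.
import math

def psi(expected: list[float], actual: list[float], bins: int = 5) -> float:
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the training range

    def frac(xs, i):
        count = sum(1 for x in xs if edges[i] <= x < edges[i + 1])
        return max(count / len(xs), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

train_scores = [i / 100 for i in range(100)]               # uniform on [0, 1)
stable = [i / 100 for i in range(100)]                     # same distribution
shifted = [min(0.2 + i / 100, 0.99) for i in range(100)]   # moved right

print(round(psi(train_scores, stable), 3), round(psi(train_scores, shifted), 3))
```

A monitoring job would compute this per feature and per model score on a schedule and page the team, or trigger retraining, when the index crosses the chosen threshold.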
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Job Title: AI Engineer
Job Type: Full-time, Contractor
Location: Remote

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary
Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment.

Key Responsibilities
Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI Agent frameworks.
Implement, deploy, and optimize AI solutions on Amazon SageMaker.
Collaborate cross-functionally to integrate AI models into existing platforms and workflows.
Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption.
Document processes, experiments, and model performance with clear and concise written communication.
Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability.
Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations.

Required Skills and Qualifications
Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques.
Strong proficiency in frameworks such as LangChain and LangGraph, and in building and debugging AI agents.
Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker.
Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences.
Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves.
Background in software engineering, computer science, or a related technical discipline.
Strong problem-solving skills accompanied by a collaborative and proactive mindset.

Preferred Qualifications
Experience working with remote or distributed teams across multiple time zones.
Familiarity with prompt engineering and orchestration of complex AI agent pipelines.
A portfolio of successfully deployed GenAI solutions in production environments.
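The "AI Agent" pattern this posting references boils down to a loop: a router (normally an LLM) picks a tool for the task, the tool runs, and the observation feeds the next step. The toy sketch below replaces the LLM router with a hard-coded rule and uses two hypothetical tools, so it only illustrates the dispatch structure that frameworks like LangChain and LangGraph formalize.

```python
# Toy agent dispatch loop: a router picks a tool, the tool produces an
# observation. The router here is a stub; real agents use an LLM.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    # eval with stripped builtins: acceptable only for this toy demo.
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),
    "search": lambda q: f"top result for '{q}'",
}

def route(task: str) -> str:
    """Stand-in for the LLM's tool-selection step."""
    return "calculator" if any(ch.isdigit() for ch in task) else "search"

def run_agent(task: str) -> str:
    tool = route(task)
    observation = TOOLS[tool](task)
    return f"[{tool}] {observation}"

print(run_agent("2 + 3 * 4"))
print(run_agent("latest sagemaker release notes"))
```

A production agent would iterate this loop, letting the model decide when to stop and answer, and would validate tool inputs instead of calling `eval`.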
Posted 3 weeks ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ Years

We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
Develop and maintain web applications using the Django and Flask frameworks.
Design and implement RESTful APIs using Django Rest Framework (DRF).
Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
Build and integrate APIs for AI/ML models into existing systems.
Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
Ensure the scalability, performance, and reliability of applications and deployed models.
Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
Write clean, maintainable, and efficient code following best practices.
Conduct code reviews and provide constructive feedback to peers.
Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of professional experience as a Python Developer.
Proficient in Python with a strong understanding of its ecosystem.
Extensive experience with the Django and Flask frameworks.
Hands-on experience with AWS services for application deployment and management.
Strong knowledge of Django Rest Framework (DRF) for building APIs.
Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
Experience with transformer architectures for NLP and advanced AI solutions.
Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Familiarity with MLOps practices for managing the machine learning lifecycle.
Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
Excellent problem-solving skills and the ability to work independently and as part of a team.
Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
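Among the optimization techniques the posting names, post-training quantization is easy to show in miniature: map float32 weights onto int8 with a single scale factor, shrinking storage roughly 4x at the cost of a small rounding error. This is a hedged sketch of the general idea; real deployments use the quantizers built into PyTorch, TensorFlow, or their serving tools.

```python
# Post-training symmetric int8 quantization of a weight vector.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0       # map the largest weight to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, size=1000).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half the quantization step.
max_err = float(np.abs(weights - restored).max())
print(q.dtype, round(max_err, 4))
```

Per-channel scales and calibration data tighten the error further; the single-scale version above is the simplest correct instance of the idea.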
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Job Summary:
We are looking for a skilled and innovative Machine Learning Engineer to develop, implement, and optimize intelligent systems that leverage data to drive business decisions and enhance product functionality. The ideal candidate will have strong programming skills, a solid understanding of machine learning algorithms, and experience in deploying models into production environments.

Key Responsibilities:
Design and develop scalable ML models and algorithms to solve real-world problems
Analyze large and complex datasets to extract actionable insights
Train, test, and validate models to ensure performance and accuracy
Work closely with data engineers, product teams, and stakeholders to integrate ML models into applications
Research and stay updated on state-of-the-art techniques in machine learning and AI
Optimize models for speed, scalability, and interpretability
Document processes, experiments, and results clearly
Deploy models into production using MLOps tools and practices

Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field
Strong proficiency in Python and ML libraries like scikit-learn, TensorFlow, PyTorch, Keras
Solid understanding of statistical modeling, classification, regression, clustering, and deep learning
Experience with data handling tools (e.g., Pandas, NumPy) and data visualization (e.g., Matplotlib, Seaborn)
Proficiency with SQL and working knowledge of big data technologies (e.g., Spark, Hadoop)
Familiarity with cloud platforms (AWS, Azure, GCP) and MLOps tools (e.g., MLflow, SageMaker)

Preferred Qualifications:
Experience with NLP, computer vision, or recommendation systems
Knowledge of Docker, Kubernetes, and CI/CD for model deployment
Published research or contributions to open-source ML projects
Exposure to agile environments and collaborative workflows
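The "train, test, and validate" workflow this posting describes is conventionally expressed as a scikit-learn Pipeline scored with k-fold cross-validation, so preprocessing is re-fit inside every fold and never leaks test data. A minimal sketch on a stock dataset:

```python
# Train/validate sketch: preprocessing + model in one Pipeline, scored
# with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)  # one accuracy per fold
print(round(float(scores.mean()), 3))
```

Bundling the scaler into the pipeline is the important design choice: fitting it on the full dataset before splitting would silently inflate the validation score.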
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives and building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and are state-of-the-art in automated large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver.
We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation. As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solving customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale and highly available systems. If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!

As An Engineer You Will Be Responsible For
Ownership of a product/feature end-to-end, for all phases from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities in concert with business and project management efforts.
Contributing intellectual property through patents.

The candidate should be an engineer passionate about delivering experiences that delight customers and creating solutions that are robust. He/she should be able to commit to and own the deliveries end-to-end.
About The Team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, like checkout, pre-fulfillment, shipment and customer contact (customer service). We closely partner with the International machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This would act as a deterrent for abusers, while building trust for genuine customers. We use machine learning based models to automate abuse detection in a scalable & efficient manner.
Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN) and use frameworks like PyTorch, TensorFlow and Keras for training & inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph DBs, and we are open to adopting new technologies as per the use case.

Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
Experience programming with at least one software programming language

Preferred Qualifications
3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
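Abuse-detection models of the kind the team describes are typically gradient-boosted classifiers over behavioral features. The sketch below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, with an entirely synthetic labeling rule (young accounts with high return rates), so it illustrates the shape of the approach rather than any real feature set.

```python
# Illustrative abuse-classification sketch (synthetic features and labels;
# sklearn's booster stands in for XGBoost).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
return_rate = rng.uniform(0, 1, n)            # fraction of past orders returned
account_age_days = rng.integers(1, 1000, n)   # days since account creation
# Synthetic ground truth: young accounts with high return rates look abusive.
abusive = ((return_rate > 0.6) & (account_age_days < 200)).astype(int)

X = np.column_stack([return_rate, account_age_days])
X_tr, X_te, y_tr, y_te = train_test_split(X, abusive, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(round(float(clf.score(X_te, y_te)), 3))
```

The real-time, low-latency serving constraint the posting mentions is the harder part: the trained model would sit behind a hosted endpoint (e.g. SageMaker) with feature lookups fast enough to run at checkout.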
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Haryana
Job ID: A2992237
Posted 3 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Company Description CloudMoyo is an award-winning and data-driven engineering firm with deep expertise in analytics, application development, digital strategies and generative AI solutions. Our goal is to envision and develop solutions that reinvigorate businesses and build their best futures by propelling digital transformation with resilience. We work alongside various partners, like Microsoft and Icertis, to bring forward robust, inventive, and scalable solutions tailored to your business needs. Our expertise is founded on the efforts of our talented employees, as well as our FIRST with FORTE values we champion. FORTE means Fairness, Openness, Respect, Teamwork, and Execution. Our values here lead to open and honest conversations that allow for greater collaboration, leading to best-in-class execution that delights our customers. In 2025, we introduced FIRST with FORTE, with the goal to be a Fearless, Innovative, and Resilient organization with Substantial impacts while being a Trustworthy partner. We pride ourselves on being one of Seattle’s Best Places to Work for the past 6 years, earning the #1 rank in both 2021 and 2024! In 2021, we earned the Icertis Partner of the Year Award – FORTE Values, and in 2024 earned the Icertis Highest Delivery NPS Award. Interested in joining our team? Keep reading! Life at CloudMoyo Here at CloudMoyo, we are driven by our values of FORTE, which stands for Fairness, Openness, Respect, Teamwork, and Execution. We strongly believe that our expertise is founded on the efforts of our employees, who reflect our FORTE values in their work. In 2025, we introduced FIRST with FORTE. This addition to our values aligns with our goal to be a Fearless, Innovative, and Resilient organization with Substantial impacts while being a Trustworthy partner. It’s an extension of FORTE that focuses on our values as a larger organization, built on great employees. 
Our workplace culture is driven by an unshakable commitment to building a world-class workplace for all employees, one characterized by meaningful interactions, flat hierarchy, challenging assignments, opportunities to grow with the best in the field, and exciting rewards and benefits. If you’re a talented, hard-working, and fun-loving person looking to grow, then CloudMoyo may be a great fit for your next professional adventure. Curious about what it’s like working at CloudMoyo? Hear from CloudMoyo employees on Glassdoor: check out the reviews.

Working during COVID-19
We responded to COVID-19 and its impact on our lives and businesses alike with a “4 Rings of Responsibility” approach. CloudMoyo employees worked 100% remotely during COVID-19; however, we have now adopted a hybrid work environment post-COVID. Our Four Rings of Responsibility include: Take Care of Self, Take Care of Family, Take Care of Community, and Take Care of Business. The COVID-19 pandemic also changed the way we view health and wellness, and from our Four Rings of Responsibility came our WellNest initiative. WellNest encourages employees to #TakeCareofSelf, ensuring wellbeing at a physical, emotional, and psychological level. WellNest provides avenues to indulge, collaborate as teams, and help those around you maintain their wellbeing, whether that’s pursuing a new hobby, attending a solo experience, or exploring the world with your family.

Job Description
We are looking for a candidate with 5+ years in Machine Learning and good exposure to Python/R/cognitive services who will help us find insights from big data and build machine learning algorithms for our machine learning systems. Your primary focus will be applying data mining techniques, doing statistical analysis, and writing Python/R code to develop machine learning algorithms.
Responsibilities
Requirement gathering, solutioning & client interaction
Translate requirements into AI/ML solutions
Lead & mentor a team of Data Scientists
Data mining over varied data sources
Identify key evaluation metrics
Processing, cleansing, and verifying the integrity of data used for analysis
Writing Python/R code that uses state-of-the-art techniques for building and optimizing classifiers using machine learning techniques
Ad-hoc analysis and clean presentation of results

Qualifications
Experience with common data science libraries such as NumPy, Pandas, etc.
Experience with text data using NLP is mandatory
Scripting and programming skills with Python/R/cognitive services
Frameworks: Keras, scikit-learn, TensorFlow, Pandas, NumPy, Matplotlib, Seaborn (these are mandatory)
Knowledge of MLOps and deployment pipeline products like AWS SageMaker and Azure ML
Understanding of machine learning techniques and algorithms and Natural Language Processing techniques
Data wrangling skills using SQL or NoSQL queries
Understanding of large language models (LLMs) & prompt engineering catering to multiple tasks such as text generation, Q&A, summarization, translation, etc.
Experience with big data and data visualization libraries will be appreciated
Should have experience with at least 5 ML algorithms
Experience tuning ML models for accuracy & F1 score
Certifications in advanced analytics skills are good to have
Good communication and interpersonal skills with a bachelor's/master's degree in computer science
Must have actual project/product development experience using the above listed skills

Additional Information
Why Join Us?
Opportunity to lead and shape the growth of a critical business practice.
Collaborative and supportive work environment focused on innovation and excellence.
Competitive compensation package with performance-based incentives.
Comprehensive benefits and professional development opportunities.
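The "tuning ML models for accuracy & F1 score" qualification often amounts to choosing a classifier's decision threshold on validation data rather than defaulting to 0.5. The sketch below sweeps thresholds over synthetic validation scores and picks the one maximizing F1; the scores and labels are invented for illustration.

```python
# Threshold sweep to maximize F1 on (synthetic) validation scores.
def f1_at(threshold, scores, labels):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0,   0,   0,    1,   0,    1,   1,   1,   1,   1]

# Evaluate F1 at thresholds 0.05, 0.10, ..., 0.95 and keep the best.
best = max((t / 100 for t in range(5, 100, 5)),
           key=lambda t: f1_at(t, scores, labels))
print(best, round(f1_at(best, scores, labels), 3))
```

On a real model the sweep would run on a held-out set, and the chosen threshold would ship alongside the model as part of its deployment configuration.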
If you are a motivated sales leader passionate about delivering top-notch staffing solutions, we invite you to join our team and make a lasting impact!
Posted 3 weeks ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Cloud AI Engineer The opportunity We are the only professional services organization that has a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset Management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. We are looking for a talented and motivated AI Engineer to join our team and work alongside our AI Architect. The ideal candidate will have a strong background in AI/ML, data engineering, and cloud technologies, with a focus on AI & Generative AI (GenAI) technologies. You will be responsible for developing, deploying, and optimizing AI models and solutions, ensuring they meet the performance, scalability, and security requirements of our organization.
EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design, and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations. Your Key Responsibilities AI/ML Model Development Develop and train machine learning models using frameworks such as Autogen, PydanticAI, LangChain, TensorFlow, PyTorch, Scikit-learn. Leverage large language models (LLMs) and work on cloud LLM deployments. Build AI agents and have a solid understanding of agentic frameworks. Implement and fine-tune AI models for various business applications. AI/ML Deployment and Optimization Deploy machine learning models on cloud platforms (e.g., AWS, Azure, GCP). Optimize AI pipelines for both real-time and batch processing. Understand concepts around model fine-tuning, distillation, and optimization. Monitor and maintain the performance of deployed models, ensuring they meet business requirements. Cloud Integration Integrate AI/ML models with existing cloud infrastructure. Utilize cloud services (e.g., AWS SageMaker, Azure AI, GCP AI Hub) to manage AI workloads. Ensure compliance with data privacy and security standards.
Collaboration and Support Work closely with the AI Architect to design scalable AI solutions. Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders. Provide technical support and troubleshooting for AI-related issues. Continuous Learning and Innovation Stay updated on the latest advancements in AI/ML technologies. Experiment with new AI tools and frameworks to enhance existing solutions. Contribute to the adoption of best practices for AI model lifecycle management. Skills And Attributes For Success Proficiency in AI/ML frameworks such as Autogen, PydanticAI, LangChain, TensorFlow, PyTorch, Scikit-learn. Experience with cloud platforms: AWS, Azure, GCP. Understanding of data engineering, ETL pipelines, and big data tools (e.g., Apache Spark, Hadoop). Hands-on experience with containerization and orchestration tools (Docker, Kubernetes). Knowledge of DevOps practices for AI/ML (MLOps). Strong understanding of deep learning models, including DNN, LSTM, Transformers, RL, and GNN. Experience with Generative AI technologies, including LLMs, model fine-tuning, distillation, and optimization. Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Ability to work in a fast-paced, collaborative environment. Preferred Qualifications: Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Azure AI Engineer) are a plus. Experience: At least 5 years in AI-related roles Proven experience in developing and deploying AI solutions with Python, JavaScript Strong background in machine learning, deep learning, and data modelling. Experience in integrating AI models with cloud infrastructure. Agile Methodologies: Familiarity with Agile development practices and methodologies. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network.
We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
Looking for a Python Gen AI Engineer with 5 years of total experience; immediate joiners preferred. Qualifications required o Proficiency in Python programming with a strong grasp of syntax and advanced features. o Advanced expertise in Python and SQL for backend development and database management. o A strong foundation in Machine Learning (ML) and Deep Learning (DL) frameworks such as Scikit-learn, TensorFlow, and PyTorch, enabling the development of robust predictive models. o Knowledge of large language models (LLMs), the retrieval-augmented generation (RAG) technique, and AI agents. o Competence in data manipulation techniques using libraries such as Pandas/DuckDB and NumPy for handling large datasets. o Proficiency in FastAPI and/or Flask for API-driven development. o Proficiency in using Linux operating systems. o Familiarity with Docker for containerization and deployment. o Familiarity with cloud platforms (preferably AWS) for scalable AI solutions. o Knowledge of Git for version control, including branching, merging, and resolving conflicts in a collaborative development environment. o Proficiency in developing, deploying, and monitoring end-to-end projects and services utilizing AWS Bedrock LLM models on AWS cloud environments (e.g., EC2 instances, AWS Bedrock, and SageMaker). o Experience in deploying and scaling AI solutions using CI/CD, ensuring they are robust, efficient, and capable of handling real-world workloads. o Versatility in working with diverse data sources, including structured databases, unstructured text, PDFs, and other formats, showcasing adaptability and problem-solving skills.
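The qualifications above call out the retrieval-augmented generation (RAG) technique. As an illustrative sketch only, the core retrieve-then-prompt loop can be shown with a toy bag-of-words retriever over made-up documents; in a real system an embedding model and vector database would replace the `embed` and `retrieve` stand-ins below:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': token counts (stands in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Stuff the retrieved context into the prompt sent to an LLM
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Shipping is free for orders above 50 dollars.",
    "Support is available Monday to Friday.",
]
print(retrieve("what is the refund policy", docs, k=1)[0])
```

The prompt produced by `build_prompt` would then be passed to whichever LLM endpoint the stack uses (e.g. a Bedrock model, as the posting mentions).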
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: AI/ML Developer (5 Years Experience) Location: Remote Job Type: Full-time Experience: 5 Years Job Summary: We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, model evaluation, and experience with production-level ML pipelines. Key Responsibilities Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets. ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker. Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure). Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques. Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications. Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable. Required Skills & Qualifications Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 5 years of experience in AI/ML development. Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Strong understanding of statistics, data structures, and ML/DL algorithms. Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production. Experience with CI/CD tools and containerization (Docker, Kubernetes).
Familiarity with SQL and NoSQL databases. Excellent problem-solving and communication skills. Preferred Qualifications Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK). Knowledge of MLOps best practices and tools. Experience with version control systems like Git. Familiarity with big data technologies (Spark, Hadoop). Contributions to open-source AI/ML projects or publications in relevant fields.
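The responsibilities above mention drift detection for deployed models. One widely used technique is the Population Stability Index (PSI); a minimal pure-Python sketch (using synthetic score distributions, not any particular monitoring product) might look like:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    A common reading: PSI < 0.1 no drift, 0.1-0.25 moderate, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    span = hi - lo

    def frac(sample):
        # Histogram the sample over the baseline's range, as fractions per bin
        counts = [0] * bins
        for x in sample:
            i = 0 if span == 0 else min(int((x - lo) / span * bins), bins - 1)
            counts[max(i, 0)] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform scores on [0, 1)
shifted = [min(x + 0.4, 0.99) for x in baseline]  # distribution shifted upward
print(round(psi(baseline, baseline), 4), round(psi(baseline, shifted), 4))
```

In a retraining pipeline, a PSI above the chosen threshold on live model scores or input features would trigger an alert or a retraining job.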
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Software Engineer – Backend (Python) Experience- 6 to 8 Years Location- Pan India (Hybrid) Notice Period- Immediate-30 Days Must have skills: • Experience with web development frameworks such as Flask, Django or FastAPI. • Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn etc. • Experience with concurrent programming designs such as AsyncIO. • Experience with unit and functional testing frameworks. • Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS. • Experience with CI/CD practices, tools, and frameworks. Nice to have skills: • Experience with Apache Kafka and developing Kafka client applications in Python. • Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow. • Experience with big data processing frameworks, preferably Apache Spark. • Experience with containers (Docker) and container platforms like AWS ECS or AWS
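The must-have skills above include concurrent programming with AsyncIO. As a small illustrative example only (simulated delays stand in for real network or database calls), `asyncio.gather` lets independent waits overlap instead of running back to back:

```python
import asyncio
import time

async def fetch(name, delay):
    # Simulates a non-blocking I/O call (e.g. an outbound HTTP request)
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main():
    start = time.perf_counter()
    # Three 0.1s "requests" run concurrently, so the batch takes ~0.1s, not 0.3s
    results = await asyncio.gather(*(fetch(f"req{i}", 0.1) for i in range(3)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

This is the same pattern ASGI frameworks like FastAPI rely on under Uvicorn: one event loop servicing many in-flight requests while each awaits its I/O.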
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
India
On-site
About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Company Overview At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change. Position overview We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. 
This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building platforms that enable self-serve ML pipelines while enabling seamless deployments. Responsibilities Platform Engineering Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines. Real-Time ML Deployment Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic. Data Engineering Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB. CI/CD and Automation Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows. Documentation and Collaboration Create and maintain detailed technical documentation, including high-level and low-level architecture designs. Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira. Experience Required Qualifications 5-10 years of experience in IT 5-8 years of experience in platform backend engineering 1 year of experience in DevOps & data engineering roles.
Hands-on experience with real-time ML model deployment and data engineering workflows. Technical Skills Strong expertise in Python and experience with Pandas, PySpark, and FastAPI. Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker. Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3. Proven experience building and optimizing distributed data pipelines using Databricks and PySpark. Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL. Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks. Hands-on experience with observability tools like New Relic for monitoring and troubleshooting. We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
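The role above centers on model versioning and lifecycle management. As a rough conceptual sketch only (an in-memory toy with placeholder lambda "models", not SageMaker Model Registry's or MLflow's actual API), a registry that tracks versions and serves whichever one is promoted to production could look like:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelRegistry:
    """Minimal in-memory stand-in for a model registry with versioning
    and a 'production' alias, in the spirit of SageMaker or MLflow registries."""
    versions: dict = field(default_factory=dict)
    production: Optional[str] = None

    def register(self, version, predict_fn):
        # Record a new model version alongside when it was registered
        self.versions[version] = {"fn": predict_fn, "registered_at": time.time()}

    def promote(self, version):
        # Point the production alias at an existing version
        if version not in self.versions:
            raise KeyError(f"unknown version {version!r}")
        self.production = version

    def predict(self, features):
        # Serve predictions from whichever version is currently in production
        if self.production is None:
            raise RuntimeError("no production model")
        return self.versions[self.production]["fn"](features)

registry = ModelRegistry()
registry.register("v1", lambda x: sum(x))          # placeholder "models"
registry.register("v2", lambda x: sum(x) / len(x))
registry.promote("v2")
print(registry.predict([2, 4, 6]))  # served by v2 -> 4.0
```

The same promote/rollback shape is what a self-serve platform exposes through its API layer, with real artifacts in S3 and endpoints on SageMaker instead of in-process lambdas.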
Posted 3 weeks ago
4.0 - 6.0 years
25 - 30 Lacs
Bengaluru
Work from Office
3+ years of work experience in Python programming for AI/ML, deep learning, and Generative AI model development Proficiency in TensorFlow/PyTorch, Hugging Face Transformers and LangChain libraries Hands-on experience with NLP, LLM prompt design and fine-tuning, embeddings, vector databases and agentic frameworks Strong understanding of ML algorithms, probability and optimization techniques 6+ years of experience in deploying models with Docker, Kubernetes, and cloud services (AWS Bedrock, SageMaker, GCP Vertex AI) through APIs, and using MLOps and CI/CD pipelines Familiarity with retrieval-augmented generation (RAG), cache-augmented generation (CAG), retrieval-integrated generation (RIG), low-rank adaptation (LoRA) fine-tuning Ability to write scalable, production-ready ML code and optimized model inference Experience with developing ML pipelines for text classification, summarization and chat agents Prior experience with SQL and NoSQL databases, and Snowflake/Databricks
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Technical Skills & Expertise Programming: Expert-level Python (5+ years) – pandas, NumPy, Scikit-learn, FastAPI Strong in SQL and NoSQL (MongoDB, DynamoDB) Machine Learning & Deep Learning: Hands-on with Scikit-learn, PyTorch, and TensorFlow Fine-tuning LLMs using frameworks like HuggingFace Transformers Experience with Agentic AI workflows (e.g., LangChain, AutoGPT) Cloud Platforms: AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio Model deployment using managed services and containers (Docker, ECS, Cloud Run) CI/CD pipelines for ML (GitHub Actions, SageMaker Pipelines, Vertex Pipelines) OpenAI & Generative AI: Prompt engineering, retrieval-augmented generation (RAG), embeddings Integration with OpenAI APIs and model fine-tuning (Davinci, GPT-4-turbo, Gemini) Soft Skills & Mindset Strong research mindset and hunger to explore emerging trends (RLHF, SLMs, foundation models) Proven experience working in fast-paced, agile remote teams Architecture mindset – capable of designing scalable AI/ML systems Business-first thinking – aligns AI solutions with real-world outcomes Skills: SageMaker Pipelines, ECS, PyTorch, Python, NoSQL, OpenAI APIs, SQL, AWS, Vertex Pipelines, prompt engineering, LangChain, HuggingFace Transformers, Docker, TensorFlow, GCP, retrieval-augmented generation, Azure, embeddings, Scikit-learn, Cloud Run, AutoGPT, GitHub Actions
Posted 3 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/LLM Architect Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for: AI-powered multimodal data harmonization and analytics for drug discovery and development IVD development, validation, and regulatory approval Clinical NGS interpretation, reporting, and adoption With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries! What will you do? Lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications. Architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance. Design and implement data integration strategies to unify and streamline diverse data sources, creating a consistent and cohesive data landscape for AI model development. Develop efficient and programmatic methods for synthesizing large volumes of data, extracting relevant features, and preparing data for AI model training and validation. Leverage advanced feature engineering techniques and quantitative methods, including statistical modeling, machine learning, deep learning, and generative AI, to implement, validate, and optimize AI models for accuracy, reliability, and performance. Simplify data presentation to help stakeholders easily grasp insights and make informed decisions. Maintain a deep understanding of the latest advancements in AI and generative AI, including various model architectures, training methodologies, and evaluation metrics. 
Identify opportunities to leverage generative AI to securely and ethically address business needs, optimize existing processes, and drive innovation. Contribute to project management processes, providing regular status updates, and ensuring the timely delivery of high-quality AI solutions. Primarily responsible for contributing to project delivery and maximizing business impact through effective AI solution architecture and implementation. Occasionally contribute technical expertise during pre-sales engagements and support internal operational improvements as needed. What do you bring to the table? A bachelor's or master's degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Engineering) is required. The ideal candidate will have a strong background in designing and implementing end-to-end AI/ML pipelines, including feature engineering, model training, and inference. Experience with Generative AI pipelines is needed. 8+ years of experience in AI/ML development, with at least 3+ years in an AI architecture role. Fluency in Python, SQL, and NoSQL is essential. Experience with common data science libraries such as pandas and Scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required. Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud. Our Core Values People first. We create collaborative and supportive environments by operating with respect and flexibility to promote mental, emotional and physical health. We practice empathy by treating others the way they want to be treated and assuming positive intent. We are proud of our inclusive diverse team and humble ourselves to learn about and build our connection with each other. Patient focused. We act with swift determination without sacrificing our expectations of quality.
We are driven by providing exceptional solutions for our customers to positively impact patient lives. Considering what is at stake, we challenge ourselves to develop the best solution, not just the easy one. Integrity. We hold ourselves accountable and strive for transparent communication to build trust amongst ourselves and our customers. We take ownership of our results as we know what we do matters and collectively we will change the healthcare industry. We are thoughtful and intentional with every customer interaction understanding the overall impact on human health. Curious. We ask questions and actively listen in order to learn and continuously improve. We embrace change and the opportunities it presents to make each other better. We strive to be on the cutting edge of science and technology innovation by encouraging creativity. Impactful. We take our social responsibility with the seriousness it deserves and hold ourselves to a high standard. We improve our sustainability by encouraging discussion and taking action as it relates to our natural, social and economic resource footprint. We are devoted to our humanitarian mission and look for new ways to make the world a better place. Velsera is an Equal Opportunity Employer: Velsera is proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, colour, gender, religion, marital status, domestic partner status, age, national origin or ancestry.
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Madison Logic: Our team is reshaping B2B marketing and having fun in the process! When joining Madison Logic, you are committing to giving 100% and always striving for more. As a truly global company, we take pride in a diverse culture free from gender, racial, and other forms of bias. Our Vision: We empower B2B organizations globally to convert their best accounts faster Our Values: URGENCY Lead with Action. Prioritize Follow-up. ACCOUNTABILITY Don't Point Fingers. Take Responsibility. INNOVATION Think Big. Innovate. RESPECT Respect Customers. Respect Each Other. INTEGRITY Act Ethically. Lead by Example. At ML you will work with & learn from an incredible group of people who care about your success as much as they care about their own. Our team is at the heart of what we do and our success starts with you! About the Role: We are currently seeking a Senior Data Engineer to play a crucial role in shaping the future of our data and analytics capabilities. As a Senior Data Engineer at Madison Logic, your responsibilities will include designing, constructing, and deploying scalable data pipelines. Additionally, you will be responsible for developing APIs that will empower our machine learning products and features. Your expertise will be invaluable in refining data models across various components of our data infrastructure to accommodate the growing demands of data processing and analytics at Madison Logic. In this highly collaborative position, you will closely collaborate with product, engineering, and data teams to achieve our business objectives. Responsibilities: Develop and maintain the core data pipelines, involving the creation of production-level SQL and Python code to fuel our platforms. Adapt and enhance data models and data schemas to align with both business and engineering requirements. Conduct data analysis to contribute to the enhancement of overall business performance. Identify and select optimal data sources for specific analytical tasks. 
Establish procedures for data mining, data modeling, and data production. Collaborate with internal and external partners to address challenges and ensure successful outcomes. Basic Qualifications: On-site working at the ML physical office, 5 days per week, required Ability to work UK Shift Timing (11:00am – 8:00pm Local Time) required Fluent in English language (verbal and written) and possessing a clear and concise communication style. Educational Background: Possess a Bachelor's degree in computer science, statistics, or mathematics. Programming Expertise: 5+ years of experience with Python, with the ability to write production-level code. SQL Proficiency: 5+ years of experience in SQL, with excellent skills in navigating multiple data tables and comprehending data models. Cloud Computing: 3+ years of hands-on experience with cloud computing services, particularly AWS (Amazon Web Services). Data Architecture: Proven experience in designing data architectures, including Kafka, Data Warehouses and Operational Data Stores (ODS). Cloud-Based Analytics: Possess a strong understanding of cloud-based analytics platforms, such as Snowflake and AWS SageMaker. Data Workflow Management: Ideally, possess at least 1 year of experience with data workflow management tools, with Airflow experience being a plus. Data Cleaning: Be skilled in data cleaning and standardization processes. SQL Engine: Exhibit an excellent understanding of SQL engines and the capability to perform advanced performance tuning. Desired Characteristics: Self-sufficient and proactive nature, able & comfortable "figuring things out", resorting to escalation only after exhausting all other options Strong sense of urgency required Exceptional communication skills, both verbal and written, with a knack for explaining complex concepts in a clear & concise manner across all levels and functions Team members are encouraged to work collaboratively with an emphasis on results, not on hierarchy or titles.
India-Specific Benefits 5 LPA Medical Coverage Life Insurance Provident Fund Contributions Learning & Development Stipend (Over-And-Above CTC) Wellness Stipend (Over-And-Above CTC) Transportation available for female team-members with shifts starting or ending between the hours of 9:30pm and 7:00am Welcoming in-office environment (located within AWFIS co-working space, Amanora Mall) Team members are encouraged to work collaboratively with an emphasis on results, not on hierarchy or titles. Expected Compensation: (Dependent upon Experience) Fixed CTC: ₹23,00,000 - ₹27,00,000 a year Work Environment: We offer a mix of in-office and hybrid working. Hybrid remote work arrangements are not available for all positions. Please refer to the job posting detail to determine what in-office requirements apply. Where applicable, hybrid WFH work must be conducted from your home office located in a jurisdiction in which Madison Logic has the legal right to operate. WFH requires availability and responsiveness on a full-time basis from a distraction free environment with access to high-speed internet. Please inquire for more details. Pay Transparency/Equity: We are committed to paying our team equitably for their work, commensurate with their individual skills and experience. Salary Range and additional compensation, including discretionary bonuses and incentive pay, are determined by a rigorous review process taking into account the experience, education, certifications and skills required for the specific role, equity with similarly situated team members, as well as employer-verified region-specific market data provided by an independent 3rd party partner. We will provide more information about our perks & benefits upon request. Our Commitment to Diversity & Inclusion: Madison Logic is proud to be an equal opportunity employer.
We are committed to equal employment opportunity regardless of sex, race, color, religion, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. Privacy Disclosure: All of the information collected in this form and/or by your application by submission of your online profile is necessary and relevant to the performance of the job applied for. We will process the information provided by you in this form, your CV (including physical and online resume profiles), by the referees you have noted, and by the educational institutions with whom we may verify your qualifications, in accordance with our privacy policy and for recruitment purposes only. For more information on how we process the information you have provided, including relevant lawful bases (where relevant), please see our privacy policy, which is available on our website ( https://www.madisonlogic.com/privacy/ ).
Posted 3 weeks ago
7.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a talented and versatile Analytics & AI Specialist to join our dynamic team. This role combines expertise in general analytics, artificial intelligence (AI), generative AI (GenAI), forecasting techniques, and client management to deliver innovative solutions that drive business success. The ideal candidate will work closely with clients, leverage AI technologies to enhance data-driven decision-making, and apply forecasting models to predict business trends and outcomes.

Responsibilities:

Data Analytics & Insights:
- Conduct advanced data analysis across various datasets; identify patterns, trends, and insights that provide actionable business intelligence.
- Work with cross-functional teams to interpret business problems and develop data-driven strategies for performance optimization.
- Design and implement interactive dashboards and reports using data visualization tools like Power BI.

AI & Machine Learning:
- Apply AI and machine learning techniques (e.g., supervised learning, unsupervised learning, deep learning) to solve complex business problems.
- Develop, train, and deploy machine learning models for predictive analysis, classification, and recommendation systems.
- Utilize generative AI (GenAI) models to create synthetic data, automate processes, and develop AI-driven content or solutions.

Forecasting & Predictive Modeling:
- Build and maintain statistical forecasting models (e.g., ARIMA, Exponential Smoothing, Prophet) and machine learning-based models to predict future trends and business outcomes.
- Perform demand forecasting, sales forecasting, and trend analysis for business planning and resource allocation.
- Continuously refine models based on historical data, seasonality, and external factors to improve prediction accuracy.

Client Management & Consulting:
- Serve as a trusted advisor to clients, understanding their business needs and providing tailored data-driven solutions to improve performance.
- Communicate technical concepts and results to non-technical stakeholders in a clear, actionable manner.
- Develop presentations and reports for clients, highlighting insights, recommendations, and the value of AI-driven solutions.
- Collaborate with clients to define KPIs, metrics, and objectives for analytics projects and ensure the delivery of high-quality outcomes.

AI Strategy & Innovation:
- Stay updated on the latest advancements in AI, generative AI, and machine learning techniques, integrating these technologies into business solutions.
- Identify new opportunities to apply AI and analytics to improve efficiency, productivity, and customer experience for clients.
- Assist clients in scaling their AI initiatives, from proof-of-concept to full-scale deployment, and guide them through the AI lifecycle.

Key Technical Skills:

General Analytics:
- Strong knowledge of statistical analysis, data visualization, and data wrangling techniques.
- Proficiency with analytics tools such as Excel, R, Python (pandas, NumPy), and SQL for querying and analyzing large datasets.

AI & Machine Learning:
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, scikit-learn, PyTorch, Keras).
- Knowledge of generative AI (GenAI) tools and technologies, including GPT models, GANs (generative adversarial networks), and transformer models.
- Familiarity with AI cloud platforms (e.g., Google AI, AWS SageMaker, Azure AI).

Forecasting:
- Expertise in time series forecasting methods (e.g., ARIMA, Exponential Smoothing, Prophet) and machine learning-based forecasting models.
- Experience applying predictive analytics and building forecasting models for demand, sales, and resource planning.

Data Visualization & Reporting:
- Expertise in creating interactive reports and dashboards with tools like Tableau, Power BI, or Google Data Studio.
- Ability to present complex analytics and forecasting results in a clear and compelling way to stakeholders.
Client Management & Communication:
- Strong client-facing skills with the ability to manage relationships and communicate complex technical concepts to non-technical audiences.
- Ability to consult and guide clients on best practices for implementing AI-driven solutions.
- Excellent written and verbal communication skills for client presentations, technical documentation, and report writing.

Desired Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Engineering, or a related field.
- 7-10 years of experience in data analytics, AI, forecasting, and client management, preferably in a consulting or client-facing role.
- Proven experience in implementing AI models, forecasting solutions, and data-driven strategies in a business context.
- A passion for continuous learning and staying updated with the latest trends in AI and machine learning technologies.
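The forecasting methods this role names (ARIMA, Exponential Smoothing, Prophet) normally come from libraries such as statsmodels or prophet; purely as an illustration of the underlying idea, here is a minimal pure-Python sketch of simple exponential smoothing. The sales series and smoothing factor are invented for the example.

```python
def simple_exp_smoothing(series, alpha=0.5):
    """Smooth a series in place order; the last smoothed value serves as the
    one-step-ahead forecast. alpha in (0, 1] weights recent observations."""
    if not series:
        raise ValueError("series must be non-empty")
    level = series[0]
    smoothed = [level]
    for y in series[1:]:
        # New level: blend the latest observation with the previous level
        level = alpha * y + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

# Illustrative weekly sales figures (invented)
sales = [100, 110, 105, 115, 120]
smoothed = simple_exp_smoothing(sales, alpha=0.5)
forecast = smoothed[-1]  # next-period forecast
```

A smaller alpha smooths more aggressively; production libraries additionally fit alpha (and trend/seasonality terms, as in Holt-Winters) from the data rather than fixing it by hand.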
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About TwoSD (2SD Technologies Limited) TwoSD is the innovation engine of 2SD Technologies Limited, a global leader in product engineering, platform development, and advanced IT solutions. Backed by two decades of leadership in technology, our team brings together strategy, design, and data to craft transformative solutions for global clients. Our culture is built around cultivating talent, curiosity, and collaboration. Whether you're a career technologist, a self-taught coder, or a domain expert with a passion for real-world impact, TwoSD is where your journey accelerates. Join us and thrive. At 2SD Technologies, we push past the expected, with insight, integrity, and a passion for making things better. Role Overview We are hiring a Solution Architect with a proven track record in SaaS platform architecture, AI-driven solutions, and CRM/enterprise systems like Microsoft Dynamics 365. This is a full-time position based in Gurugram, India, for professionals who thrive on solving complex problems across cloud, data, and application layers. You'll design and orchestrate large-scale platforms that blend intelligence, automation, and multi-tenant scalability, powering real-time customer experiences, operational agility, and cross-system connectivity.
Key Responsibilities:
- Architect cloud-native SaaS solutions with scalability, modularity, and resilience at the core
- Design end-to-end technical architectures spanning CRM systems, custom apps, AI services, and data pipelines
- Lead technical discovery, solution workshops, and architecture governance with internal and client teams
- Drive the integration of Microsoft Dynamics 365 with other platforms, including AI/ML services and business applications
- Create architectural blueprints and frameworks for microservices, event-driven systems, and intelligent automation
- Collaborate with engineers, data scientists, UX/UI designers, and DevOps teams to deliver platform excellence
- Oversee security, identity, compliance, and performance in high-scale environments
- Evaluate and introduce modern tools, frameworks, and architectural patterns for enterprise innovation

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's is a plus)
- 7+ years of experience in enterprise application architecture
- Hands-on expertise in Microsoft Dynamics 365 CE/CRM with complex integrations
- Experience architecting and delivering SaaS applications on cloud platforms (preferably AWS/Azure/GCP)
- Familiarity with LLM APIs, AI orchestration tools, or machine learning workflows
- Proven ability to lead multi-team and multi-technology architecture efforts
- Deep understanding of security, multi-tenancy, data privacy, and compliance standards

Preferred Qualifications:
- Microsoft Certified: Dynamics 365 + Azure/AWS Architect certifications
- Experience with AI platform components like OpenAI, LangChain, or Azure/AWS services
- Experience designing or re-architecting legacy monoliths into cloud-native microservices
- Familiarity with DevOps and Infrastructure as Code (IaC) practices using Terraform or Bicep
- Experience integrating event-based systems using AWS, Azure Event Grid, Service Bus, or Kafka
- Exposure to enterprise observability tools and performance monitoring strategies

Core Competencies:
- Enterprise SaaS Architecture
- Cloud-Native Platform Design (Azure preferred)
- CRM + AI Integration Strategy
- End-to-End System Thinking
- Cross-Functional Collaboration & Mentorship
- Future-Proof Solution Design & Documentation

Tools & Platforms:
- CRM/ERP: Microsoft Dynamics 365 CE, Power Platform, Dataverse
- AI & Data: OpenAI, AWS SageMaker, AWS Bedrock, Azure Cognitive Services, LangChain, MLflow
- Cloud: Azure (App Services, API Management, Logic Apps, Functions, Cosmos DB)
- DevOps & IaC: GitHub Actions, Azure DevOps, Terraform, Bicep
- Integration: REST/GraphQL APIs, Azure Service Bus, Event Grid, Kafka
- Modeling & Docs: Lucidchart, Draw.io, ArchiMate, PlantUML
- Agile & Collaboration: Jira, Confluence, Slack, MS Teams

Why Join TwoSD? At TwoSD, innovation isn't a department; it's a mindset. Here, your voice matters, your expertise is valued, and your growth is supported by a collaborative culture that blends mentorship with autonomy. With access to cutting-edge tools, meaningful projects, and a global knowledge network, you'll do work that counts and evolve with every challenge.

Solution Architect – SaaS Platforms, AI Solutions & Enterprise CRM
Position: Solution Architect
Location: Gurugram, India (Onsite/Hybrid)
Company: TwoSD (2SD Technologies Limited)
Industry: Enterprise Software / CRM / Cloud Platforms
Employment Type: Permanent
Date Posted: 26 May 2025

How to Apply: To apply, send your resume and technical portfolio or project overview to hr@2sdtechnologies.com or visit our LinkedIn careers page.
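The event-based integration work this role describes (Event Grid, Service Bus, Kafka) follows a publish/subscribe shape. As a toy, in-process sketch of that pattern only (topic names and payloads are invented; real systems add brokers, persistence, and delivery guarantees):

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus illustrating the publish/subscribe shape
    that services like Azure Event Grid or Kafka provide at scale."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callable to be invoked for every event on `topic`
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of `topic`, in order
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)  # topic name is illustrative
bus.publish("order.created", {"order_id": 42})
```

The decoupling shown here (publishers never reference subscribers directly) is the property that lets event-driven microservices evolve independently.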
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 3 years
Location: Bengaluru
JobType: full-time

Requirements

About the Role: We are seeking a passionate and skilled AI Engineer to join our innovative engineering team. In this role, you will play a pivotal part in designing, developing, and deploying cutting-edge artificial intelligence solutions with a focus on natural language processing (NLP), computer vision, and machine learning models using TensorFlow and related frameworks. You will work on challenging projects that leverage large-scale data, deep learning, and advanced AI techniques, helping transform business problems into smart, automated, and scalable solutions. If you're someone who thrives in a fast-paced, tech-driven environment and loves solving real-world problems with AI, we'd love to hear from you.

Key Responsibilities:
- Design, develop, train, and deploy AI/ML models using frameworks such as TensorFlow, Keras, and PyTorch.
- Implement solutions across NLP, computer vision, and deep learning domains, using advanced techniques such as transformers, CNNs, LSTMs, OCR, image classification, and object detection.
- Collaborate closely with product managers, data scientists, and software engineers to identify use cases, define architecture, and integrate AI solutions into products.
- Optimize model performance for speed, accuracy, and scalability, using industry best practices in model tuning, validation, and A/B testing.
- Deploy AI models to cloud platforms such as AWS, GCP, and Azure, leveraging their native AI/ML services for efficient and reliable operation.
- Stay up to date with the latest AI research, trends, and technologies, and propose how they can be applied within the company's context.
- Ensure model explainability, reproducibility, and compliance with ethical AI standards.
- Contribute to the development of MLOps pipelines for managing model versioning, CI/CD for ML, and monitoring deployed models in production.

Required Skills & Qualifications:
- 3+ years of hands-on experience building and deploying AI/ML models in production environments.
- Proficiency in TensorFlow and deep learning workflows; experience with PyTorch is a plus.
- Strong foundation in natural language processing (e.g., NER, text classification, sentiment analysis, transformers) and computer vision (e.g., image processing, object recognition).
- Experience deploying and managing AI models on AWS, Google Cloud Platform (GCP), and Microsoft Azure.
- Skilled in Python and relevant libraries such as NumPy, Pandas, OpenCV, Scikit-learn, Hugging Face Transformers, etc.
- Familiarity with model deployment tools such as TensorFlow Serving, Docker, and Kubernetes.
- Experience working in cross-functional teams and agile environments.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.

Preferred Qualifications:
- Experience with MLOps tools and pipelines (MLflow, Kubeflow, SageMaker, etc.).
- Knowledge of data privacy and ethical AI practices.
- Exposure to edge AI or real-time inference systems.
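For the NLP skills this posting lists (text classification, sentiment analysis), production work would typically use scikit-learn or Hugging Face Transformers. Purely to illustrate the underlying idea, here is a toy Naive Bayes sentiment classifier in plain Python; the training examples and labels are invented.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns class counts,
    per-class word counts, and the shared vocabulary."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + Laplace-smoothed log likelihoods."""
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy training set
train = [("great product loved it", "pos"),
         ("terrible broken waste", "neg"),
         ("loved the quality great", "pos"),
         ("broken and terrible", "neg")]
model = train_nb(train)
```

Transformer-based models replace these independence assumptions with contextual embeddings, but the train/predict separation and smoothing concerns carry over.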
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Company Description: MindBrain is a dynamic software company that integrates innovation, education, and strategic workforce solutions. As pioneers in cutting-edge solutions, we shape the future of technology. Our dedication to education empowers individuals with the skills needed to lead in a rapidly evolving landscape. We also connect businesses with the right talent at the right time to drive success through impactful collaborations.

Role Description: This is a contract remote role for a Senior Python Developer at MindBrain. The Senior Python Developer will be responsible for back-end web development, software development, object-oriented programming (OOP), programming, and databases.

Qualifications:
- Fluency in Python, SQL, and NoSQL is essential.
- Experience with common data science libraries such as pandas and Scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required.
- Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud.
- Experience fine-tuning models with different technologies.
- Experience with agentic AI workflows.
- Hunger to learn and stay current with advancements in the AI world.
- Ability to architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance.
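As a small illustration of the back-end database work this role mentions, here is a minimal sketch using Python's standard-library sqlite3. The table schema and sample rows are invented; production code would add connection management, migrations, and likely an ORM or query builder.

```python
import sqlite3

def init_db(conn):
    # Illustrative schema: a single users table
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        "id INTEGER PRIMARY KEY, name TEXT NOT NULL, role TEXT)"
    )

def add_user(conn, name, role):
    # Parameterized query: never interpolate user input into SQL strings
    cur = conn.execute(
        "INSERT INTO users (name, role) VALUES (?, ?)", (name, role)
    )
    return cur.lastrowid

def users_by_role(conn, role):
    rows = conn.execute(
        "SELECT name FROM users WHERE role = ? ORDER BY name", (role,)
    )
    return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")  # in-memory DB for the example
init_db(conn)
add_user(conn, "asha", "developer")
add_user(conn, "ravi", "analyst")
add_user(conn, "meera", "developer")
```

The same parameterized-query discipline applies unchanged when swapping sqlite3 for Postgres drivers such as psycopg.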
Posted 3 weeks ago
7.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Candidate Profile:
- Previous experience in building data science / algorithm-based products is a big advantage.
- Experience in handling healthcare data is desired.

Educational Qualification: Bachelor's / Master's in computer science, data science, or related subjects from a reputable institution.

Typical Experience:
- 7-9 years of industry experience in developing data science models and solutions.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Strong understanding of data structures and algorithms.
- Proven track record of implementing end-to-end data science modelling projects, providing guidance and thought leadership to the team.
- Strong experience in a consulting environment with a do-it-yourself attitude.

Primary Responsibility:
As a data science lead, you will be responsible for leading a team of analysts and data scientists / engineers and delivering end-to-end solutions for pharmaceutical clients. You are expected to participate in client proposal discussions with senior stakeholders and provide thought leadership on technical solutions. You should be expert in all phases of model development (EDA, hypothesis, feature creation, dimension reduction, data set clean-up, training models, model selection, validation, and deployment). You should have a deep understanding of statistical and machine learning methods: classification (logistic regression, SVM, decision tree, random forest, neural network), regression (linear regression, decision tree, random forest, neural network), and classical optimisation (gradient descent, etc.). You must have thorough mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, and NLP, and how to apply them to a business problem. You should be able to help implement ML models in an optimized, sustainable framework, and are expected to gain business understanding of the healthcare domain in order to come up with relevant analytics use cases (e.g., HEOR / RWE / claims data analysis). You are also expected to keep the team up to date on the latest and greatest in ML and AI.

Technical Skill and Expertise:
- Expert-level proficiency in Python/SQL.
- Working knowledge of relational SQL and NoSQL databases, including Postgres and Redshift.
- Extensive knowledge of predictive & machine learning models in order to lead the team in implementing such techniques in real-world scenarios.
- Working knowledge of NLP techniques and of using BERT transformer models to solve complicated text-heavy data structures.
- Working knowledge of deep learning & unsupervised learning.
- Well versed with data structures, pre-processing, feature engineering, and sampling techniques.
- Good statistical knowledge to be able to analyse data.
- Exposure to open-source tools and to working on cloud platforms like AWS and Azure, and being able to use their tools (e.g., Athena, SageMaker) and machine learning libraries, is a must.
- Exposure to LLM-based AI tools (e.g., Llama, ChatGPT, Bard) and prompt engineering is an added advantage.
- Exposure to visualization tools like Tableau and Power BI is an added advantage.

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
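Two of the methods this posting names, logistic regression and gradient descent, combine naturally; a minimal from-scratch sketch is below. The toy dataset is invented, and real projects would reach for scikit-learn rather than hand-rolled optimization.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent on the log-loss.
    X: list of feature lists; y: 0/1 labels."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            # Gradient of log-loss wrt weights is (p - y) * x
            p = sigmoid(sum(wj * xij for wj, xij in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * x for wj, x in zip(w, xi)) + b) >= 0.5 else 0

# Invented, linearly separable toy data: label 1 when the feature sum is large
X = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]
w, b = fit_logistic(X, y)
```

The same loss and gradient underlie scikit-learn's LogisticRegression; the library swaps plain gradient descent for faster solvers such as LBFGS.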
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Lead Consultant - AI Platform Engineer
Career Level: E

Introduction to role: We are looking for a Senior AI Platform Engineer to join IGNITE, our new Enterprise AI platform team. The ideal candidate will have proven experience working in AWS cloud environments, where they devise and deploy large-scale production infrastructure and platforms. The position will involve applying these skills to some of the most exciting machine-learning problems in drug discovery. The successful candidate will be part of a new, close-knit team of deeply technical experts and have the chance to create tools that will advance the standard of healthcare, improving the lives of millions of patients across the globe. Our data science environments will support major AI initiatives such as clinical trial data analysis, knowledge graphs, and imaging & omics for our therapy areas. You will also have a responsibility to help provide the frameworks for data scientists to develop scalable machine learning and predictive models with our growing data science community, safely and robustly. As a strong software leader and an expert in building sophisticated systems, you will be responsible for inventing how we use technology, machine learning, and data to enable AstraZeneca's productivity. You will help envision, build, deploy, and develop our next generation of data engines and tools at scale. You will bridge the gap between science and engineering and function with deep expertise in both worlds! You will also have the opportunity to learn many cutting-edge technologies related to machine learning platforms. You will push the boundaries to test, develop, and implement new ideas, technologies, and opportunities.

Accountabilities:
- Provide the vital infrastructure and platform to support the deployment and monitoring of ML solutions in production.
- Optimize solutions for performance and scalability.
- Collaborate closely with data science teams in developing cutting-edge data science and AI/ML environments and workflows on AWS.
- Liaise with R&D data scientists to understand their challenges and work with them to help productionise ML pipelines, models, and algorithms for innovative science.
- Take responsibility for all aspects of software engineering, from design to implementation, QA, and maintenance.
- Lead technology processes from concept development to completion of project deliverables.
- Liaise with other teams to enhance our technological stack, to enable the adoption of the latest advances in data processing and AI.
- Team recruitment, training provision, and coaching.

Essential Skills/Experience:
- Significant experience with AWS cloud environments is essential. Knowledge of SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS is also essential. Certification in appropriate areas will be viewed favourably.
- Modern DevOps mindset, using best-of-breed DevOps toolchains, such as Docker and Git.
- Experience with infrastructure-as-code technology such as Ansible, Terraform, and CloudFormation.
- Strong software coding skills, with proficiency in Python; however, exceptional ability in any language will be recognized.
- Experience managing an enterprise platform and service, handling new customer demand and feature requests.
- Experience with containers and microservice architectures, e.g., Kubernetes, Docker, and serverless approaches.
- Experience with continuous integration and building continuous delivery pipelines, such as CodePipeline, CodeBuild, and CodeDeploy.
- GxP experience.
- Excellent communication, analytical, and problem-solving skills.

Desirable Skills/Experience:
- Experience building large-scale data processing pipelines, e.g., Hadoop/Spark and SQL.
- Use of data science modelling tools, e.g., R, Python, and data science notebooks (e.g., Jupyter).
- Multi-cloud experience (AWS/Azure/GCP).
- Demonstrable knowledge of building MLOps environments to a production standard.
- Experience in mentoring, coaching, and supporting less experienced colleagues and clients.
- Experience with SAFe agile principles and practices.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

Join a team with the backing and investment to win! You'll be working with cutting-edge technology. This marriage between our purposeful work and the use of high-tech platforms is what sets us apart. Lead the way in digital healthcare. From exploring data and AI to working in the cloud on new technologies. Join a team at the forefront. Help shape and define the technologies of the future, with the backing you need from across the business. Ready to make a difference? Apply now!
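Among the AWS services this role lists, Lambda functions are plain Python handlers and are easy to sketch locally. Below is a minimal example; the event shape ({"features": [...]}) and the placeholder "model" are assumptions for illustration, not a real inference service.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketching a model-scoring endpoint.
    In a real deployment, API Gateway would supply `event` and AWS
    would supply `context`."""
    features = event.get("features", [])
    if not features:
        return {"statusCode": 400,
                "body": json.dumps({"error": "no features provided"})}
    # Placeholder "model": a feature sum stands in for a real prediction
    score = sum(features)
    return {"statusCode": 200, "body": json.dumps({"score": score})}
```

Because the handler is an ordinary function, it can be unit-tested locally with a dict event before packaging it for deployment.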
Posted 3 weeks ago
0.0 - 3.0 years
0 Lacs
Sukhlia, Indore, Madhya Pradesh
Remote
Job Title: AWS & DevOps Engineer Department: DevOps Location: Indore Job Type: Full-time Experience: 3-5 years Notice Period: 0-15 days (immediate joiners preferred) Work Arrangement: On-site (Work from Office) Advantal Technologies is looking for a skilled AWS & DevOps Engineer to help build and manage the cloud infrastructure. This role involves designing scalable infrastructure, automating deployments, enforcing security, and supporting a hybrid (AWS + open-source) deployment strategy. Key Responsibilities: AWS Cloud Infrastructure: · Design, provision, and manage secure and scalable cloud architecture on AWS. · Configure and manage core services: VPC, EC2, S3, RDS (PostgreSQL), Lambda, CloudFront, Cognito, and IAM. · Deploy AI models using Amazon SageMaker for inference at scale. · Manage API integrations via Amazon API Gateway and AWS WAF. DevOps & Automation: · Implement CI/CD pipelines using AWS CodePipeline, GitHub Actions, or GitLab CI. · Containerize backend applications using Docker and orchestrate with AWS ECS/Fargate or Kubernetes (for on-prem/hybrid). · Use Terraform or AWS CloudFormation for Infrastructure as Code (IaC). · Monitor applications using CloudWatch, Security Hub, and CloudTrail. Security & Compliance: · Implement IAM policies and KMS key management, and enforce Zero Trust architecture. · Configure S3 object lock, audit logs, and data classification controls. · Support GDPR/HIPAA-ready compliance setup via AWS Config, GuardDuty, and Security Hub. Required Skills & Experience: Must-Have · 3–5 years of hands-on experience in AWS infrastructure and services. · Proficiency with Terraform, CloudFormation, or other IaC tools. · Experience with Docker, CI/CD pipelines, and cloud networking (VPC, NAT, Route 53). · Strong understanding of DevSecOps principles and AWS security best practices. · Experience supporting production-grade SaaS applications. 
Nice-to-Have: · Exposure to AI/ML model deployment (especially via SageMaker or containerized APIs). · Knowledge of multi-tenant SaaS infrastructure patterns. · Experience with Vault, Keycloak, or open-source IAM/security stacks for non-AWS environments. · Familiarity with Kubernetes (EKS or self-hosted). Tools & Stack You'll Use: · AWS (Lambda, RDS, S3, SageMaker, Cognito, CloudFront, CloudWatch, API Gateway) · Terraform, Docker, GitHub Actions · CI/CD: GitHub, GitLab, AWS CodePipeline · Monitoring: CloudWatch, GuardDuty, Prometheus (non-AWS) · Security: KMS, IAM, Vault Please share resume to hr@advantal.net Job Types: Full-time, Permanent Pay: ₹261,624.08 - ₹1,126,628.25 per year Benefits: Paid time off Provident Fund Work from home Schedule: Day shift Monday to Friday Ability to commute/relocate: Sukhlia, Indore, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred) Experience: AWS DevOps: 3 years (Required) Work Location: In person Speak with the employer +91 9131295441 Expected Start Date: 02/06/2025
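For the Terraform/CloudFormation IaC work described above, a CloudFormation template is just structured JSON/YAML; one way to see the shape is to build a minimal template as a Python dict. The logical ID and bucket name below are illustrative placeholders.

```python
import json

def s3_bucket_template(bucket_name):
    """Minimal CloudFormation template (as a dict) declaring a single
    KMS-encrypted S3 bucket. Names are placeholders for illustration."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {  # logical ID, referenced elsewhere in a real stack
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault":
                                 {"SSEAlgorithm": "aws:kms"}}
                        ]
                    },
                },
            }
        },
    }

template = s3_bucket_template("example-ml-artifacts")
rendered = json.dumps(template, indent=2)  # what you would hand to CloudFormation
```

Terraform expresses the same resource in HCL; either way, keeping the definition in version control is what makes the infrastructure reviewable and reproducible.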
Posted 3 weeks ago
SageMaker is a rapidly growing field in India, with many companies looking to hire professionals with expertise in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.
If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:
The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the SageMaker field, a typical career progression may look like this:
In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:
Here are 25 interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:
What is a SageMaker notebook instance?
Medium:
What is the difference between SageMaker Ground Truth and SageMaker Processing?
Advanced:
As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!
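To prepare hands-on for questions like the notebook-instance one above, it helps to know the boto3 request shape. The sketch below only builds the parameters; the instance name, account ID, and role ARN are placeholders, and an actual create_notebook_instance call requires AWS credentials and an existing IAM role.

```python
def notebook_instance_params(name, role_arn, instance_type="ml.t3.medium"):
    """Parameter dict for boto3's sagemaker create_notebook_instance.
    A real call would look like:
        boto3.client("sagemaker").create_notebook_instance(**params)
    All identifier values here are illustrative placeholders."""
    return {
        "NotebookInstanceName": name,
        "InstanceType": instance_type,
        "RoleArn": role_arn,          # IAM role SageMaker assumes on your behalf
        "VolumeSizeInGB": 5,          # attached EBS volume for notebook storage
    }

params = notebook_instance_params(
    "demo-notebook",
    "arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder ARN
)
```

Separating payload construction from the API call, as here, also makes the request easy to unit-test without touching AWS.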