
643 SageMaker Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

Responsibilities:
- Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector databases, embedding and reranking models, governance and observability systems, and guardrails).
- Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
- Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
- Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas).
- Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
- Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
- Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
- Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
- 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
- Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
- Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
- Proficiency in scripting languages such as Python and Bash (Go or similar is a nice plus).
- Experience with monitoring, logging, and alerting systems for AI/ML workloads.
- Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
- Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
- Familiarity with prompt engineering, model fine-tuning, and inference serving.
- Experience with secure AI deployment and compliance frameworks.
- Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
- Works with a high level of initiative, accuracy, and attention to detail.
- Prioritizes multiple assignments effectively and meets established deadlines.
- Interacts successfully, efficiently, and professionally with staff and customers.
- Excellent organizational skills and critical thinking, ranging from moderately to highly complex problems.
- Flexibility in meeting the business needs of the customer and the company.
- Works creatively and independently with latitude and minimal supervision.
- Uses experience and judgment to accomplish assigned goals; experienced in navigating organizational structure.
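The CI/CD pipelines this role describes typically gate model promotion on automated evaluation before deployment. A minimal sketch of such a gate in Python; the metric names, thresholds, and function name are illustrative assumptions, not part of the posting:

```python
# Minimal sketch of a CI/CD evaluation gate for model promotion.
# Metric names and thresholds below are hypothetical examples.

def evaluation_gate(candidate_metrics, baseline_metrics, thresholds):
    """Return (approved, reasons). A candidate model must meet each
    absolute threshold and must not regress versus the production baseline."""
    reasons = []
    for metric, minimum in thresholds.items():
        value = candidate_metrics.get(metric)
        if value is None or value < minimum:
            reasons.append(f"{metric}={value} below threshold {minimum}")
        elif value < baseline_metrics.get(metric, float("-inf")):
            reasons.append(f"{metric}={value} regresses vs baseline")
    return (len(reasons) == 0, reasons)

approved, why = evaluation_gate(
    candidate_metrics={"accuracy": 0.91, "recall": 0.88},
    baseline_metrics={"accuracy": 0.90, "recall": 0.85},
    thresholds={"accuracy": 0.85, "recall": 0.80},
)
print(approved)  # True: candidate clears both thresholds and the baseline
```

In a real pipeline this check would run as a job stage (e.g., in GitHub Actions or ArgoCD) and block the deploy step on failure.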

Posted 3 days ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai Metropolitan Region

Remote


Job Title: Data Scientist - Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we'd love to hear from you.

Key Responsibilities
- Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, and anomaly detection.
- Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data.
- Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks.
- Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization.
- Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions.
- Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness.
- Optimize image pipelines for performance, scalability, and edge/cloud deployment.

Key Requirements
- 3-4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects.
- Proven expertise in Python and common data science libraries: NumPy, Pandas, Scikit-learn, etc.
- Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
- Experience with generative AI models such as GANs, Stable Diffusion, or ControlNet for image generation or augmentation.
- Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving.
- Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC).
- Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment.
- Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows.
- Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects.
- Strong problem-solving skills, curiosity, and the ability to work independently in a fast-paced environment.

Bonus / Preferred Skills
- Experience with remote sensing and working with satellite or drone imagery.
- Exposure to MLOps practices and tools such as Kubeflow, Airflow, or SageMaker Pipelines.
- Knowledge of solar technologies, photovoltaic systems, or renewable energy.
- Familiarity with edge computing for vision applications on IoT devices or drones.

(ref:hirist.tech)
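Object-detection work like the fault detection described above is usually evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal pure-Python sketch; the box coordinates are made-up examples:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 detections overlapping in a 5x10 strip: IoU = 50 / 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333
```

Frameworks such as Detectron2 and the YOLO family apply this same measure at scale when computing mAP, typically matching predictions to ground truth at IoU thresholds like 0.5.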

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 days work from office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
- Lead and mentor a high-performing AI/ML team.
- Design and execute AI/ML strategies aligned with business goals.
- Collaborate with product and engineering teams to identify impactful AI opportunities.
- Build, train, fine-tune, and deploy ML models in production environments.
- Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
- Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
- Handle containerization and orchestration using Docker and Kubernetes.
- Optimize GPU/TPU resources for training and inference tasks.
- Develop efficient RAG pipelines with low latency and high retrieval accuracy.
- Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise

Cloud Computing & Deployment:
- Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
- Familiarity with cloud-native services such as AWS SageMaker, Google Vertex AI, or Azure ML.
- Expertise in Docker and Kubernetes for containerized deployments.
- Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.

Machine Learning & Deep Learning:
- Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.
- Experience with MLOps tools for integration, monitoring, and automation.
- Expertise in pre-trained models, transfer learning, and designing custom architectures.

Programming & Software Engineering:
- Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
- Backend/API development with FastAPI, Flask, or Django.
- Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
- Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).

Scalable AI Systems:
- Proven ability to build AI-driven applications at scale, handling large datasets, high-throughput requests, and real-time inference.
- Knowledge of distributed computing: Apache Spark, Dask, Ray.

Model Monitoring & Optimization:
- Hands-on experience with model compression, quantization, and pruning.
- A/B testing and performance tracking in production.
- Knowledge of model retraining pipelines for continuous learning.

Resource Optimization:
- Efficient use of compute resources: GPUs, TPUs, CPUs.
- Experience with serverless architectures to reduce cost.
- Auto-scaling and load balancing for high-traffic systems.

Problem-Solving & Collaboration:
- Translate complex ML models into user-friendly applications.
- Work effectively with data scientists, engineers, and product teams.
- Write clear technical documentation and architecture reports.

(ref:hirist.tech)
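The model compression and quantization mentioned under "Model Monitoring & Optimization" commonly means mapping float weights to 8-bit integers plus a scale factor. A toy pure-Python sketch of symmetric post-training int8 quantization; the example weights are invented:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale=0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.8, -1.27, 0.05, 0.0, 1.0]       # hypothetical layer weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)                      # [80, -127, 5, 0, 100]
print(max_err <= scale / 2)   # True: error bounded by half a quantization step
```

Production tooling (e.g., PyTorch quantization or TensorRT) adds per-channel scales, calibration, and fused int8 kernels, but the storage-saving idea is the same.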

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
DATA SCIENTIST

Job Summary
This Data Scientist creates and implements advanced analytics models and solutions to yield predictive and prescriptive insights from large volumes of structured and unstructured data. The position works with a team responsible for researching and implementing predictive requirements, leveraging industry-standard machine learning and data visualization tools to draw insights that empower confident decisions and product creation, and uses emerging tools and technologies available in on-prem and cloud environments. The position transforms data and analytics requirements into predictive solutions, provides data literacy on a range of machine learning systems at UPS, and identifies opportunities for moving from descriptive to predictive and prescriptive solutions, which become inputs to department and project teams on their decisions supporting projects.

Responsibilities
- Defines key data sources from UPS and external sources to deliver models.
- Develops and implements pipelines that facilitate data cleansing, transformation, and enrichment from multiple internal and external sources, serving as inputs for data and analytics systems.
- For larger teams, works with data engineering teams to validate and test data and model pipelines identified during proofs of concept.
- Develops data designs based on exploratory analysis of large amounts of data to discover trends and patterns that meet stated business needs.
- Defines model key performance indicator (KPI) expectations; validates, tests, and retrains existing models to meet business objectives.
- Reviews and creates repeatable solutions through written project documentation, process flowcharts, logs, and commented clean code to produce datasets usable in analytics and/or predictive modeling.
- Synthesizes insights and documents findings through clear and concise presentations and reports to stakeholders; presents operationalized analytic findings and provides recommendations.
- Incorporates best practices in statistical modeling, machine learning algorithms, distributed computing, cloud-based AI technologies, and runtime performance tuning, with the goal of deployment and market introduction.
- Leverages emerging tools and technologies, together with open-source or vendor products, to create and deliver insights that support predictive and prescriptive solutions.

Qualifications
- Expertise in R, SQL, Python, and/or other high-level languages.
- Exploratory data analysis (EDA), data engineering, and development of advanced analytics models.
- Experience developing AI and ML on platforms such as Vertex AI, Databricks, or SageMaker, and familiarity with frameworks such as PyTorch, TensorFlow, and Keras.
- Experience applying models to small- to medium-scale problems.
- Strong analytical skills and attention to detail.
- Able to engage key business and executive-level stakeholders to translate business problems into a high-level analytics solution approach.
- Expertise in statistical techniques, machine learning, and/or operations research and their application in business.
- Deep understanding of data management pipelines and experience launching moderate-scale advanced analytics projects in production.
- Demonstrated experience with cloud AI technologies and knowledge of both Linux/Unix and Windows environments.
- Experience implementing open-source technologies and cloud services, with or without enterprise data science platforms.
- Core AI/machine learning knowledge and its application in supervised and unsupervised learning domains.
- Familiarity with Java or C++ is a plus.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field (mathematics, computer science, physics, economics, engineering, statistics, operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Last Day Posted: 2/25/2024
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
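The model validation and retraining responsibilities above usually rely on a drift statistic comparing live data to the training distribution; the Population Stability Index (PSI) is one common choice (not named in the posting). A small pure-Python sketch with invented score samples:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two samples of a score/feature.
    Bin edges come from the expected (training-time) distribution."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(psi(train_scores, live_scores) < 0.1)  # True: identical samples, no drift
```

A frequently quoted rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift warranting retraining.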

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description
DATA SCIENTIST

Job Summary
This Data Scientist creates and implements advanced analytics models and solutions to yield predictive and prescriptive insights from large volumes of structured and unstructured data. The position works with a team responsible for researching and implementing predictive requirements, leveraging industry-standard machine learning and data visualization tools to draw insights that empower confident decisions and product creation, and uses emerging tools and technologies available in on-prem and cloud environments. The position transforms data and analytics requirements into predictive solutions, provides data literacy on a range of machine learning systems at UPS, and identifies opportunities for moving from descriptive to predictive and prescriptive solutions, which become inputs to department and project teams on their decisions supporting projects.

Responsibilities
- Defines key data sources from UPS and external sources to deliver models.
- Develops and implements pipelines that facilitate data cleansing, transformation, and enrichment from multiple internal and external sources, serving as inputs for data and analytics systems.
- For larger teams, works with data engineering teams to validate and test data and model pipelines identified during proofs of concept.
- Develops data designs based on exploratory analysis of large amounts of data to discover trends and patterns that meet stated business needs.
- Defines model key performance indicator (KPI) expectations; validates, tests, and retrains existing models to meet business objectives.
- Reviews and creates repeatable solutions through written project documentation, process flowcharts, logs, and commented clean code to produce datasets usable in analytics and/or predictive modeling.
- Synthesizes insights and documents findings through clear and concise presentations and reports to stakeholders; presents operationalized analytic findings and provides recommendations.
- Incorporates best practices in statistical modeling, machine learning algorithms, distributed computing, cloud-based AI technologies, and runtime performance tuning, with the goal of deployment and market introduction.
- Leverages emerging tools and technologies, together with open-source or vendor products, to create and deliver insights that support predictive and prescriptive solutions.

Qualifications
- Expertise in R, SQL, Python, and/or other high-level languages.
- Exploratory data analysis (EDA), data engineering, and development of advanced analytics models.
- Experience developing AI and ML on platforms such as Vertex AI, Databricks, or SageMaker, and familiarity with frameworks such as PyTorch, TensorFlow, and Keras.
- Experience applying models to small- to medium-scale problems.
- Strong analytical skills and attention to detail.
- Able to engage key business and executive-level stakeholders to translate business problems into a high-level analytics solution approach.
- Expertise in statistical techniques, machine learning, and/or operations research and their application in business.
- Deep understanding of data management pipelines and experience launching moderate-scale advanced analytics projects in production.
- Demonstrated experience with cloud AI technologies and knowledge of both Linux/Unix and Windows environments.
- Experience implementing open-source technologies and cloud services, with or without enterprise data science platforms.
- Core AI/machine learning knowledge and its application in supervised and unsupervised learning domains.
- Familiarity with Java or C++ is a plus.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field (mathematics, computer science, physics, economics, engineering, statistics, operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Last Day Posted: 2/25/2024
Contract Type: Permanent (CDI)
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 3 days ago

Apply


0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Machine Learning Engineer

In this role, you'll drive and embed the deployment, automation, maintenance, and monitoring of machine learning models and algorithms. Day-to-day, you'll make sure that models and algorithms work effectively in a production environment while promoting data literacy education with business stakeholders. If you see opportunities where others see challenges, you'll find that this solutions-driven role is your chance to solve new problems and enjoy excellent career development.

What you'll do
Your daily responsibilities will include collaborating with colleagues to design and develop advanced machine learning products that power our group for our customers. You'll also codify and automate complex machine learning model productions, including pipeline optimisation. We'll expect you to transform advanced data science prototypes and apply machine learning algorithms and tools, and to plan, manage, and deliver larger or more complex projects involving a variety of colleagues and teams across our business.

You'll also be responsible for:
- Understanding the complex requirements and needs of business stakeholders, developing good relationships, and showing how machine learning solutions can support our business strategy
- Working with colleagues to productionise machine learning models, including pipeline design, development, testing, and deployment, so the original intent is carried over to production
- Creating frameworks to ensure robust monitoring of machine learning models within a production environment, making sure they deliver quality and performance, and understanding and addressing any shortfalls, for instance through retraining
- Leading direct reports and wider teams in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes

The skills you'll need
To be successful in this role, you'll need a good academic background in a STEM discipline, such as Mathematics, Physics, Engineering, or Computer Science, and the ability to use data to solve business problems, from hypotheses through to resolution. We'll look to you to have at least twelve years of experience with machine learning on large datasets, as well as experience building, testing, supporting, and deploying advanced machine learning models into a production environment using modern CI/CD tools, including Git, TeamCity, and CodeDeploy.

You'll also need:
- A good understanding of machine learning approaches and algorithms, such as supervised and unsupervised learning, deep learning, and NLP, with a strong focus on model development, deployment, and optimization
- Experience using Python with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow or PyTorch
- An understanding of PySpark for distributed data processing and manipulation, along with AWS (Amazon Web Services) services including EC2, S3, Lambda, SageMaker, and other cloud tools
- Experience with data processing frameworks such as Apache Kafka and Apache Airflow, containerization technologies such as Docker, and orchestration tools such as Kubernetes
- Experience building GenAI solutions to automate workflows and improve productivity and efficiency
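The production-monitoring frameworks this role calls for often boil down to tracking a model's quality over a rolling window and flagging it for retraining when it drops below an agreed threshold. A minimal sketch; the class name, window size, and threshold are illustrative assumptions:

```python
from collections import deque

class ModelMonitor:
    """Rolling-window monitor: flags a model for retraining when its
    windowed accuracy drops below a threshold. Names are illustrative."""

    def __init__(self, window=100, threshold=0.8):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool):
        """Record whether one prediction was correct."""
        self.outcomes.append(correct)

    def needs_retraining(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence over a full window yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = ModelMonitor(window=10, threshold=0.8)
for correct in [True] * 7 + [False] * 3:  # 70% accuracy over the window
    monitor.record(correct)
print(monitor.needs_retraining())  # True: 0.7 < 0.8
```

In practice the signal would feed an alerting stack and a retraining pipeline rather than a boolean return value, but the windowed-threshold logic is the core of it.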

Posted 3 days ago

Apply

12.0 - 18.0 years

0 Lacs

Tamil Nadu, India

Remote


Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments is desirable.

Responsibilities

AI-Powered Software Development & API Integration
- Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization.
- Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and retrieval-augmented generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making.

Data Engineering & AI Model Performance Optimization
- Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake.
- Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max-token settings).
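The vector retrieval step above (FAISS, Pinecone, or ChromaDB) reduces to nearest-neighbour search over embeddings, typically ranked by cosine similarity. A toy pure-Python sketch; the document IDs and 3-dimensional embeddings are invented (real embedding models emit hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query, index, k=2):
    """Exact nearest-neighbour search by cosine similarity; libraries
    like FAISS approximate this efficiently at scale."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical document embeddings for a RAG corpus
index = {
    "claims-faq": (0.9, 0.1, 0.0),
    "fraud-guide": (0.1, 0.9, 0.1),
    "billing-doc": (0.8, 0.2, 0.1),
}
print(top_k((1.0, 0.0, 0.0), index, k=2))  # ['claims-faq', 'billing-doc']
```

In a RAG pipeline the returned documents would be stuffed into the LLM prompt as context; the retrieval quality therefore directly bounds answer quality.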
Microservices, APIs & Security
- Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API key authentication.
- Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices.

AI & Data Engineering Collaboration
- Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions.
- Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation.

Cross-Functional Coordination and Communication
- Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2).
- Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency.

Mentorship & Knowledge Sharing
- Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions.

Education & Experience Required
- 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions.
- Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles.
- Bachelor's Degree or equivalent in Computer Science, Engineering, Data Science, or a related field.
Proficiency in full-stack development with expertise in Python (preferred for AI) and Java. Experience with structured & unstructured data: SQL (PostgreSQL, MySQL, SQL Server) NoSQL (OpenSearch, Redis, Elasticsearch) Vector Databases (FAISS, Pinecone, ChromaDB) Cloud & AI Infrastructure AWS: Lambda, SageMaker, ECS, S3 Azure: Azure OpenAI, ML Studio GenAI Frameworks & Tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI. Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization. Proficiency in AI model evaluation (BLEU, ROUGE, BERT Score, cosine similarity) and responsible AI deployment. Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams. Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption. About Athenahealth Here’s our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. What’s unique about our locations? From an historic, 19th century arsenal to a converted, landmark power plant, all of athenahealth’s offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together. Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That’s another story. athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal.
We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. 
With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Chennai Area

On-site

Linkedin logo

Chennai / Bangalore / Hyderabad Who We Are Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 4000+ technologists and consultants are based in the US, Canada, the UK, India, Singapore and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are a Great Place to Work-Certified™ (2022-24), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, ISG and others. We have been ranked among the ‘Best’ and ‘Fastest Growing’ analytics firms lists by Inc., Financial Times, Economic Times and Analytics India Magazine. Curious about the role? What would your typical day look like? We are looking for a Senior Analyst or Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will: Engage with clients to understand their business context. Translate business problems and technical constraints into technical requirements for the desired analytics solution. Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes. What do we expect? 3+ years of experience with at least 1 year of relevant DS experience.
Proficient in Python, PySpark, and machine learning (experience in productionizing models) Proficiency in AWS cloud technologies is mandatory Experience and good understanding of SageMaker/Databricks Experience with MLOps frameworks (e.g., MLflow or Kubeflow) Follows good software engineering practices and has an interest in building reliable and robust software. Good understanding of DS concepts and the DS model lifecycle. Working knowledge of Linux or Unix environments, ideally in a cloud environment. Model deployment/model monitoring experience (preferably in the banking domain) CI/CD pipeline creation is good to have Excellent written and verbal communication skills B.Tech from a Tier-1 college / M.S. or M.Tech is preferred You are important to us, let’s stay connected! Every individual comes with a different set of skills and qualities so even if you don’t tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire. Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry. Additional Benefits: Health insurance (self & family), virtual wellness platform, and knowledge communities.

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Role: MLOps Engineer Location: Chennai - CKC Mode of Interview: In Person Date: 7th June 2025 (Saturday) Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI Responsibilities Model Deployment, Model Monitoring, Model Retraining Deployment pipeline, Inference pipeline, Monitoring pipeline, Retraining pipeline Drift Detection, Data Drift, Model Drift Experiment Tracking MLOps Architecture REST API publishing Job Responsibilities Research and implement MLOps tools, frameworks and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile and automated approach to Data Science. Conduct internal training and presentations about MLOps tools’ benefits and usage. Required Experience And Qualifications Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and the Unix command line toolkit. Experience in CI/CD/CT pipelines implementation. Experience with cloud platforms - preferably AWS - would be an advantage.
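The data-drift detection listed above is commonly scored with the Population Stability Index (PSI). A minimal, library-free sketch follows; the bin count and the usual 0.1/0.25 thresholds are illustrative conventions, not anything specific to this role:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        return [max(c / len(sample), 1e-4) for c in counts]  # floor avoids log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]          # training-time distribution
shifted  = [5.0 + 0.1 * i for i in range(100)]    # live data, shifted upward
print(psi(baseline, baseline))         # 0.0: no drift against itself
print(psi(baseline, shifted) > 0.25)   # True: major drift
```

A retraining pipeline would run a check like this per feature on a schedule and trigger the retraining job when the score crosses the chosen threshold.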

Posted 3 days ago

Apply

15.0 years

0 Lacs

Delhi, India

On-site

Linkedin logo

About The Role We are seeking a highly experienced Principal Presales Architect with deep expertise in AWS cloud services to lead strategic engagements with enterprise customers. This role is at the intersection of technology leadership and customer engagement, requiring a deep understanding of IaaS, PaaS, SaaS, and data platform services, with a focus on delivering business value through cloud adoption and digital transformation. You will be a key contributor to the sales and solutioning lifecycle, working alongside business development, account executives, product, and engineering teams. This role also involves driving cloud-native architectures, conducting deep technical workshops, and influencing executive stakeholders. Key Responsibilities Presales & Customer Engagement Act as the technical lead in strategic sales opportunities, supporting cloud transformation deals across verticals. Design and present end-to-end cloud solutions tailored to client needs, with a focus on AWS architectures (compute, networking, storage, databases, analytics, security, and DevOps). Deliver technical presentations, POCs, and solution workshops to executive and technical stakeholders. Collaborate with sales teams to develop proposals, RFP responses, solution roadmaps, and TCO/ROI analysis. Drive early-stage discovery sessions to identify business objectives, technical requirements, and success metrics. Own the solution blueprint and ensure alignment across technical, business, and operational teams. Architecture & Technology Leadership Architect scalable, secure, and cost-effective solutions using AWS services including EC2, Lambda, S3, RDS, Redshift, EKS, and others. Lead design of data platforms and AI/ML pipelines, leveraging AWS services like Redshift, SageMaker, Glue, Athena, EMR, and integrating with 3rd party tools when needed. Evaluate and recommend multi-cloud integration strategies (Azure/GCP experience is a strong plus).
Guide customers on cloud migration, modernization, DevOps, and CI/CD pipelines. Collaborate with product and delivery teams to align proposed solutions with delivery capabilities and innovations. Stay current with industry trends, emerging technologies, and AWS service releases, integrating new capabilities into customer solutions. Required Skills & Qualifications Technical Expertise 15+ years in enterprise IT or architecture roles, with 10+ years in cloud solutioning/presales, primarily focused on AWS. In-depth knowledge of AWS IaaS/PaaS/SaaS, including services across compute, storage, networking, databases, security, AI/ML, and observability. Hands-on experience in architecting and deploying data lake/data warehouse solutions using Redshift, Glue, Lake Formation, and other data ecosystem components. Proficiency in designing AI/ML solutions using SageMaker, Bedrock, TensorFlow, PyTorch, or equivalent frameworks. Understanding of multi-cloud architectures and hybrid cloud solutions; hands-on experience with Azure or GCP is an advantage. Strong command of solution architecture best practices, cost optimization, cloud security, and compliance frameworks. Presales & Consulting Skills Proven success in technical sales roles involving complex cloud solutions and data platforms. Strong ability to influence C-level executives and technical stakeholders. Excellent communication, presentation, and storytelling skills to articulate complex technical solutions in business terms. Experience with proposal development, RFx responses, and pricing strategy. Strong analytical and problem-solving capabilities with a customer-first mindset.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Summary We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services, deep expertise in cloud architecture, and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company’s service offerings. Key Responsibilities Pre-Sales and Solutioning: Engage with enterprise customers to understand their technical requirements and business objectives. Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security. Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs. Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences. Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations. Conduct technical workshops, proof of concepts (PoCs), and technical validations. Technical Expertise Deep hands-on knowledge and architecture experience with AWS services: IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc. PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions. SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty. Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services including hybrid architectures is a plus. Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
Proficiency in architecting solutions that meet scalability, availability, and security requirements. Data Platform & AI/ML Experience in designing data lakes, data pipelines, and analytics platforms on AWS. Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures. Familiarity with AI/ML solutions using SageMaker, Amazon Comprehend, or other ML frameworks. Understanding of data governance, data cataloging, and security best practices for analytics workloads. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles. AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred. Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams. Ability to translate technical concepts into business value propositions. Excellent communication, proposal writing, and stakeholder management skills. Nice To Have Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI). Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations). Exposure to AI/ML MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description We’re seeking a hands-on AI/ML Engineer with deep expertise in large language models, retrieval-augmented generation (RAG), and cloud-native ML development on AWS. You'll be a key driver in building scalable, intelligent learning systems powered by cutting-edge AI and robust AWS infrastructure. If you’re passionate about combining NLP, deep learning, and real-world application at scale—this is the role for you. 4+ years of specialized experience in AI/ML is required. Core Skills & Technologies LLM Ecosystem & APIs • OpenAI, Anthropic, Cohere • Hugging Face Transformers • LangChain, LlamaIndex (RAG orchestration) Vector Databases & Indexing • FAISS, Pinecone, Weaviate AWS-Native & ML Tooling • Amazon SageMaker (training, deployment, pipelines) • AWS Lambda (event-driven workflows) • Amazon Bedrock (foundation model access) • Amazon S3 (data lakes, model storage) • AWS Step Functions (workflow orchestration) • AWS API Gateway & IAM (secure ML endpoints) • CloudWatch, Athena, DynamoDB (monitoring, analytics, structured storage) Languages & ML Frameworks • Python (primary), PyTorch, TensorFlow • NLP, RAG systems, embeddings, prompt engineering What You’ll Do • Model Development & Tuning o Designs architecture for complex AI systems and makes strategic technical decisions o Evaluates and selects appropriate frameworks, techniques, and approaches o Fine-tune and deploy LLMs and custom models using AWS SageMaker o Build RAG pipelines with LlamaIndex/LangChain and vector search engines • Scalable AI Infrastructure o Architect distributed model training and inference pipelines on AWS o Design secure, efficient ML APIs with Lambda, API Gateway, and IAM • Product Integration o Leads development of novel solutions to challenging problems o Embed intelligent systems (tutoring agents, recommendation engines) into learning platforms using Bedrock, SageMaker, and AWS-hosted endpoints • Rapid Experimentation o Prototype multimodal and few-shot learning workflows using 
AWS services o Automate experimentation and A/B testing with Step Functions and SageMaker Pipelines • Data & Impact Analysis o Leverage S3, Athena, and CloudWatch to define metrics and continuously optimize AI performance • Cross-Team Collaboration o Work closely with educators, designers, and engineers to deliver AI features that enhance student learning o Mentor junior engineers and provide technical leadership Who You Are • Deeply Technical: Strong foundation in machine learning, deep learning, and NLP/LLMs • AWS-Fluent: Extensive experience with AWS ML services (especially SageMaker, Lambda, and Bedrock) • Product-Minded: You care about user experience and turning ML into real-world value • Startup-Savvy: Comfortable with ambiguity, fast iterations, and wearing many hats • Mission-Aligned: Passionate about education, human learning, and AI for good Bonus Points • Hands-on experience fine-tuning LLMs or building agentic systems using AWS • Open-source contributions in AI/ML or NLP communities • Familiarity with AWS security best practices (IAM, VPC, private endpoints)
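Calls to hosted model endpoints (SageMaker, Bedrock, OpenAI) are routinely throttled, so a retry wrapper with exponential backoff is a standard building block behind the "secure, efficient ML APIs" mentioned above. A minimal sketch follows; the function names and delay schedule are illustrative, and the sleep function is injectable so the behavior can be tested without real waiting:

```python
import time

def call_with_backoff(fn, retries=4, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky call with exponential backoff (0.5s, 1s, 2s, ...).
    Re-raises the last error once all retries are exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Simulate an endpoint that is throttled twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

delays = []  # capture requested sleep durations instead of actually sleeping
result = call_with_backoff(flaky, sleep=delays.append)
print(result, delays)  # ok [0.5, 1.0]
```

Real clients would typically catch only the provider's throttling exception and add jitter to the delay; both are omitted here to keep the sketch short.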

Posted 3 days ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Position: L3 AWS Cloud Engineer Experience: 5+ Years Location: Mumbai Employment Type: Full-Time Job Summary We are seeking a highly skilled L3 AWS Cloud Engineer with 5+ years of experience to lead the design, implementation, and optimization of complex AWS cloud architectures. The candidate will have deep expertise in hybrid (on-prem to cloud) networking, AWS connectivity, and advanced AWS services like WAF, Shield, Advanced Shield, EKS, Data Services, and CloudFront CDN, ensuring enterprise-grade solutions. Key Responsibilities Architect and implement hybrid cloud solutions integrating on-premises and AWS environments. Design and manage advanced AWS networking (Direct Connect, Transit Gateway, VPN). Lead deployment and management of Kubernetes clusters using AWS EKS. Implement and optimize security solutions using AWS WAF, Shield, and Advanced Shield. Architect data solutions using AWS Data Services (Redshift, Glue, Athena). Optimize content delivery using AWS CloudFront and advanced CDN configurations. Drive automation of cloud infrastructure using IaC (CloudFormation, Terraform, CDK). Provide leadership in incident response, root cause analysis, and performance optimization. Mentor junior engineers and collaborate with cross-functional teams on cloud strategies. Required Skills and Qualifications 5+ years of experience in cloud engineering, with at least 4 years focused on AWS. Deep expertise in hybrid networking and connectivity (Direct Connect, Transit Gateway, Site-to-Site VPN). Advanced knowledge of AWS EKS for container orchestration and management. Proficiency in AWS security services (WAF, Shield, Advanced Shield, GuardDuty). Hands-on experience with AWS Data Services (Redshift, Glue, Athena). Expertise in optimizing AWS CloudFront for global content delivery. Strong scripting skills (Python, Bash) and IaC expertise (CloudFormation, Terraform, CDK). Experience with advanced monitoring and analytics (CloudWatch, ELK).
Experience with multi-region and multi-account AWS architectures. AWS Certified Solutions Architect – Professional Preferred Skills Knowledge of serverless frameworks and event-driven architectures. Familiarity with machine learning workflows on AWS (SageMaker, ML services).

Posted 3 days ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Job Summary We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services, deep expertise in cloud architecture, and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company’s service offerings. Key Responsibilities Pre-Sales and Solutioning: Engage with enterprise customers to understand their technical requirements and business objectives. Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security. Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs. Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences. Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations. Conduct technical workshops, proof of concepts (PoCs), and technical validations. Technical Expertise Deep hands-on knowledge and architecture experience with AWS services: IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc. PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions. SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty. Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services including hybrid architectures is a plus. Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
Proficiency in architecting solutions that meet scalability, availability, and security requirements. Data Platform & AI/ML Experience in designing data lakes, data pipelines, and analytics platforms on AWS. Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures. Familiarity with AI/ML solutions using SageMaker, Amazon Comprehend, or other ML frameworks. Understanding of data governance, data cataloging, and security best practices for analytics workloads. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles. AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred. Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams. Ability to translate technical concepts into business value propositions. Excellent communication, proposal writing, and stakeholder management skills. Nice To Have Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI). Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations). Exposure to AI/ML MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

About Beco Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper—sparking our commitment to “Be Eco” every day. Our mission: “To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time.” Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India’s top marketplaces, retail chains and D2C channels. Why we’re hiring Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco—owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales. Responsibilities Partner with functional leaders to translate business pain-points into AI/ML solutions and automation opportunities. Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring. Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks. Stand up CI/CD + MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases. Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy). Mentor analysts & engineers; evangelise a culture of experimentation and “fail-fast” learning—core to Beco’s GSD (“Get Sh#!t Done”) values.
Must-have Qualifications 3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production. Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering. Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure). Familiarity with modern Gen-AI & NLP stacks (OpenAI, Hugging Face, RAG, vector databases). Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment. Nice-to-haves Exposure to e-commerce or FMCG supply-chain data. Knowledge of finance workflows (Reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce). Experience with vision models (Detectron2, YOLO) and edge deployment. Contributions to open-source ML projects or published papers/blogs. What Success Looks Like After 1 Year 70% reduction in manual reporting hours across finance and ops. Forecast accuracy > 85% at SKU level, slashing stock-outs by 30%. AI chatbot resolves 60% of tickets end-to-end, with CSAT > 4.7/5. At least two new data-products launched that directly boost topline or margin. Life at Beco Purpose-driven team obsessed with measurable climate impact. An “entrepreneurial, accountable, bold” culture—where winning minds precede outside victories.
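The SKU-level forecast-accuracy target above is typically measured as 100% minus MAPE (mean absolute percentage error). A toy sketch with a naive moving-average forecast follows; the sales numbers and window size are made up for illustration:

```python
def moving_average_forecast(history, window=3):
    """Naive forecast: the average of the last `window` observations."""
    return sum(history[-window:]) / window

def mape(actuals, forecasts):
    """Mean absolute percentage error; accuracy is roughly 100% - MAPE."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

# Weekly unit sales for one SKU; backtest the forecast one step at a time.
sales = [100, 104, 98, 102, 101, 99, 103]
forecasts, actuals = [], []
for t in range(3, len(sales)):
    forecasts.append(moving_average_forecast(sales[:t]))
    actuals.append(sales[t])

accuracy = 100.0 - mape(actuals, forecasts)
print(round(accuracy, 1))  # ~98.7 on this stable toy series
```

A real pipeline would swap the moving average for a trained model (Prophet, gradient boosting, etc.) and aggregate MAPE across thousands of SKUs, but the accuracy arithmetic is the same.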

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing... Join Verizon as we continue to grow our industry-leading network to improve the ways people, businesses, and things connect. We are looking for an experienced, talented and motivated AI&ML Engineer to lead AI Industrialization for Verizon. You will also serve as a subject matter expert regarding the latest industry knowledge to improve the Home Product and solutions and/or processes related to Machine Learning, Deep Learning, Responsible AI, Gen AI, Natural Language Processing, Computer Vision and other AI practices. Deploying machine learning models - On Prem, Cloud and Kubernetes environments Driving data-derived insights across the business domain by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives. Creating and implementing data and ML pipelines for model inference, both in real-time and in batches. Architecting, designing, and implementing large-scale AI/ML systems in a production environment. Monitor the performance of data pipelines and make improvements as necessary What We’re Looking For... You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. 
You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and can interact with various partners and multi-functional teams to implement data science-driven business solutions. You'll Need To Have Bachelor's degree with four or more years of relevant work experience. Expertise in advanced analytics/predictive modelling in a consulting role. Experience with all phases of end-to-end analytics projects. Hands-on programming expertise in Python (with libraries like NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch) and R (for specific data analysis tasks) Knowledge of Machine Learning Algorithms - Linear Regression, Logistic Regression, Decision Trees, Random Forests, Support Vector Machines (SVMs), Neural Networks (Deep Learning), Bayesian Networks Data Engineering - Data Cleaning and Preprocessing, Feature Engineering, Data Transformation, Data Visualization Cloud Platforms - AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform Even better if you have one or more of the following: Advanced degree in Computer Science, Data Science, Machine Learning, or a related field. Knowledge of the Home domain with key areas like Smart Home, Digital security and wellbeing Experience with stream-processing systems: Spark-Streaming, Storm etc. #TPDNONCDIOREF Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
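Of the algorithms listed above, logistic regression is the simplest to show end to end. Below is a minimal batch-gradient-descent sketch in pure Python on a toy separable dataset; in practice scikit-learn's LogisticRegression (or SageMaker's built-in linear learner) would be used instead:

```python
import math

def train_logreg(xs, ys, lr=0.5, epochs=1000):
    """Fit a single-feature logistic model p(y=1|x) = sigmoid(w*x + b)
    by batch gradient descent on the log loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad_w += (p - y) * x / n
            grad_b += (p - y) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, x):
    """Class label at the conventional 0.5 probability threshold."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy separable data: small values labelled 0, large values labelled 1.
xs = [0.5, 1.0, 1.5, 4.0, 4.5, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
print([predict(w, b, x) for x in xs])  # [0, 0, 0, 1, 1, 1]
```

The same gradient loop generalizes to many features by replacing `w * x` with a dot product, which is exactly what the library implementations do (with regularization and better optimizers on top).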

Posted 3 days ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Hyderābād

On-site

Senior Data Scientist – Enterprise Analytics Want to be part of the Data & Analytics organization, whose strategic goal is to create a world-class Data & Analytics company by building, embedding, and maturing a data-driven culture across Thomson Reuters? We are looking for a highly motivated individual with strong organizational and technical skills for the position of Senior Data Scientist. You will play a critical role working at the cutting edge of analytics, leveraging predictive models, machine learning, and generative AI to drive business insights, facilitate informed decision-making, and help Thomson Reuters rapidly scale data-driven initiatives. About the Role In this opportunity as Senior Data Scientist, you will: Engage with stakeholders, business analysts, and the project team to understand data requirements. Work in multiple business domain areas including Customer Service, Finance, Sales, and Marketing. Design analytical frameworks to provide insights into a business problem. Explore and visualize multiple data sets to understand the data available and prepare data for problem solving. Build machine learning models and/or statistical solutions. Build predictive models and generative AI solutions. Use Natural Language Processing to extract insight from text. Design database models (if a data mart or operational data store is required to aggregate data for modeling). Design visualizations and build dashboards in Tableau and/or PowerBI. Extract business insights from the data and models. Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards. About You You're a fit for the role of Senior Data Scientist if your background includes: Experience: 6-8 years in the field of Machine Learning & AI. A minimum of 3 years of experience working in the data science domain. Degree preferred in a quantitative field (Computer Science, Statistics, etc.) 
Both technical and business acumen are required. Technical skills: Proficient in machine learning, statistical modelling, data science, and generative AI techniques. Highly proficient in Python and SQL. Experience with Tableau and/or PowerBI. Has worked with Amazon Web Services and SageMaker. Ability to build data pipelines for data movement using tools such as Alteryx and AWS Glue. Experience with predictive analytics for customer retention, upsell/cross-sell, and new customer acquisition; customer segmentation; recommendation engines (custom and AWS Personalize); and POCs building generative AI solutions (GPT, Llama, etc.). Hands-on with prompt engineering. Experience in Customer Service, Finance, Sales, and Marketing. Additional technical skills include: Familiarity with Natural Language Processing, including feature extraction techniques, word embeddings, topic modeling, sentiment analysis, classification, sequence models, and transfer learning. Knowledgeable of AWS APIs for Machine Learning. Has worked with Snowflake extensively. Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations. Ability to communicate complex results in a simple and concise manner at all levels within the organization. Consulting experience with a premier consulting firm. #LI-SS5 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. 
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. 
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here . Learn more on how to protect yourself from fraudulent job postings here . More information about Thomson Reuters can be found on thomsonreuters.com.
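The "use Natural Language Processing to extract insight from text" responsibility in the posting above often starts with something as simple as TF-IDF keyword scoring. In practice scikit-learn's TfidfVectorizer is the usual tool; this hand-rolled pure-Python version (with invented toy documents) just makes the idea visible: terms that are frequent in one document but rare across the corpus score highest.

```python
# Illustrative sketch: TF-IDF scoring over a tiny, made-up token corpus.
import math
from collections import Counter

def tfidf(docs):
    """Return per-document {term: tf-idf score} maps for a list of token lists."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

docs = [["price", "renewal", "invoice"],
        ["renewal", "support", "ticket"],
        ["invoice", "price", "price"]]
scores = tfidf(docs)
# "support" appears in only one document, so it scores higher than
# "renewal", which appears in two.
```

The same shape of output (term weights per document) feeds downstream steps the posting mentions, such as topic modeling or classification.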

Posted 3 days ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics. About the role: Work with minimal supervision to solve business problems using data and analytics. Work in multiple business domain areas including Customer Experience and Service, Operations, Finance, Sales and Marketing. Work with various business stakeholders to understand and document requirements. Design an analytical framework to provide insights into a business problem. Explore and visualize multiple data sets to understand data available for problem solving. Build end-to-end data pipelines to handle and process data at scale. Build machine learning models and/or statistical solutions. Build predictive models. Use Natural Language Processing to extract insight from text. Design database models (if a data mart or operational data store is required to aggregate data for modeling). Design visualizations and build dashboards in Tableau and/or PowerBI. Extract business insights from the data and models. Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards. Work collaboratively with other team members. About you: Overall 3+ years' experience in technology roles. Must have a minimum of 1 year of experience working in the data science domain. Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, NLTK. Highly proficient in Python. Highly proficient in SQL. Experience with Tableau and/or PowerBI. Has worked with Amazon Web Services and SageMaker. 
Ability to build data pipelines for data movement using tools such as Alteryx, GLUE, Informatica. Proficient in machine learning, statistical modelling, and data science techniques. Experience with one or more of the following types of business analytics applications: Predictive analytics for customer retention, cross sales and new customer acquisition. Pricing optimization models. Segmentation. Recommendation engines. Experience in one or more of the following business domains Customer Experience and Service. Finance. Operations. Good presentation skills and the ability to tell stories using data and PowerPoint/Dashboard Visualizations. Excellent organizational, analytical and problem-solving skills. Ability to communicate complex results in a simple and concise manner at all levels within the organization. Ability to excel in a fast-paced, startup-like environment. #LI-SS5 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. 
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. 
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here . Learn more on how to protect yourself from fraudulent job postings here . More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 3 days ago

Apply

1.0 - 2.0 years

2 - 8 Lacs

Hyderābād

On-site

Job Title: AI/ML Associate Engineer Job Type: Full-Time. Immediate Joiners Only! Location: Hyderabad Desired Experience: 1-2 years of experience in AI/ML, or trained freshers (with no experience) with demonstrable hands-on project exposure in AI/ML. Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. Certifications in AI/ML or Data Science are a strong plus. Job Overview We are seeking a highly motivated and technically skilled Junior AI and ML Engineer/Developer to join our team in a SaaS product-based company. The role is ideal for freshers trained in AI/ML with hands-on project experience or professionals with 1-2 years of experience. The candidate will contribute to developing, implementing, and optimizing AI/ML solutions to enhance the intelligence and functionality of SaaS products. Key Responsibilities Develop and implement machine learning models, including supervised, unsupervised, and deep learning techniques. Build scalable AI/ML pipelines for tasks such as natural language processing (NLP), computer vision, recommendation systems, and predictive analytics. Work with programming languages like Python or R, leveraging AI/ML libraries such as TensorFlow, PyTorch, Keras, and Scikit-learn. Pre-process and analyse datasets using techniques like feature engineering, scaling, and data augmentation. Deploy AI/ML models on cloud platforms (e.g., AWS SageMaker, Google AI Platform, Azure ML) and ensure optimal performance. Manage datasets using SQL and NoSQL databases, applying efficient data handling and querying techniques. Utilize version control tools like Git to maintain code integrity and collaboration. Collaborate with product and development teams to align AI/ML solutions with business objectives. Document technical workflows, algorithms, and experiment results for reproducibility. 
Stay up to date with the latest advancements in AI/ML to propose and implement innovative solutions. Key Skills Proficiency in Python or R, with experience in AI/ML frameworks like TensorFlow, PyTorch, Scikit-learn, and Keras. Strong understanding of machine learning algorithms, NLP, and computer vision. Experience with data pre-processing techniques such as feature scaling, normalization, and handling missing data. Familiarity with cloud platforms for AI/ML deployment (AWS, Google Cloud, Azure). Database management skills in SQL and familiarity with NoSQL databases like MongoDB. Knowledge of version control systems like Git. Exposure to tools like Docker or Kubernetes for deploying AI/ML models in production. Strong foundation in mathematics and statistics, including linear algebra, probability, and optimization techniques. Excellent analytical and problem-solving skills, with a detail-oriented mindset. Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. 1-2 years of experience in AI/ML, or trained freshers with demonstrable hands-on project exposure in AI/ML. Certifications in AI/ML or Data Science are a strong plus. Knowledge or experience in a SaaS product-based environment is an advantage. What We Offer: Opportunity to work on cutting-edge AI/ML projects in a fast-paced SaaS environment. Collaborative and innovative workplace with mentorship and career growth opportunities. Competitive salary and benefits package.
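Two of the pre-processing techniques the posting above names, handling missing data and feature scaling, can be sketched in a few lines. In real pipelines scikit-learn's SimpleImputer and MinMaxScaler cover this; the pure-Python version below (with invented sample values) just shows the arithmetic each step performs.

```python
# Illustrative sketch of mean imputation followed by min-max scaling.
def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Rescale values to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

column = [10.0, None, 30.0, 20.0]    # hypothetical feature column
filled = impute_mean(column)         # [10.0, 20.0, 30.0, 20.0]
scaled = min_max_scale(filled)       # [0.0, 0.5, 1.0, 0.5]
```

Imputing before scaling matters: computing the min and max while None values are still present would fail, and scaling before imputing would shift the imputed mean.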

Posted 3 days ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote


AI/ML Expert – PHP Integration (Remote / India Preferred) Experience: 2–5 years in AI/ML with PHP integration About Us: We’re the team behind Wiser – AI-Powered Product Recommendations for Shopify , helping over 5,000+ merchants increase AOV and conversions through personalized upsell and cross-sell experiences. We’re now scaling our recommendation engine further and are looking for an AI/ML expert who can help us take Wiser to the next level with smarter, faster, and more contextual product recommendations. Role Overview: As an AI/ML Engineer, you will: Develop and optimize product recommendation algorithms based on customer behavior, sales data, and store context. Train models using behavioral and transactional data across multiple Shopify stores. Build and test ML pipelines that can scale across thousands of stores. Integrate AI outputs into our PHP-based system (Laravel/Symfony preferred). Work closely with product and backend teams to improve real-time recommendations, ranking logic, and personalization scores. 
Responsibilities: Analyze large datasets from Shopify stores (products, orders, sessions). Build models for product similarity, user-based and item-based collaborative filtering, and popularity-based + contextual hybrid models. Improve existing recommendation logic (e.g., Frequently Bought Together, Complete the Look). Implement real-time or near real-time prediction logic. Ensure AI output integrates smoothly into PHP backend APIs. Document logic and performance of models for internal review. Requirements: 2–5 years of experience in machine learning, AI, or data science. Strong Python skills (scikit-learn, TensorFlow, PyTorch, Pandas, NumPy). Experience building recommendation systems or working with eCommerce data. Experience integrating AI models with PHP/Laravel applications. Familiarity with the Shopify ecosystem and personalization is a bonus. Ability to explain ML logic to non-technical teams. Bonus: Experience with AWS, S3, SageMaker, or model hosting APIs. What You’ll Get: Opportunity to shape AI in one of the fastest-growing Shopify apps. Work on a product used by 4,500+ stores globally. Direct collaboration with founders & product team. Competitive pay + growth opportunities.
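The item-based collaborative filtering mentioned above reduces, at its simplest, to representing each product by the set of users who bought it and comparing products by cosine similarity over those sets. This is a toy sketch with invented item names and purchase data; a production system like the one described would use sparse user-item matrices and library implementations rather than Python dicts.

```python
# Illustrative sketch: item-based CF via cosine similarity on binary buyer sets.
import math

purchases = {              # item -> set of user ids who bought it (toy data)
    "belt":   {1, 2, 3},
    "shoes":  {2, 3, 4},
    "kettle": {5},
}

def cosine(a, b):
    """Cosine similarity between two items' buyer sets (binary vectors)."""
    inter = len(purchases[a] & purchases[b])
    return inter / math.sqrt(len(purchases[a]) * len(purchases[b]))

def most_similar(item):
    """Pick the co-purchased item with the highest similarity."""
    others = [(cosine(item, o), o) for o in purchases if o != item]
    return max(others)[1]

# "belt" and "shoes" share two buyers, so each recommends the other;
# "kettle" shares none and scores 0.
```

Swapping the binary buyer sets for weighted vectors (view counts, order quantities) gives the usual next refinement, and blending in popularity yields the hybrid models the posting describes.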

Posted 3 days ago

Apply

15.0 - 25.0 years

8 - 10 Lacs

Thiruvananthapuram

On-site

15 - 25 Years 1 Opening Trivandrum Role description Job Title: AI Architect Location: Kochi/Trivandrum Experience: 8-15 Years About the Role: We are seeking a highly experienced and visionary AI Architect to lead the design and implementation of cutting-edge AI solutions that will drive our company's AI transformation. This role is critical in bridging the gap between business needs, technical feasibility, and the successful deployment of AI initiatives across our product and delivery organizations. Key Responsibilities: AI Strategy & Roadmap: Define and drive the AI architectural strategy and roadmap, ensuring alignment with overall business objectives and the company's AI transformation goals. Solution Design & Architecture: Lead the end-to-end architectural design of complex AI/ML solutions, including data pipelines, model training, deployment, and monitoring. Technology Evaluation & Selection: Evaluate and recommend appropriate AI technologies, platforms, and tools (e.g., machine learning frameworks, cloud AI services, MLOps platforms) to support scalable and robust AI solutions. Collaboration & Leadership: Partner closely with product teams, delivery organizations, and data scientists to translate business requirements into technical specifications and architectural designs. Provide technical leadership and guidance to development teams. Best Practices & Governance: Establish and enforce AI development best practices, coding standards, and governance policies to ensure high-quality, secure, and compliant AI solutions. Scalability & Performance: Design AI solutions with scalability, performance, and reliability in mind, anticipating future growth and evolving business needs. Innovation & Research: Stay abreast of the latest advancements in AI, machine learning, and related technologies, identifying opportunities for innovation and competitive advantage. 
Mentorship & Upskilling: Mentor and upskill internal teams on AI architectural patterns, emerging technologies, and best practices. Key Requirements: 8-15 years of experience in architecting and implementing complex AI/ML solutions, with a strong focus on enterprise-grade systems. Deep understanding of machine learning algorithms, deep learning architectures, and natural language processing (NLP) techniques. Proven experience with major cloud AI platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform) and MLOps principles. Strong proficiency in programming languages commonly used in AI development (e.g., Python, Java). Experience with big data technologies (e.g., Spark, Hadoop) and data warehousing solutions. Demonstrated ability to lead cross-functional technical teams and drive successful project outcomes. Excellent communication, presentation, and stakeholder management skills, with the ability to articulate complex technical concepts to diverse audiences. Bachelor’s or master’s degree in Computer Science, AI, Machine Learning, or a related quantitative field. Good to Have: Prior experience in an AI leadership or principal architect role within an IT services or product development company. Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) for deploying AI models. Experience with responsible AI principles, ethics, and bias mitigation strategies. Contributions to open-source AI projects or relevant publications. Key Skills: AI Architecture, Machine Learning, Deep Learning, NLP, Cloud AI Platforms, MLOps, Data Engineering, Python, Solution Design, Technical Leadership, Scalability, Performance Optimization. Skills: Data Science, Artificial Intelligence, Data Engineering About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. 
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.

Posted 3 days ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Experience in AWS SageMaker development, pipelines, and real-time and batch transform jobs. Expertise in AWS and Terraform/CloudFormation for IaC. Experience with AWS networking concepts. Coding skills in Python with TensorFlow, PyTorch, or scikit-learn. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Software Engineer – Backend SOL00054 Job Type: Full Time Location: Hyderabad, Telangana Experience Required: 5-7 Years CTC: 13-17 LPA Job Description: Our client, headquartered in the USA with offices globally, is looking for a Backend Software Engineer to join our team responsible for building the core backend infrastructure for our MLOps platform on AWS. The systems you help build will enable feature engineering, model deployment, and model inference at scale – in both batch and online modes. You will collaborate with a distributed cross-functional team to design and build scalable, reliable systems for machine learning workflows. Key Responsibilities: Design, develop, and maintain backend components of the MLOps platform hosted on AWS. Build and enhance RESTful APIs and microservices using Python frameworks like Flask, Django, or FastAPI. Work with WSGI/ASGI web servers such as Gunicorn and Uvicorn. Implement scalable and performant solutions using concurrent programming (AsyncIO). Develop automated unit and functional tests to ensure code reliability. Collaborate with DevOps engineers to integrate CI/CD pipelines and ensure smooth deployments. Participate in on-call rotation to support production issues and ensure high system availability. Mandatory Skills: Strong backend development experience using Python with Flask, Django, or FastAPI. Experience working with WSGI/ASGI web servers (e.g., Gunicorn, Uvicorn). Hands-on experience with AsyncIO or other asynchronous programming models in Python. Proficiency with unit and functional testing frameworks. Experience working with AWS (or at least one public cloud platform). Familiarity with CI/CD practices and tooling. Nice to Have Skills: Experience developing Kafka client applications in Python. Familiarity with MLOps platforms like AWS SageMaker, Kubeflow, or MLflow. Exposure to Apache Spark or similar big data processing frameworks. 
Experience with Docker and container platforms such as AWS ECS or EKS. Familiarity with Terraform, Jenkins, or other DevOps/IaC tools. Knowledge of Python packaging (Wheel, PEX, Conda). Experience with metaprogramming in Python. Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
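The AsyncIO skill this backend posting asks for usually shows up as fanning out many I/O-bound calls concurrently. A minimal sketch, using only the standard library: `call_model` is a hypothetical stand-in for a remote inference request (the `asyncio.sleep` simulates network latency), and `asyncio.gather` runs the calls concurrently so total wall time stays near one call's latency rather than the sum.

```python
# Illustrative sketch: concurrent (simulated) inference calls with asyncio.
import asyncio

async def call_model(record_id: int) -> dict:
    """Pretend remote inference call; sleep stands in for I/O wait."""
    await asyncio.sleep(0.01)
    return {"id": record_id, "score": record_id * 0.1}

async def score_all(ids):
    # gather() schedules all coroutines at once and awaits them together.
    return await asyncio.gather(*(call_model(i) for i in ids))

results = asyncio.run(score_all([1, 2, 3]))
```

The same pattern appears inside FastAPI endpoints served by Uvicorn (an ASGI server), where each request handler is a coroutine and downstream calls are awaited rather than blocked on.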

Posted 3 days ago

Apply

8.0 - 15.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description Job Title: AI Architect Location: Kochi/Trivandrum Experience: 8-15 Years About the Role: We are seeking a highly experienced and visionary AI Architect to lead the design and implementation of cutting-edge AI solutions that will drive our company's AI transformation. This role is critical in bridging the gap between business needs, technical feasibility, and the successful deployment of AI initiatives across our product and delivery organizations. Key Responsibilities AI Strategy & Roadmap: Define and drive the AI architectural strategy and roadmap, ensuring alignment with overall business objectives and the company's AI transformation goals. Solution Design & Architecture: Lead the end-to-end architectural design of complex AI/ML solutions, including data pipelines, model training, deployment, and monitoring. Technology Evaluation & Selection: Evaluate and recommend appropriate AI technologies, platforms, and tools (e.g., machine learning frameworks, cloud AI services, MLOps platforms) to support scalable and robust AI solutions. Collaboration & Leadership: Partner closely with product teams, delivery organizations, and data scientists to translate business requirements into technical specifications and architectural designs. Provide technical leadership and guidance to development teams. Best Practices & Governance: Establish and enforce AI development best practices, coding standards, and governance policies to ensure high-quality, secure, and compliant AI solutions. Scalability & Performance: Design AI solutions with scalability, performance, and reliability in mind, anticipating future growth and evolving business needs. Innovation & Research: Stay abreast of the latest advancements in AI, machine learning, and related technologies, identifying opportunities for innovation and competitive advantage. Mentorship & Upskilling: Mentor and upskill internal teams on AI architectural patterns, emerging technologies, and best practices. 
Key Requirements: 8-15 years of experience in architecting and implementing complex AI/ML solutions, with a strong focus on enterprise-grade systems. Deep understanding of machine learning algorithms, deep learning architectures, and natural language processing (NLP) techniques. Proven experience with major cloud AI platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform) and MLOps principles. Strong proficiency in programming languages commonly used in AI development (e.g., Python, Java). Experience with big data technologies (e.g., Spark, Hadoop) and data warehousing solutions. Demonstrated ability to lead cross-functional technical teams and drive successful project outcomes. Excellent communication, presentation, and stakeholder management skills, with the ability to articulate complex technical concepts to diverse audiences. Bachelor’s or master’s degree in Computer Science, AI, Machine Learning, or a related quantitative field. Good to Have: Prior experience in an AI leadership or principal architect role within an IT services or product development company. Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) for deploying AI models. Experience with responsible AI principles, ethics, and bias mitigation strategies. Contributions to open-source AI projects or relevant publications. Key Skills: AI Architecture, Machine Learning, Deep Learning, NLP, Cloud AI Platforms, MLOps, Data Engineering, Python, Solution Design, Technical Leadership, Scalability, Performance Optimization. Skills: Data Science, Artificial Intelligence, Data Engineering

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
