
1558 Sagemaker Jobs - Page 31

JobPe aggregates listings for easy access; applications are made directly on the original job portal.

5.0 - 9.0 years

15 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

JD - Any 4 Must-Have: AWS RDS, Redshift, Glue, Airflow, Python
Note: Candidates should bring any 4 of the must-have skills; if that proves difficult, only Redshift and AWS RDS are negotiable, and the rest are mandatory.
Good to Have: general AWS knowledge, SageMaker, QuickSight, Talend
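The must-have stack above (Glue, Airflow, Python feeding Redshift/RDS) centres on the extract-transform-load pattern. A rough, hypothetical sketch of that shape in plain Python; the data, field names, and load target are invented for illustration, and a real job would read from S3/RDS and COPY into Redshift:

```python
def extract() -> list[dict]:
    # Stand-in for reading source rows from RDS or S3.
    return [
        {"order_id": 1, "amount": "250.00", "region": "south"},
        {"order_id": 2, "amount": "99.50", "region": "north"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Cast amounts to numbers and normalise region labels before loading.
    return [
        {**r, "amount": float(r["amount"]), "region": r["region"].upper()}
        for r in rows
    ]

def load(rows: list[dict]) -> int:
    # Stand-in for a Redshift COPY/INSERT; returns the number of rows written.
    return len(rows)

if __name__ == "__main__":
    print(load(transform(extract())))
```

An Airflow DAG would typically wire these three steps as separate tasks, with retries and scheduling handled by the orchestrator.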

Posted 1 month ago


6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Join our Team

Ericsson Enterprise Wireless Solutions (BEWS) is responsible for driving Ericsson's Enterprise Networking and Security business. Our expanding product portfolio covers wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN enterprise connectivity and are rapidly growing in enterprise Private 5G networks and Secure Access Service Edge (SASE) solutions.

Key Responsibilities
- Define and implement model validation processes and business success criteria in data science terms.
- Contribute to the architecture and data flow for machine learning models.
- Rapidly develop and iterate minimum viable solutions (MVS) that address enterprise needs.
- Conduct advanced data analysis and rigorous testing to enhance model accuracy and performance.
- Work with Data Architects to leverage existing data models and create new ones as required.
- Collaborate with product teams and business partners to industrialize machine learning models into Ericsson's enterprise solutions.
- Build MLOps pipelines for continuous integration, continuous delivery, validation, and monitoring of AI/ML models.
- Design and implement effective big data storage and retrieval strategies (indexing, partitioning, etc.).
- Develop and maintain APIs for AI/ML models and optimize data pipelines.
- Lead end-to-end ML projects from conception to deployment.
- Stay updated on the latest ML advancements and apply best practices to enterprise AI solutions.

Required Skills & Experience
- 4–6 years of hands-on experience in machine learning, AI, and data science.
- Strong knowledge of ML frameworks (Keras, TensorFlow, Spark ML, etc.).
- Proficiency in ML algorithms, deep learning, reinforcement learning (RL), and large language models (LLMs).
- Expertise in MLOps, including model lifecycle management and monitoring.
- Experience with containerization and orchestration (Docker, Kubernetes, Helm charts).
- Hands-on expertise with workflow orchestration tools (Kubeflow, Airflow, Argo).
- Strong programming skills in Python and experience with C++, Scala, Java, R.
- Experience in API design and development for AI/ML models.
- Hands-on knowledge of Terraform for infrastructure automation.
- Familiarity with AWS services (Data Lake, Athena, SageMaker, OpenSearch, DynamoDB, Redshift).
- Strong understanding of self-hosted deployment of LLMs on AWS.
- Experience with RASA, LangChain, LangGraph, LlamaIndex, Django, Open Policy Agent.
- Working knowledge of vector databases, knowledge graphs, retrieval-augmented generation (RAG), agents, and agentic mesh architectures.
- Expertise in monitoring tools like Datadog for Kubernetes environments.
- Ability to document, present, and communicate technical findings to business stakeholders.
- Proven ability to contribute to ML forums, patents, and research publications.

Educational Qualifications
B.Tech/B.E. in Computer Science, MCA, or a Master's in Mathematics/Statistics from a top-tier institute.

Join Ericsson and be part of a cutting-edge team that is revolutionizing enterprise AI, 5G, and security solutions. Apply now to shape the future of wireless connectivity!

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find out all you need to know about our typical hiring process. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 763721
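Several requirements above (vector databases, RAG) reduce to ranking documents by embedding similarity. A minimal stdlib sketch of that core idea, with toy two-dimensional embeddings invented for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, docs, k=1):
    # docs is a list of (doc_id, embedding) pairs; return the k ids most
    # similar to the query -- the operation a vector database accelerates.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

In a real RAG system the embeddings would come from a model, and the top-k chunks would be passed to an LLM as grounding context.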

Posted 1 month ago


5.0 - 10.0 years

2 - 10 Lacs

Calcutta

On-site

Join our Team

About this opportunity:
We are seeking a highly skilled and motivated AI Engineer to join our growing AI/ML team. The ideal candidate will be responsible for developing, deploying, and scaling machine learning and GenAI (LLM-based) solutions across business domains. You will work closely with data scientists, data engineers, and business stakeholders to turn innovative ideas into impactful solutions using state-of-the-art cloud technologies and MLOps practices.

What you will do:
- Develop, train, test, and deploy machine learning and GenAI/LLM models.
- Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation.
- Collaborate with cross-functional teams to understand business needs and translate them into AI solutions.
- Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS).
- Continuously improve model performance through tuning, optimization, and retraining.
- Apply MLOps practices, using infrastructure-as-code to deploy models into production and industrialize business solutions.

The skills you bring:
- Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc.
- Solid programming skills in Python and experience with PySpark for large-scale data processing.
- Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions.

Experience & Education:
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field.
- 5–10 years of experience in ML/AI model development and deployment.

Primary country and city: India (IN) || Kolkata
Req ID: 769126
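The "tuning, optimization, and retraining" responsibility above usually hinges on detecting drift between training-time and live data. A toy sketch of one simple signal, a standardized mean shift; the threshold and data are illustrative, and production systems use richer statistics such as PSI:

```python
import statistics

def drift_score(baseline, current):
    # How many baseline standard deviations the live mean has moved.
    mu_b = statistics.mean(baseline)
    sd_b = statistics.pstdev(baseline)
    mu_c = statistics.mean(current)
    return abs(mu_c - mu_b) / sd_b if sd_b else 0.0

def needs_retraining(baseline, current, threshold=0.5):
    # Flag the model for retraining when the input distribution shifts.
    return drift_score(baseline, current) > threshold
```

In an AWS setting this check would typically run on a schedule, with the score pushed to CloudWatch as a custom metric.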

Posted 1 month ago


6.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Agentic AI Engineer | 6+ Years | Remote | Work Timings: 2:00 PM to 11:00 PM

Job Description:
- AI/ML Model Development & Deployment: focused on marketing and operations use cases
- Deep Learning: CNNs, RNNs, Transformers, attention mechanisms
- Generative AI: experience with OpenAI (GPT, DALL·E, Whisper) and Anthropic (Claude)
- Agentic AI Platforms: AutoGen, CrewAI, AWS Bedrock
- Multimodal AI: building agents using text, voice, and other inputs for automation
- Python Programming: proficient with NumPy, Pandas, Matplotlib, TensorFlow, PyTorch
- Traditional ML Techniques: supervised/unsupervised learning, PCA, feature engineering, model evaluation, hyperparameter tuning
- Data Analytics: predictive analysis, clustering, A/B testing, KPI monitoring
- MLOps & CI/CD: model versioning, deployment pipelines, monitoring
- Cloud Services: AWS (S3, Lambda, EC2, SageMaker, Bedrock), serverless architectures
- Development Tools: proficient in using Cursor for design, development, and code reviews
- Communication & Collaboration: strong communication skills and client engagement experience
- Domain Expertise: marketing-focused AI solutions (a big plus)
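"A/B testing" in the analytics bullet above typically means a two-proportion z-test on conversion counts. A hedged stdlib sketch; the counts are made up, and real tests also need sample-size planning and a pre-agreed significance policy:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: how many standard errors separate B from A.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# |z| > 1.96 corresponds to significance at the usual 5% level.
```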

Posted 1 month ago


2.0 years

0 Lacs

India

On-site

Location: In-Person | Novel Tech Park, HSR Layout, Bengaluru
Type: Full-Time

What We're Building
At Sapience1, we're on a mission to transform how families discover and access youth services, from academics and enrichment to life skills and care, using behavioral AI, intelligent design, and seamless technology. We're not building just another app. We're engineering the future of Human Experience Tech, where services feel smart, personal, and human. We're looking for a builder: someone who thrives in fast-moving environments and loves turning complex challenges into real-world products.

Job Description
We are seeking a talented ML Model Training Engineer to be a key contributor to the Hello Edison™ AI Engine, the core intelligence of the Sapience1 platform. You will be responsible for designing, developing, and optimizing machine learning models for personalized matchmaking, recommendation systems, and behavioral insights. Your work will directly enhance the platform's ability to create meaningful connections between families and service providers.

Responsibilities
- Design, implement, and optimize machine learning models for key Hello Edison™ functionalities, including:
  - Matchmaking and compatibility scoring (AI Fit Score)
  - Personalized recommendations for Members and Partners
  - Behavioral pattern detection for performance optimization and churn prediction
- Develop data pipelines for collecting, cleaning, and preparing diverse datasets from Member insights, Partner assessments, session feedback, and usage patterns.
- Conduct feature engineering and selection to improve model accuracy and performance.
- Train, evaluate, and fine-tune ML models using appropriate frameworks and techniques.
- Collaborate with Backend Engineers to integrate ML models into production systems via APIs.
- Monitor and maintain the performance of deployed ML models, identifying areas for continuous improvement.
- Research and apply state-of-the-art ML techniques to enhance the AI engine's capabilities.

Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related quantitative field.
- 2+ years of experience in developing and deploying machine learning models.
- Strong proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, NumPy).
- Experience with data processing and ETL pipelines.
- Understanding of various ML algorithms (e.g., recommendation systems, supervised/unsupervised learning, NLP basics).
- Familiarity with cloud platforms (e.g., AWS SageMaker, Google AI Platform) for ML model training and deployment.
- Experience with version control systems (Git).
- Excellent problem-solving skills and ability to work with large datasets.

Compensation
Competitive, based on experience: ₹18,00,000–₹24,00,000 per annum.
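The "AI Fit Score" named above is not publicly specified; as a purely hypothetical illustration of compatibility scoring, one could blend set overlap with categorical matches. The weights, field names, and the 0-100 scale here are all invented:

```python
def jaccard(a, b):
    # Overlap of two sets: |intersection| / |union|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def fit_score(member, provider, w_interests=0.7, w_mode=0.3):
    # Blend interest overlap with a delivery-mode match into a 0-100 score.
    score = w_interests * jaccard(member["interests"], provider["services"])
    score += w_mode * (1.0 if member["mode"] == provider["mode"] else 0.0)
    return round(100 * score)
```

A production matchmaking model would learn such weights from feedback data rather than hard-coding them.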

Posted 1 month ago


4.0 years

0 Lacs

India

Remote

Job Title: AI/ML Engineer
Experience: 4+ Years
Location: Remote
Job Type: Full-Time

Job Summary:
We are looking for a passionate and results-driven AI/ML Engineer with 4+ years of experience in designing, building, and deploying machine learning models and intelligent systems. The ideal candidate should have solid programming skills and a strong grasp of data preprocessing, model evaluation, and MLOps practices. You will collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to integrate intelligent features into applications and systems.

Key Responsibilities:
- Design, develop, train, and optimize machine learning and deep learning models for real-world applications.
- Preprocess, clean, and transform structured and unstructured data for model training and evaluation.
- Implement, test, and deploy models using APIs or microservices (Flask, FastAPI, etc.) in production environments.
- Use ML libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Hugging Face, and XGBoost.
- Monitor and retrain models as needed for performance, accuracy, and drift mitigation.
- Collaborate with software and data engineering teams to operationalize ML solutions using MLOps tools.
- Stay updated on emerging trends in AI/ML and suggest enhancements to existing systems.

Required Skills and Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, AI/ML, Data Science, or a related field.
- 4+ years of hands-on experience in machine learning model development and deployment.
- Strong experience in Python and libraries like Pandas, NumPy, scikit-learn, and Matplotlib/Seaborn.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Proficiency in model deployment using Flask, FastAPI, Docker, and REST APIs.
- Experience with version control (Git), model versioning, and experiment tracking (MLflow, Weights & Biases).
- Familiarity with cloud platforms like AWS (SageMaker), Azure ML, or GCP AI Platform.
- Knowledge of databases (SQL/NoSQL) and data pipelines (Airflow, Spark, etc.).
- Strong problem-solving and debugging skills, with an analytical mindset.
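"Model evaluation" and drift mitigation in the listing above rest on metrics like precision, recall, and F1, whose definitions fit in a few lines (scikit-learn provides these, but spelling them out makes the trade-off explicit):

```python
def evaluate(y_true, y_pred):
    # Precision, recall, and F1 for binary labels (1 = positive class).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Tracking these per release (e.g. in MLflow) is what makes retraining decisions auditable.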

Posted 1 month ago


7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Profile: Lead Data Scientist

We are looking for a passionate ML/AI Lead Data Scientist with demonstrated thought leadership and a proven ability to lead the development of analytics-focused products, utilizing cutting-edge machine learning, natural language processing, and mathematical techniques to drive powerful business results. You will have the opportunity to work with a wide range of stakeholders and functional teams to help them discover insights hidden in data sets and enable stakeholders to improve the business. AI ethics, fairness, and secure development practices are also our focus areas.

Responsibilities
- Partner with lines of business to understand business problems and translate them into identifiable machine learning problems that can be delivered as technical solutions and actionable recommendations across the organization.
- Coordinate with business teams to monitor outcomes and refine/improve the machine learning models.
- Work across the spectrum of statistical modelling, including supervised, unsupervised, and deep learning techniques, to apply the right level of solution to the right problem.
- Collaborate and share best practices with data and software engineers to enable consistent deployment of high-quality models that will scale across the company's ecosystem.
- Own the code review process to ensure stringent coding guidelines are met by other data scientists and data engineers.
- Mentor and lead Data Science engineers in choosing the right machine learning approaches/models and utilizing open-source languages such as R, Python, etc.
- Lead data mining and collection procedures for all business use cases and guarantee data quality and integrity.
- Use data visualization tools to deliver insights to stakeholders and present technical solutions to non-technical audiences in a simple and clear manner.
- Track emerging developments in technology, tools, and practices in the open-source data science/machine learning/chatbot domain and leverage them for Schneider Electric.
- Build frameworks leveraging APIs to industrialize AI models across the organization.
- Adhere to stringent quality assurance and documentation standards using version control and code repositories (e.g., Git, GitHub, Markdown).

Requirements and Skills
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Informatics, Information Systems, or another quantitative field.
- Experience solving real-life, complex business problems using machine learning; hands-on experience deploying these machine learning solutions to production is mandatory.
- In-depth understanding of NLP, LLMs, generative AI, and RAG application development.
- In-depth understanding of, and modelling experience with, supervised, unsupervised, reinforcement, and deep learning models; hands-on knowledge of data wrangling, data cleaning/preparation, and dimensionality reduction is required.
- Knowledge of vector algebra and statistical and probabilistic modelling is highly desirable.
- Exploratory data analysis and hypothesis testing to identify ML opportunities is a plus.
- Experience with major machine learning frameworks such as PyTorch, scikit-learn, TensorFlow, Pandas, SparkML, etc.
- Fluency in programming languages such as Python, R, or equivalents.
- Familiarity with databases like MySQL, Oracle, SQL Server, NoSQL, etc. is desirable.
- Experience working with Amazon SageMaker or Azure ML Studio for deployments is required.
- Experience with data visualization software such as Tableau, ELK, etc. is a plus.
- Strong analytical and critical thinking skills, a business mindset, swiftness in identifying risk situations and opportunities, and the ability to generate creative solutions to business problems.
- Effective communication skills (written and verbal) to properly articulate complicated statistical models/reports to management and other IT development partners.

About Us
Schneider Electric™ creates connected technologies that reshape industries, transform cities and enrich lives. Our 144,000 employees thrive in more than 100 countries. From the simplest of switches to complex operational systems, our technology, software and services improve the way our customers manage and automate their operations. Help us deliver solutions that ensure Life Is On everywhere, for everyone and at every moment. Great people make Schneider Electric a great company. We seek out and reward people for putting the customer first, being disruptive to the status quo, embracing different perspectives, continuously learning, and acting like owners. We want our employees to reflect the diversity of the communities in which we operate. We welcome people as they are, creating an inclusive culture where all forms of diversity are seen as a real value for the company. We're looking for people with a passion for success, on the job and beyond.
Qualifications
- 7+ years of experience solving real-life, complex business problems using machine learning, with the responsibilities and skills listed above.

Primary Location: IN-Karnataka-Bangalore
Schedule: Full-time
Unposting Date: Ongoing
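The supervised-modelling work described above ultimately comes down to fitting parameters by minimising a loss. As a from-scratch illustration on toy data (in practice this is what scikit-learn or PyTorch optimisers do at scale), gradient descent on a one-variable linear model:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    # Minimise mean squared error for y = w*x + b by gradient descent.
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w -= lr * dw
        b -= lr * db
    return w, b
```

The learning rate and step count are illustrative; too large a rate diverges, too small converges slowly.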

Posted 1 month ago


0.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Location: Chennai, Tamil Nadu, India
Job ID: R-230200
Date posted: 26/06/2025

Job Title: Senior Consultant - AI Engineer

Introduction to role:
Are you ready to redefine an industry and change lives? As a Senior Consultant - AI Engineer, you'll develop and deploy key AI products that generate business and scientific insights through advanced data science techniques. Dive into building models using both foundational and innovative methods, processing structured and unstructured data, and collaborating closely with internal partners to solve complex problems in drug development, manufacturing, and supply chain. This is your chance to make a direct impact on patients by transforming our ability to develop life-changing medicines!

About the Operations IT team
Operations IT is a global capability supporting the Global Operations organization across Pharmaceutical Technology Development, Manufacturing & Global Engineering, Quality Control, Sustainability, Supply Chain, Logistics, and Global External Sourcing and Procurement. We operate from key hubs in the UK, Sweden, the US, China, and our Global Technology Centers in India and Mexico. Our work directly impacts patients by transforming our ability to develop life-changing medicines, combining pioneering science with leading digital technology platforms and data.

Accountabilities:
- Drive the implementation of advanced modelling algorithms (e.g., classification, regression, clustering, NLP, image analysis, graph theory, generative AI) to generate actionable business insights.
- Mentor AI scientists, plan and supervise technical work, and collaborate with stakeholders.
- Work within an agile framework and in multi-functional teams to align AI solutions with business goals.
- Engage internal stakeholders and external partners for the successful delivery of AI solutions.
- Continuously monitor and optimize AI models to improve accuracy and efficiency (scalable, reliable, and well-maintained).
- Document processes, models, and key learnings, and contribute to building internal AI capabilities.
- Ensure AI models adhere to ethical standards, privacy regulations, and fairness guidelines.

Essential Skills/Experience:
- Bachelor's in operations research, mathematics, computer science, or a related quantitative field.
- Advanced expertise in Python and familiarity with database systems (e.g., SQL, NoSQL, graph).
- Proven proficiency in at least 3 of the following domains:
  - Generative AI: advanced expertise with LLMs/transformer models, AWS Bedrock, SageMaker, LangChain
  - Computer Vision: image classification and object detection
  - MLOps: putting models into production in the AWS ecosystem
  - Optimization: production scheduling and planning
  - Traditional ML: time series analysis, unsupervised anomaly detection, analysis of high-dimensional data
- Proficiency in ML libraries: scikit-learn, Pandas, TensorFlow/PyTorch.
- Experience productionizing ML/GenAI services and working with complex datasets.
- Strong understanding of software development, algorithms, optimization, and scaling.
- Excellent communication and business analysis skills.

Desirable Skills/Experience:
- Master's or PhD in a relevant quantitative field
- Cloud engineering experience (AWS cloud services)
- Snowflake
- Software development experience (e.g., React JS, Node JS)

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we leverage technology to impact patients and ultimately save lives. Our global organization is purpose-led, ensuring that we can fulfill our mission to push the boundaries of science and discover life-changing medicines. We take pride in working close to the cause, opening the locks to save lives, ultimately making a massive difference to the outside world. Here you'll find a dynamic environment where innovation thrives and diverse minds work inclusively together. Ready to make a meaningful impact? Apply now and be part of our journey to revolutionize healthcare!

Date Posted: 27-Jun-2025
Closing Date: 09-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
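"Unsupervised anomaly detection", one of the domains listed above, can be illustrated in its simplest form as z-score outlier flagging. The threshold and data are illustrative; real pipelines use robust or multivariate methods:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    # Indices of points more than `threshold` standard deviations from the mean.
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [i for i, v in enumerate(values) if sd and abs(v - mu) / sd > threshold]
```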

Posted 1 month ago


2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: AI/ML Engineer
Location: Bengaluru, India
Experience: 6 months - 2 years

Company Overview
IAI Solutions operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains.

Position Summary
We are looking for an AI/ML Engineer with a strong background in Python, Flask, FastAPI, and object-oriented programming (OOP). The ideal candidate should have significant experience in prompt engineering, open-source model fine-tuning, and the HuggingFace libraries. Additionally, expertise in working with cloud platforms such as AWS SageMaker or similar services for training AI models is essential. Priority will be given to candidates with a research background, particularly those who have successfully fine-tuned and deployed AI models in real-world applications.

Key Responsibilities
- Develop, fine-tune, and deploy AI models using Python and Django frameworks.
- Apply prompt engineering techniques to optimize model outputs and improve accuracy.
- Utilize HuggingFace libraries and other ML tools to build and fine-tune state-of-the-art models.
- Work on cloud platforms like AWS SageMaker or equivalent to train and deploy AI models efficiently.
- Collaborate with research teams to translate cutting-edge AI research into scalable solutions.
- Apply object-oriented programming (OOP) principles and problem-solving strategies in developing AI solutions.
- Stay updated on the latest advancements in AI/ML and integrate new techniques into ongoing projects.
- Document and share findings, best practices, and solutions across the engineering team.

An Ideal Candidate Will Have
- Strong proficiency in Python and Flask/FastAPI.
- Experience in prompt engineering and fine-tuning AI models.
- Extensive experience with HuggingFace libraries and similar AI/ML tools.
- Strong experience in agentic AI architecture.
- Hands-on experience with cloud platforms such as AWS SageMaker for training and deploying models.
- Proficiency in databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch.
- Hands-on experience with Docker and Git for version control.
- A background in AI/ML research, with a preference for candidates from research institutes.
- Demonstrated experience in training and deploying machine learning models in real-world applications.
- A solid understanding of object-oriented programming and problem-solving skills.
- Strong analytical skills and the ability to work independently or in a team environment.
- Excellent communication skills, with the ability to present complex technical concepts to non-technical stakeholders.

Must-Have Skills
- Python
- Object-Oriented Programming (OOP)
- Prompt engineering
- HuggingFace libraries and similar AI/ML tools
- Open-source model fine-tuning
- Agentic AI architectures such as LangGraph and CrewAI
- Docker and Git for version control
- Databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch

Good To Have
- Deep learning and machine learning
- AWS SageMaker or similar services for training AI models
- Previous experience in academic or industrial research, with published work in AI/ML
- A proven track record of successful AI model deployments and optimizations

Perks & Benefits
- Work on groundbreaking AI/ML projects in a collaborative and innovative environment
- Access to state-of-the-art tools and cloud platforms
- Opportunities for professional development and continuous learning
- Competitive salary

(ref:hirist.tech)
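"Prompt engineering" as required above often starts with disciplined prompt templates, for example grounding an LLM in retrieved context. A hypothetical template builder; the instruction wording is invented, not any model vendor's recommendation:

```python
def build_prompt(context_chunks, question):
    # Number the retrieved chunks so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context_chunks))
    return (
        "Answer using only the numbered context below, citing chunk numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Keeping the template in code (rather than ad hoc strings) makes prompts versionable and testable like any other artifact.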

Posted 1 month ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...
Join Verizon as we continue to grow our industry-leading network to improve the ways people, businesses, and things connect. We are looking for an experienced, talented and motivated AI & ML Engineer to lead AI industrialization for Verizon. You will also serve as a subject matter expert on the latest industry knowledge to improve the Home product, solutions, and/or processes related to machine learning, deep learning, responsible AI, GenAI, natural language processing, computer vision and other AI practices.
- Deploying machine learning models in on-prem, cloud and Kubernetes environments.
- Driving data-derived insights across the business domain by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives.
- Creating and implementing data and ML pipelines for model inference, both in real time and in batches.
- Architecting, designing, and implementing large-scale AI/ML systems in a production environment.
- Monitoring the performance of data pipelines and making improvements as necessary.

What we're looking for...
You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and can interact with various partners and multi-functional teams to implement data science-driven business solutions.

You'll Need To Have:
- Bachelor's degree with four or more years of relevant work experience.
- Expertise in advanced analytics/predictive modelling in a consulting role.
- Experience with all phases of end-to-end analytics projects.
- Hands-on programming expertise in Python (with libraries like NumPy, Pandas, scikit-learn, TensorFlow, PyTorch) and R (for specific data analysis tasks).
- Knowledge of machine learning algorithms: linear regression, logistic regression, decision trees, random forests, support vector machines (SVMs), neural networks (deep learning), Bayesian networks.
- Data engineering: data cleaning and preprocessing, feature engineering, data transformation, data visualization.
- Cloud platforms: AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform.

Even better if you have one or more of the following:
- An advanced degree in Computer Science, Data Science, Machine Learning, or a related field.
- Knowledge of the Home domain, with key areas like smart home, digital security and wellbeing.
- Experience with stream-processing systems: Spark Streaming, Storm, etc.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Locations: Hyderabad, India; Chennai, India

Posted 1 month ago

Apply

0 years

10 - 12 Lacs

India

Remote

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company

Operating at the forefront of cloud analytics, big-data platform engineering, and enterprise AI, our teams design mission-critical data infrastructure for global clients across finance, retail, telecom, and emerging tech. We build distributed ingestion pipelines on Azure & Databricks, unlock real-time insights with Spark/Kafka, and automate delivery through modern DevOps so businesses can act on high-fidelity data, fast.

Role & Responsibilities

Engineer robust data pipelines: build scalable batch & streaming workflows with Apache Spark, Kafka, and Azure Data Factory/Databricks.
Implement Delta Lakehouse layers: design bronze-silver-gold medallion architecture to guarantee data quality and lineage.
Automate CI/CD for ingestion: create Git-based workflows, containerized builds, and automated testing to ship reliable code.
Craft clean, test-driven Python: develop modular PySpark/Pandas services, enforce SOLID principles, and maintain git-versioned repos.
Optimize performance & reliability: profile jobs, tune clusters, and ensure SLAs for throughput, latency, and cost.
Collaborate in Agile squads: partner with engineers, analysts, and consultants to translate business questions into data solutions.

Skills & Qualifications

Must-Have
1-2 yrs hands-on with Apache Spark or Kafka and Python (PySpark/Pandas/Polars).
Experience building Delta Lake / medallion architectures on Azure or Databricks.
Proven ability to design event-driven pipelines and write unit/integration tests.
Git-centric workflow knowledge plus CI/CD tooling (GitHub Actions, Azure DevOps).

Preferred
Exposure to SQL/relational & NoSQL stores and hybrid lakehouse integrations.
STEM/computer-science degree or equivalent foundation in algorithms and OOP.

Benefits & Culture Highlights

Flexible, remote-first teams: outcome-driven culture with quarterly hackathons and dedicated learning budgets.
Growth runway: clear promotion paths from Associate to Senior Engineer, backed by certified Azure & Databricks training.
Inclusive collaboration: small, empowered Agile squads that value knowledge-sharing, mentorship, and transparent feedback.

Skills: modern JavaScript, cloud, vector databases, Angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), data, feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
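The bronze-silver-gold medallion layering this role centers on can be sketched as a sequence of progressively cleaner tables. Below is a toy illustration in plain Python; a real pipeline would use PySpark DataFrames and Delta tables, and the record fields here are hypothetical:

```python
# Toy sketch of medallion (bronze -> silver -> gold) layering.
# Real pipelines use PySpark and Delta tables; the record fields
# below are hypothetical examples.

raw_events = [  # bronze: raw ingested records, kept as-is
    {"user": "a", "amount": "10.5", "ts": "2024-01-01"},
    {"user": "b", "amount": "bad", "ts": "2024-01-01"},   # malformed row
    {"user": "a", "amount": "4.5", "ts": "2024-01-02"},
]

def to_silver(records):
    """Silver layer: validated, typed records (malformed rows dropped)."""
    silver = []
    for r in records:
        try:
            silver.append({"user": r["user"], "amount": float(r["amount"]), "ts": r["ts"]})
        except ValueError:
            continue  # a real pipeline would quarantine these for lineage
    return silver

def to_gold(records):
    """Gold layer: business-level aggregate (total spend per user)."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw_events)
gold = to_gold(silver)
print(gold)  # {'a': 15.0}
```

Each layer is persisted in practice, so downstream consumers can pick the quality level they need while lineage back to the raw bronze data is preserved.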

Posted 1 month ago

Apply


6.0 years

15 - 17 Lacs

India

Remote

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company

Operating at the intersection of Artificial Intelligence, Cloud Infrastructure, and Enterprise SaaS, we create data-driven products that power decision-making for Fortune 500 companies and high-growth tech firms. Our multidisciplinary teams ship production-grade generative-AI and Retrieval-Augmented Generation (RAG) solutions that transform telecom, finance, retail, and healthcare workflows without compromising on scale, security, or speed.

Role & Responsibilities

Build & ship LLM/RAG solutions: design, train, and productionize advanced ML and generative-AI models (GPT-family, T5) that unlock new product capabilities.
Own data architecture: craft schemas, ETL/ELT pipelines, and governance processes to guarantee high-quality, compliant training data on AWS.
End-to-end MLOps: implement CI/CD, observability, and automated testing (Robot Framework, JMeter, XRAY) for reliable model releases.
Optimize retrieval systems: engineer vector indices, semantic search, and knowledge-graph integrations that deliver low-latency, high-relevance results.
Cross-functional leadership: translate business problems into measurable ML solutions, mentor junior scientists, and drive sprint ceremonies.
Documentation & knowledge-sharing: publish best practices and lead internal workshops to scale AI literacy across the organization.

Skills & Qualifications

Must-Have – Technical Depth: 6+ years building ML pipelines in Python; expert in feature engineering, evaluation, and AWS services (SageMaker, Bedrock, Lambda).
Must-Have – Generative AI & RAG: proven track record shipping LLM apps with LangChain or similar, vector databases, and synthetic-data augmentation.
Must-Have – Data Governance: hands-on experience with metadata, lineage, data cataloging, and knowledge-graph design (RDF/OWL/SPARQL).
Must-Have – MLOps & QA: fluency in containerization, CI/CD, and performance testing; ability to embed automation within GitLab-based workflows.
Preferred – Domain Expertise: background in telecom or large-scale B2B platforms where NLP and retrieval quality are mission-critical.
Preferred – Full-Stack & Scripting: familiarity with Angular or modern JS for rapid prototyping plus shell scripting for orchestration.

Benefits & Culture Highlights

High-impact ownership: green-field autonomy to lead flagship generative-AI initiatives used by millions.
Flex-first workplace: hybrid schedule, generous learning stipend, and dedicated cloud credits for experimentation.
Inclusive, data-driven culture: celebrate research publications, OSS contributions, and diverse perspectives while solving hard problems together.

Skills: data, modern JavaScript, cloud, vector databases, Angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
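The retrieval half of a RAG system ranks a document store against a query embedding and prepends the top hits to the LLM prompt. A minimal sketch, with bag-of-words term counts standing in for learned embeddings (a production system would use a real embedding model and a vector database, and these toy documents are hypothetical):

```python
import math
from collections import Counter

# Minimal sketch of the retrieval step in a RAG pipeline.
# Counter-based term counts stand in for learned embeddings.

docs = [
    "delta lake stores data in parquet files",
    "kafka handles streaming event ingestion",
    "sagemaker trains and deploys ml models",
]

def embed(text):
    # Toy "embedding": term-frequency vector of lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing terms, so the dot product is safe.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]  # top-k passages to prepend to the LLM prompt

print(retrieve("how do i deploy ml models"))
```

Swapping `embed` for a real encoder and `sorted` for an approximate-nearest-neighbor index query is exactly the "vector indices and semantic search" work this role describes.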

Posted 1 month ago

Apply


10.0 years

0 Lacs

Greater Kolkata Area

On-site

Join our Team

About this opportunity:

We are seeking a highly skilled and motivated AI Engineer to join our growing AI/ML team. The ideal candidate will be responsible for developing, deploying, and scaling machine learning and GenAI (LLM-based) solutions across business domains. You will work closely with data scientists, data engineers, and business stakeholders to turn innovative ideas into impactful solutions using state-of-the-art cloud technologies and MLOps practices.

What you will do:

Develop, train, test, and deploy machine learning and GenAI/LLM models.
Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation.
Collaborate with cross-functional teams to understand business needs and translate them into AI solutions.
Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS).
Continuously improve model performance through tuning, optimization, and retraining.
Apply MLOps practices, using IaC to deploy models into production and industrialise the business solution.

The skills you bring:

Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc.
Solid programming skills in Python and experience with PySpark for large-scale data processing.
Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions.

Experience & Education:

Bachelor's or Master's in Computer Science, Data Science, AI, or a related field.
5–10 years of experience in ML/AI model development and deployment.

Why join Ericsson?

At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Click here to find out all you need to know about our typical hiring process.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 769126

Posted 1 month ago

Apply

0 years

0 Lacs

Borivali, Maharashtra, India

On-site

Description

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities

As an experienced technology professional, you will be responsible for:

Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
Providing technical guidance and troubleshooting support throughout project delivery
Collaborating with stakeholders to gather requirements and propose effective migration strategies
Acting as a trusted advisor to customers on industry trends and emerging technologies
Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications

Experience in cloud architecture and implementation
Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience
Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment
Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts
Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets
Experience in mentoring junior team members and guiding them on machine learning and data modeling applications

Preferred Qualifications

AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation)
AWS Professional-level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred
Experience with automation and scripting (e.g., Terraform, Python)
Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems
Experience in developing and deploying end-to-end machine learning and deep learning solutions

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - AWS ProServe IN - Karnataka
Job ID: A2941027

Posted 1 month ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position: Head of AI Cybersecurity

Key Responsibilities:

Define and lead Protectt.ai's AI cybersecurity strategy, setting vision, priorities, and execution roadmap.
Build and manage the AI security function, including team development and cross-functional alignment.
Identify and mitigate threats across the AI/ML lifecycle; implement threat modeling, red teaming, and secure development practices.
Collaborate with AI Research and Engineering teams to ensure secure model development, deployment, and monitoring.
Establish internal governance and external trust frameworks covering transparency, fairness, and responsible AI use.
Drive adoption of privacy-preserving technologies and lead security assessments, audits, and incident response for AI systems.
Represent the company in industry forums, standards bodies, and customer security engagements.
Continuously monitor and act on emerging AI security threats and research to maintain industry leadership.

Qualifications:

Bachelor's or Master's in Computer Science, Cybersecurity, AI, or related field (PhD preferred).
8+ years in cybersecurity, including 3+ years focused on AI/ML systems or adversarial machine learning.
Deep understanding of risks across AI/ML systems, including model threats, data vulnerabilities, and privacy risks.
Proven leadership in secure architecture design and large-scale AI threat mitigation.
Hands-on with tools such as IBM ART, CleverHans, SecML, or PrivacyRaven.
Familiarity with security standards (NIST AI RMF, ISO/IEC 42001, OWASP Top 10 for LLMs).
Strong communication skills across technical and executive levels.

Preferred Skills:

Experience with cloud ML platforms (AWS SageMaker, Vertex AI, Azure ML).
Knowledge of adversarial ML defense and secure model deployment.
Contributions to AI security research or open-source tools.
Understanding of regulations (EU AI Act, GDPR) and responsible AI frameworks.
Experience working with enterprise security, compliance, and audit functions.

If the role interests you, feel free to reply here or email your updated CV to nidhi.parikh@antal.com

Posted 1 month ago

Apply

4.0 years

0 Lacs

Chandigarh

On-site

Job Summary

We are looking for a skilled and motivated AI Engineer to join our team. The ideal candidate will have a strong foundation in Python, machine learning frameworks, and data science libraries. You will be responsible for developing, training, and deploying cutting-edge machine learning models, including applications in NLP, computer vision, and other AI domains.

Key Responsibilities

Develop and deploy machine learning models into production environments
Train and fine-tune models using large and diverse datasets
Implement AI techniques such as natural language processing (NLP), computer vision, and deep learning
Collaborate with data scientists, ML engineers, and software developers to optimize model performance and scalability
Utilize cloud-based AI services for scalable deployment and model management

Required Skills & Qualifications

4+ years' experience with Python and machine learning frameworks like TensorFlow or PyTorch
2+ years' experience with data science libraries such as NumPy, Pandas, and Scikit-learn
2+ years' experience with supervised, unsupervised, and deep learning techniques
Familiarity with cloud AI services (e.g., AWS SageMaker, Google AI Platform, Azure ML)
Strong problem-solving skills and ability to work in a fast-paced environment

Preferred Qualifications

Experience with model monitoring and performance tuning in production
Exposure to MLOps tools and CI/CD for ML pipelines
Understanding of model explainability and ethical AI practices

Why Join Us

Build with Purpose: Work on impactful, high-scale products that solve real problems using cutting-edge technologies.
Tech-First Culture: Join a team where engineering is at the core; we prioritize clean code, scalability, automation, and continuous learning.
Freedom to Innovate: You'll have ownership from day one, with room to experiment, influence architecture, and bring your ideas to life.
Collaborate with the Best: Work alongside passionate engineers, product thinkers, and designers who value clarity, speed, and technical excellence.

Paladin Tech is an equal opportunity employer. We are committed to creating an inclusive and diverse workplace and welcome candidates of all backgrounds and identities.

Job Types: Full-time, Permanent
Schedule: Day shift
Work Location: In person
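The supervised-learning experience this posting asks for boils down to fitting a model on labeled data and predicting labels for new points. A minimal sketch of that loop, using a nearest-centroid classifier in plain Python (real work would use Scikit-learn, TensorFlow, or PyTorch as listed above; the toy data and labels are hypothetical):

```python
# Minimal supervised-learning sketch: a nearest-centroid classifier.
# Plain Python stands in for Scikit-learn/TensorFlow/PyTorch here.

def fit(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, centroids[c]))
    return min(centroids, key=dist2)

# Hypothetical 2-feature training set with two classes.
X = [[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0]]
y = ["low", "low", "high", "high"]
model = fit(X, y)
print(predict(model, [1.1, 0.9]))  # low
print(predict(model, [4.1, 4.1]))  # high
```

The same fit/predict split is what production frameworks formalize, which is why the `fit`/`predict` naming mirrors Scikit-learn's estimator API.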

Posted 1 month ago

Apply

0 years

12 - 18 Lacs

Hyderābād

Remote

Job Description:

About the Role:

Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment and model inference in both batch and online modes.

What you'll do here

Design & build backend components of our MLOps platform on AWS.
Collaborate with geographically distributed cross-functional teams.
Participate in on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must have skills:
Experience with web development frameworks such as Flask, Django or FastAPI.
Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn etc.
Experience with concurrent programming designs such as AsyncIO.
Experience with unit and functional testing frameworks.
Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
Experience with CI/CD practices, tools, and frameworks.

Nice to have skills:
Experience with Apache Kafka and developing Kafka client applications in Python.
Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
Experience with big data processing frameworks, preferably Apache Spark.
Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
Experience with DevOps & IaC tools such as Terraform, Jenkins etc.
Experience with various Python packaging options such as Wheel, PEX or Conda.
Experience with metaprogramming techniques in Python.

Skills Required: Python development (Flask, Django or FastAPI), WSGI & ASGI web servers (Gunicorn, Uvicorn etc.), AWS

Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹100,000.00 - ₹150,000.00 per month
Location Type: Hybrid work
Schedule: Day shift
Work Location: Hybrid remote in Hyderabad, Telangana
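The AsyncIO concurrency named in the must-have skills is the pattern ASGI servers like Uvicorn rely on for online inference: many requests awaited concurrently on one event loop. A standard-library sketch, where `score_one` is a hypothetical stand-in for an awaitable call to a model server:

```python
import asyncio

# Sketch of AsyncIO-based fan-out for online model inference.
# score_one is a hypothetical stand-in for an awaitable model call.

async def score_one(features):
    await asyncio.sleep(0.01)  # simulated network/model latency
    return sum(features)       # placeholder "prediction"

async def score_batch(batch):
    # asyncio.gather runs all calls concurrently and preserves input
    # order, so total wall time is roughly one call's latency rather
    # than the sum of all latencies.
    return await asyncio.gather(*(score_one(f) for f in batch))

preds = asyncio.run(score_batch([[1, 2], [3, 4], [5, 6]]))
print(preds)  # [3, 7, 11]
```

Inside a FastAPI endpoint the same `await asyncio.gather(...)` shape applies, with the event loop provided by the ASGI server instead of `asyncio.run`.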

Posted 1 month ago

Apply

3.0 years

3 - 4 Lacs

Mohali

Remote

Job Title: Pre-Sales Technical Business Analyst (AI/ML & MERN Stack)
Location: [Your Location / Remote]
Job Type: Full-time | Pre-Sales | Technical Consulting

About the Role:

We are seeking a dynamic Pre-Sales Technical Business Analyst with a strong foundation in AI/ML solutions, MERN stack technologies, and API integration. This role bridges the gap between clients' business requirements and our technical solutions, playing a pivotal role in shaping proposals, leading product demos, and translating client needs into technical documentation and strategic solutions.

Key Responsibilities:

Client Engagement: Collaborate with the sales team to understand client requirements, pain points, and objectives. Participate in discovery calls, solution walkthroughs, and RFP/RFI responses.
Solution Design & Technical Analysis: Analyze and document business needs, converting them into detailed technical requirements. Propose architectural solutions using AI/ML models and the MERN stack (MongoDB, Express.js, React.js, Node.js). Provide input on data pipelines, model training, and AI workflows where needed.
Technical Presentations & Demos: Prepare and deliver compelling demos and presentations for clients. Act as a technical expert during pre-sales discussions to communicate the value of proposed solutions.
Documentation & Proposal Support: Draft technical sections of proposals, SoWs, and functional specs. Create user flows, diagrams, and system interaction documents.
Collaboration: Work closely with engineering, product, and delivery teams to ensure alignment between business goals and technical feasibility. Conduct feasibility analysis and risk assessments on proposed features or integrations.

Required Skills & Experience:

3+ years in a Business Analyst or Pre-Sales Technical Consultant role.
Proven experience in AI/ML workflows (understanding of the ML lifecycle, model deployment, data prep).
Strong technical knowledge of the MERN stack, including RESTful APIs, database schema design, and frontend/backend integration.
Solid understanding of API design, third-party integrations, and system interoperability.
Ability to translate complex technical concepts into simple business language.
Hands-on experience with documentation tools like Swagger/Postman for API analysis.
Proficient in writing user stories, business cases, and technical specifications.

Preferred Qualifications:

Exposure to cloud platforms (AWS, Azure, GCP) and ML platforms (SageMaker, Vertex AI, etc.).
Experience with Agile/Scrum methodologies.
Familiarity with AI use cases like recommendation systems, NLP, predictive analytics.
Experience with data visualization tools or BI platforms is a plus.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹35,000.00 per month
Schedule: Day shift, fixed shift, Monday to Friday
Work Location: In person

Posted 1 month ago

Apply


0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Minimum of 4+ years of experience in AI-based application development.
Fine-tune pre-existing models to improve performance and accuracy.
Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks, and familiarity with APIs such as OpenAI or Vertex AI.
Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT).
Implement frameworks such as LangChain, Anthropic's Constitutional AI, OpenAI's APIs, Hugging Face, and prompt-engineering techniques to build robust and scalable AI applications.
Evaluate and analyze RAG solutions, and utilise best-in-class LLMs to define customer-experience solutions (fine-tune large language models).
Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others.
Strong understanding of and experience with open-source multimodal LLMs to customize and create solutions.
Explore and implement cutting-edge techniques such as few-shot learning, reinforcement learning, multi-task learning, and transfer learning for AI model training and fine-tuning.
Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib.
Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques.
Proficiency in Python, with the ability to get hands-on with coding at a deep level.
Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems.
Ability to write optimized, high-performing queries on relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB, Cassandra).
Enthusiasm for continuous learning and professional development in AI and related technologies.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Knowledge of cloud services such as AWS, Google Cloud, or Azure.
Proficiency with version control systems, especially Git.
Familiarity with data-preprocessing techniques and pipeline development for AI model training.
Experience deploying models with Docker and Kubernetes.
Experience with AWS Bedrock and SageMaker is a plus.
Strong problem-solving skills, with the ability to translate complex business problems into AI solutions.
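The "evaluate and analyze RAG solutions" requirement in the listing above centers on one core loop: retrieve the passages most relevant to a query, then assemble them into an LLM prompt. A minimal stdlib-only sketch, using simple term overlap in place of a real embedding model; the corpus, scoring function, and prompt template are illustrative assumptions:

```python
# Toy retrieval step of a retrieval-augmented generation (RAG) pipeline.
# Real systems score with embeddings in a vector DB; term overlap is used
# here only to keep the sketch dependency-free.

def tokenize(text):
    return set(text.lower().split())

def top_k(query, corpus, k=2):
    """Rank documents by how many query terms they share."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Assemble retrieved context plus the question into one LLM prompt."""
    context = "\n".join(top_k(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests require the original receipt.",
]
prompt = build_prompt("how long do refunds take", corpus)
print(prompt)
```

Evaluating a RAG solution then means scoring both halves separately: retrieval quality (did `top_k` surface the right passages?) and generation quality (did the LLM stay grounded in the retrieved context?).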

Posted 1 month ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

As a trusted global transformation partner, Welocalize accelerates the global business journey by enabling brands and companies to reach, engage, and grow international audiences. Welocalize delivers multilingual content transformation services in translation, localization, and adaptation for over 250 languages with a growing network of over 400,000 in-country linguistic resources. Driving innovation in language services, Welocalize delivers high-quality training data transformation solutions for NLP-enabled machine learning by blending technology and human intelligence to collect, annotate, and evaluate all content types. Our team works across locations in North America, Europe, and Asia, serving our global clients in the markets that matter to them. www.welocalize.com

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Job Reference:

Role Summary:
The Machine Learning R&D Engineer role is responsible for the design, development, and implementation of machine learning solutions to serve our organization. This includes ownership or oversight of projects from conception to deployment with appropriate cloud services. The role also includes responsibility for following best practices with which to optimize and measure the performance of our models and algorithms against business goals.

Tasks and Responsibilities:
Machine learning model research and development: design, develop, and deploy machine learning models for localization and business workflow processes, including machine translation and quality assurance.
Utilize appropriate metrics to evaluate model performance and iterate accordingly.
Ensure code quality: write robust, well-documented, and structured Python code.
Define and design solutions to machine learning problems.
Work closely with cross-functional teams to understand business requirements and design solutions that meet those needs.
Explain complex technical concepts clearly to non-technical stakeholders.
Mentorship: guide junior team members and contribute to a collaborative team environment.

Success indicators of a Machine Learning R&D Engineer:
Effective model development: success is evident when the models developed are accurate, efficient, and align with project requirements.
Positive team collaboration: demonstrated ability to collaborate effectively with various teams and stakeholders, contributing positively to project outcomes.
Continuous learning and improvement: a commitment to continuous learning and applying new techniques to improve existing models and processes.
Clear communication: ability to articulate findings, challenges, and insights to a range of stakeholders, ensuring understanding and appropriate action.

Skills and Knowledge:
Excellent, in-depth understanding of machine learning concepts and methodologies, including supervised and unsupervised learning, deep learning, and classification.
Hands-on experience with natural language processing (NLP) techniques and tools.
Ability to write robust, production-grade code in Python.
Excellent communication and documentation skills; able to explain complex technical concepts to non-technical stakeholders.
Experience taking ownership of projects from conception to deployment; ability to transform business needs into solutions.

Nice to have:
Experience using Large Language Models in production.
High proficiency with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn.
Hands-on experience with AWS technologies including EC2, S3, and other deployment strategies.
Experience with SNS and SageMaker a plus.
Experience with ML management technologies and deployment techniques, such as AWS ML offerings, Docker, GPU deployments, etc.

Education and Experience:
Bachelor's degree in Computer Science, AI/ML, or a related field (Master's/PhD preferred).
6+ years of experience in AI/ML research and development.
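For the machine-translation work this role covers, "utilize appropriate metrics" usually means a BLEU-style n-gram overlap score. A simplified, stdlib-only sketch of clipped unigram/bigram precision; real BLEU adds higher-order n-grams, multiple references, and a brevity penalty, and the example sentences are illustrative:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: share of candidate n-grams found in the reference."""
    cand_counts = ngrams(candidate.split(), n)
    ref_counts = ngrams(reference.split(), n)
    overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total else 0.0

ref = "the cat sat on the mat"
hyp = "the cat sat on a mat"   # MT output with one wrong word
p1 = ngram_precision(hyp, ref, 1)  # unigram precision: 5/6
p2 = ngram_precision(hyp, ref, 2)  # bigram precision: 3/5
print(p1, p2)
```

Tracking such scores per model version is what makes the "iterate accordingly" part of the responsibility concrete.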

Posted 1 month ago

Apply

15.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Job description
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary:
We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficient in REST API design using FastAPI and in securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
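The vector-database responsibility in this listing ("semantic search, embedding stores, and LLM contexts") reduces to nearest-neighbour search over embedding vectors. A stdlib-only sketch of cosine-similarity top-k retrieval; the toy 3-dimensional vectors stand in for real model embeddings, and production systems use a FAISS/ChromaDB/Weaviate index rather than this linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_search(query_vec, store, k=2):
    """Linear-scan top-k over an embedding store of (doc_id, vector) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-D "embeddings"; real ones come from a sentence-embedding model.
store = [
    ("refund_policy", [0.9, 0.1, 0.0]),
    ("holiday_hours", [0.0, 0.2, 0.9]),
    ("returns_howto", [0.8, 0.3, 0.1]),
]
query = [1.0, 0.0, 0.0]  # embedding of e.g. "how do refunds work?"
print(semantic_search(query, store))  # → ['refund_policy', 'returns_howto']
```

The retrieved document IDs are what an LLM-facing service would then fetch and inject into the prompt as context.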

Posted 1 month ago

Apply

5.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

Job Type: Part-Time (Hourly Basis)

Job Summary:
We are seeking a highly skilled and motivated AI Trainer with expertise in Artificial Intelligence, Natural Language Processing (NLP), Generative AI, and AWS AI Services. The trainer will be responsible for delivering structured training sessions to students, helping them gain conceptual and practical knowledge of modern AI concepts and tools.

Key Responsibilities:
1. Training Delivery
- Design and deliver comprehensive training modules on:
  - Introduction to AI (history, key concepts, applications, ethics).
  - AWS AI services (e.g., Amazon SageMaker, Comprehend, Rekognition, Lex, Polly).
  - Natural Language Processing (NLP) using AWS and open-source libraries.
  - AI search techniques and rule-based systems.
  - Introduction to Generative AI (foundation models, prompt engineering, LLMs on AWS).
- Conduct interactive lectures, hands-on labs, and assessments.
- Adapt training delivery to suit learners of varying technical backgrounds.
2. Hands-on Projects and Labs
- Guide learners through real-world projects using:
  - AWS AI/ML tools (e.g., SageMaker, Bedrock, Amazon Kendra).
  - NLP libraries (e.g., NLTK, spaCy, Hugging Face Transformers).
  - Search and rule-based AI techniques.
- Support practical implementation and debugging of projects.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience: proven experience (2–5 years) in teaching or working with:
- AI and ML frameworks
- AWS cloud services
- NLP and Generative AI tools
Certification in AWS (e.g., AWS Certified Machine Learning – Specialty) is a big plus.

Technical Skills:
- Proficient in Python and related libraries (NumPy, Pandas, scikit-learn, etc.)
- Strong understanding of NLP (tokenization, entity recognition, sentiment analysis)
- Familiarity with rule-based systems and AI search algorithms (DFS, BFS, A*, etc.)
- Experience with LLMs, prompt engineering, and tools like Amazon Bedrock
- Cloud experience on the AWS AI/ML stack, including:
  - Amazon SageMaker
  - Amazon Comprehend
  - Amazon Lex and Polly
  - AWS Bedrock (for generative AI)

Desirable:
- Prior experience in educational institutions or corporate training
- Contribution to open source or AI research
- Knowledge of ethical AI, fairness, and bias-mitigation techniques
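A training module on "AI search algorithms (DFS, BFS, A*)" typically starts from a worked example. A stdlib-only BFS shortest-path sketch on a small graph; the graph and node names are illustrative classroom material, and A* extends the same loop by ordering the frontier with a heuristic via `heapq`:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Breadth-first search: returns a fewest-edges path, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()   # FIFO order is what makes this BFS
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Toy state graph for the classroom example.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
}
print(bfs_shortest_path(graph, "A", "E"))  # → ['A', 'C', 'E']
```

Swapping the `deque` for a stack turns this into DFS; swapping it for a priority queue keyed on path-cost-plus-heuristic turns it into A*.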

Posted 1 month ago

Apply