7.0 - 10.0 years
0 Lacs
Chandigarh
On-site
bebo Technologies is a leading complete software solution provider. bebo stands for 'be extension be offshore'. We are a business partner of QASource, Inc., USA [www.QASource.com]. We offer outstanding services in the areas of software development, sustenance engineering, quality assurance and product support. bebo is dedicated to providing high-caliber offshore software services and solutions. Our goal is to 'Deliver in time-every time'. For more details visit our website: www.bebotechnologies.com. Take a 360° tour of our bebo premises via the link below: https://www.youtube.com/watch?v=S1Bgm07dPmM Key Required Skills: Bachelor's or Master’s degree in Computer Science, Data Science, or related field. 7–10 years of industry experience, with at least 5 years in machine learning roles. Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn. Experience with distributed training, model optimization (quantization, pruning), and inference at scale. Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML. Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam. Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay). Excellent problem-solving, communication, and documentation skills.
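For a sense of the model-optimization work this role names (quantization, pruning), here is a minimal, generic post-training dynamic quantization sketch in PyTorch; the toy model is an illustration only, not anything from bebo or QASource.

```python
# Hypothetical example: post-training dynamic quantization of a small PyTorch model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Dynamic quantization converts weights of the listed module types to int8,
# which typically shrinks the model and speeds up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```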
Posted 15 hours ago
5.0 years
4 - 8 Lacs
Hyderābād
On-site
Job Information Date Opened 07/31/2025 Job Type Full time Work Experience 5+ years Industry Technology City Hyderabad State/Province Telangana Country India Zip/Postal Code 500032 Job Description About Kanerika: Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth. Awards and Recognitions Kanerika has won several awards over the years, including: CMMI Level 3 Appraised in 2024. Best Place to Work 2022 & 2023 by Great Place to Work®. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA today. NASSCOM Emerge 50 Award in 2014. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture. Recognized for ISO 27701, 27001, SOC2, and GDPR compliances. Featured as Top Data Analytics Services Provider by GoodFirms. Working for us Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees. Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika. Locations We are located in Austin (USA), Singapore, Hyderabad, Indore and Ahmedabad (India). Job Location: Hyderabad, Indore and Ahmedabad (India) Requirements Key Responsibilities: Lead the design and development of AI-driven applications, particularly focusing on RAG-based chatbot solutions. Architect robust solutions leveraging Python and Java to ensure scalability, reliability, and maintainability. Deploy, manage, and scale AI applications using AWS cloud infrastructure, optimizing performance and resource utilization. Collaborate closely with cross-functional teams to understand requirements, define project scopes, and deliver solutions effectively. Mentor team members, providing guidance on best practices in software development, AI methodologies, and cloud deployments. Ensure solutions meet quality standards, including thorough testing, debugging, performance tuning, and documentation. Continuously research emerging AI technologies and methodologies to incorporate best practices and innovation into our products. - Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, Mathematics, Statistics or related fields. 
At least 5 years of professional experience in AI/Machine Learning engineering. Strong programming skills in Python and Java. Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications. Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway. Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations. Experience with leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes. AWS certifications (e.g., AWS Solutions Architect, AWS Machine Learning Specialty). Familiarity with popular AI/ML frameworks and libraries such as Hugging Face Transformers, TensorFlow, PyTorch, LangChain, or similar. Experience in Agile development methodologies. Excellent communication skills, capable of conveying complex technical concepts clearly and effectively. Strong analytical and problem-solving capabilities, with the ability to navigate ambiguous technical challenges. Benefits Employee Benefits 1. Culture: a. Open Door Policy: Encourages open communication and accessibility to management. b. Open Office Floor Plan: Fosters a collaborative and interactive work environment. c. Flexible Working Hours: Allows employees to have flexibility in their work schedules. d. Employee Referral Bonus: Rewards employees for referring qualified candidates. e. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback. 2. Inclusivity and Diversity: a. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce. b. Mandatory POSH training: Promotes a safe and respectful work environment. 3. Health Insurance and Wellness Benefits: a. GMC and Term Insurance: Offers medical coverage and financial protection. b. Health Insurance: Provides coverage for medical expenses. c. Disability Insurance: Offers financial support in case of disability. 4. Child Care & Parental Leave Benefits: a. Company-sponsored family events: Creates opportunities for employees and their families to bond. b. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child. c. Family Medical Leave: Offers leave for employees to take care of family members' medical needs. 5. Perks and Time-Off Benefits: a. Company-sponsored outings: Organizes recreational activities for employees. b. Gratuity: Provides a monetary benefit as a token of appreciation. c. Provident Fund: Helps employees save for retirement. d. Generous PTO: Offers more than the industry standard for paid time off. e. Paid sick days: Allows employees to take paid time off when they are unwell. f. Paid holidays: Gives employees paid time off for designated holidays. g. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one. 6. Professional Development Benefits: a. L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development. b. Mentorship Program: Offers guidance and support from experienced professionals. c. Job Training: Provides training to enhance job-related skills. d. Professional Certification Reimbursements: Assists employees in obtaining professional certifications. e. Promote from Within: Encourages internal growth and advancement opportunities.
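To make the RAG-based chatbot work described in this posting concrete, here is a schematic retrieval-then-generate loop in plain Python; embed() and generate() are hypothetical stand-ins for an embedding model and an LLM endpoint (for example on SageMaker or Bedrock), not APIs from Kanerika.

```python
# Schematic RAG retrieval loop with placeholder embedding and generation functions.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a unit-length embedding vector for `text`."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call."""
    return f"[answer based on prompt of {len(prompt)} chars]"

documents = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
    "Premium plans include a dedicated account manager.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    scores = doc_vectors @ q                 # cosine similarity (vectors are unit length)
    top = np.argsort(scores)[::-1][:k]       # indices of the k most similar documents
    context = "\n".join(documents[i] for i in top)
    prompt = f"Use only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("How long do refunds take?"))
```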
Posted 16 hours ago
10.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Senior Software Engineer-MLOps We are looking for a highly skilled Senior Software Engineer – MLOps with deep expertise in building and managing production-grade ML pipelines in AWS and Azure cloud environments. This role requires a strong foundation in software engineering, DevOps principles, and ML model lifecycle automation to enable reliable and scalable machine learning operations across the organization. Key Responsibilities include: Design and build robust MLOps pipelines for model training, validation, deployment, and monitoring Automate workflows using CI/CD tools such as GitLab CI/CD, Azure DevOps, Jenkins, or Argo Workflows Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Design secure and cost-efficient ML architecture leveraging cloud-native services Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Implement cost optimization and performance tuning for cloud workloads Package ML models using Docker, and orchestrate deployments with Kubernetes on EKS/AKS Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Integrate observability tools for model performance, drift detection, and lineage tracking (e.g., Fiddler, MLflow, Prometheus, Grafana, Azure Monitor, CloudWatch) Ensure model reproducibility, versioning, and compliance with audit and regulatory requirements Collaborate with data scientists, software engineers, DevOps, and cloud architects to operationalize AI/ML use cases Mentor junior MLOps engineers and evangelize MLOps best practices across teams Required Qualification: Bachelor's/Master’s in Computer Science, Engineering, or related discipline 10 years in DevOps, with 2+ years in MLOps. Proficient with MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Experience with feature stores (e.g., Feast), model registries, and experiment tracking. Proficiency in DevOps and MLOps automation using CloudFormation/Terraform/Bicep. Requisition ID: 610750 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
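One common way to quantify the data drift this posting asks engineers to monitor is the Population Stability Index (PSI) between a training sample and recent production data. The sketch below is a generic numpy illustration; the 0.2 threshold is a rule of thumb, not anything specified by the posting.

```python
# PSI as a simple drift signal between training and live feature distributions.
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between two 1-D samples, using bin edges from the expected sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a small epsilon keeps empty bins from breaking the log.
    eps = 1e-6
    e_pct = e_counts / max(e_counts.sum(), 1) + eps
    a_pct = a_counts / max(a_counts.sum(), 1) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
live_feature = rng.normal(0.3, 1.2, 10_000)   # shifted distribution

psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f}")  # values above ~0.2 are often treated as significant drift
```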
Posted 16 hours ago
7.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Software Engineer-MLOps We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure Key Responsibilities include: Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment Support the automation of model lifecycle tasks including versioning, packaging, and monitoring Build and manage ML workloads on AWS (SageMaker Unified studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations). Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices. Required Qualification: Bachelor's/Master’s in Computer Science, Engineering, or related discipline 7 years in Devops, with 2+ years in MLOps. Good Understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Proficient in Python and familiar with bash scripting Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI. Requisition ID: 610751 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
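This role mentions pytest-based testing for ML models; as a rough illustration (toy data, an arbitrary 0.80 threshold, and scikit-learn as stand-ins), checks like the following can gate a model before deployment.

```python
# Sketch of pytest-style quality gates for a candidate model.
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

@pytest.fixture(scope="module")
def trained_model_and_data():
    X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
    return model, X_te, y_te

def test_accuracy_above_threshold(trained_model_and_data):
    model, X_te, y_te = trained_model_and_data
    assert accuracy_score(y_te, model.predict(X_te)) >= 0.80

def test_prediction_shape_and_range(trained_model_and_data):
    model, X_te, _ = trained_model_and_data
    proba = model.predict_proba(X_te)
    assert proba.shape == (len(X_te), 2)
    assert np.all((proba >= 0) & (proba <= 1))
```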
Posted 16 hours ago
8.0 years
4 - 6 Lacs
Noida
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer. Analytics at Innovaccer Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success. About the Role Data is the foundation of our innovation. We are seeking a Manager, Data Science with expertise in NLP and Generative AI to lead the development of cutting-edge AI-driven solutions in healthcare. This role requires a deep understanding of healthcare data and the ability to design and implement advanced language models that extract insights, automate workflows, and enhance clinical decision-making. We’re looking for a visionary leader who can define and build the next generation of AI-driven tools, leveraging LLMs, deep learning, and predictive analytics to personalize care based on patients’ clinical and behavioral history. If you’re passionate about pushing the boundaries of AI in healthcare, we’d love to hear from you! A Day in the Life Team Leadership & Development: Build, mentor, and manage a team of data scientists and machine learning engineers. Foster a culture of collaboration, innovation, and technical excellence. Roadmap Execution: Define and execute on the quarterly AI/ML roadmap, setting clear goals, priorities, and deliverables for the team. Work with business leaders and customers to understand their pain points and build large-scale solutions for them. Define technical architecture to productize Innovaccer’s machine-learning algorithms and take them to market through partnerships with different organizations. Work with our data platform and applications teams to help them successfully integrate data science capabilities and algorithms into their products/workflows. Project & Stakeholder Management: Work closely with cross-functional teams, including product managers, engineers, and business leaders, to align AI/ML initiatives with company objectives. What You Need Master's in Computer Science, Computer Engineering or other relevant fields (PhD preferred) 8+ years of experience in Data Science (healthcare experience will be a plus) Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines. Demonstrable experience deploying deep learning models in production at scale with iterative improvements; requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow. Strong hands-on experience building GenAI applications - LLM-based workflows along with optimization techniques; knowledge of implementing agentic workflows is a plus. Keen interest in research and staying updated with key advancements in AI and ML in the industry. Patents/publications in any area of AI/ML are a great add-on. Hands-on experience with at least one ML platform among Databricks, Azure ML, or SageMaker. Strong written and spoken communication skills We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children* : Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where and how we work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer : Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
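To illustrate the kind of NLP building block the role above works with, here is a minimal Hugging Face Transformers sketch; the library's default sentiment checkpoint is used purely as a placeholder, and a real healthcare deployment would use a domain-specific model and data handling.

```python
# Minimal Transformers pipeline sketch for a text-classification-style NLP task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model on first run

notes = [
    "Patient reports significant improvement after the new medication.",
    "Symptoms have worsened and the patient is not responding to treatment.",
]
for note, result in zip(notes, classifier(notes)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {note}")
```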
Posted 16 hours ago
0 years
0 Lacs
India
On-site
The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end-users. Key Prerequisites: Experience in AI and Generative AI development. Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP). Experience building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis. Experience implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT). Experience in data ingestion and cleaning. Feature engineering and data engineering: experience designing and implementing data pipelines for ingesting, processing, and storing large datasets. Model training and optimization: exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face. Exposure to optimizing models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI). Hands-on experience monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall. AI tools and platform expertise: OpenAI, Hugging Face, MLOps tools, and Generative AI-specific tools and libraries for innovative applications. Technical Skills 1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia. 2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face. 3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E). 4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes). 5. Solid understanding of databases (SQL, NoSQL) and big data processing tools (Spark, Hadoop).
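For the evaluation metrics this posting names (accuracy, precision, recall), here is a small scikit-learn illustration on made-up labels; the numbers are invented for the example.

```python
# Computing common classification metrics on toy predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # 8 correct out of 10 = 0.8
print("precision:", precision_score(y_true, y_pred))  # 4 true positives / 5 predicted positives
print("recall   :", recall_score(y_true, y_pred))     # 4 true positives / 5 actual positives
print("f1       :", f1_score(y_true, y_pred))
```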
Posted 16 hours ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms. Key Responsibilities HOW YOU WILL CONTRIBUTE AND WHAT YOU WILL LEARN Design end-to-end AI architecture tailored to telecom services business functions (e.g., Service assurance, Orchestration and Fulfilment). Define data strategy and AI workflows including Inventory Model, ETL, model training, deployment, and monitoring. Evaluate and select AI platforms, tools, and frameworks suited for telecom-scale workloads for development and testing of Inventory services solutions Work closely with telecom network experts and Architects to align AI initiatives with business goals. Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments. Mentor AI developers Key Skills And Experience You have: 10+ years' experience in AI/ML design and deployment with a Graduation or equivalent degree. Practical Experience on AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL. Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow), and cloud-native deployment (AWS Sagemaker, Azure ML). Proficient in programming (Python, Java) and DevOps/MLOps best practices. It will be nice if you had: Worked with any of the LLM models (llama family) and LLM agent frameworks like LangChain / CrewAI / AutoGen Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts. Excellent communication and stakeholder management skills. About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere Gender-Equality Index by Bloomberg Workplace Pride Global Benchmark At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. 
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
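The Nokia role above lists MLflow among its orchestration and tracking tools; as a rough, generic sketch (the experiment name, model, and data are placeholders, not Nokia assets), experiment tracking with MLflow can look like this.

```python
# Minimal MLflow experiment-tracking sketch: log params, a metric, and the model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("inventory-anomaly-demo")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=15, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```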
Posted 17 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Senior Machine Learning Engineer (MLE + ReactJS/AngularJS) Location: Hyderabad (Hybrid Mode) Availability: Immediate joiners or 30 days' notice preferred Role Overview We are looking for a dynamic professional to join our team as a Senior Machine Learning Engineer with Machine Learning and ReactJS/AngularJS expertise. This is a hybrid role that involves front-end development and machine learning/data science responsibilities. The ideal candidate will bridge the gap between intuitive UI design and intelligent ML-powered backend systems, primarily in a banking domain environment. Key Responsibilities Design and develop responsive, user-friendly web interfaces. Collaborate with data scientists to build, integrate, and deploy ML models into production environments. Own the complete development and deployment lifecycle for ML-powered features, including UI integration. Build and maintain scalable, cloud-based solutions using AWS services. Work within an agile setup and coordinate closely with both engineering and data science teams. Preferred Candidate Profile Proven expertise in: front-end development using JavaScript/React (or similar frameworks); machine learning using Python, Scikit-learn, TensorFlow, or PyTorch; deployment pipelines, Docker, and CI/CD practices; and cloud services, especially AWS (Lambda, S3, SageMaker preferred).
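A rough sketch of the ML side of a role like this: train a small scikit-learn pipeline and serialize it so a separate serving layer (for example a Lambda or SageMaker endpoint behind a React UI) could load it. Dataset, path, and model choice are placeholders.

```python
# Train, persist, and reload a simple scikit-learn pipeline with joblib.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=2_000)),
]).fit(X_tr, y_tr)

print("holdout accuracy:", pipe.score(X_te, y_te))
joblib.dump(pipe, "model.joblib")          # artifact a serving layer would load
loaded = joblib.load("model.joblib")
print("reloaded prediction:", loaded.predict(X_te[:1]))
```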
Posted 17 hours ago
12.0 years
0 Lacs
India
On-site
We are seeking a highly skilled and experienced AWS Architect with a strong background in Data Engineering and expertise in Generative AI. In this pivotal role, you will be responsible for designing, building, and optimizing scalable, secure, and cost-effective data solutions that leverage the power of AWS services, with a particular focus on integrating and managing Generative AI capabilities. The ideal candidate will possess a deep understanding of data architecture principles, big data technologies, and the latest advancements in Generative AI, including Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). You will work closely with data scientists, machine learning engineers, and business stakeholders to translate complex requirements into robust and innovative solutions on the AWS platform. Responsibilities: • Architect and Design: Lead the design and architecture of end-to-end data platforms and pipelines on AWS, incorporating best practices for scalability, reliability, security, and cost optimization. • Generative AI Integration: Architect and implement Generative AI solutions using AWS services like Amazon Bedrock, Amazon SageMaker, Amazon Q, and other relevant technologies. This includes designing RAG architectures, prompt engineering strategies, and fine-tuning models with proprietary data (knowledge base). • Data Engineering Expertise: Design, build, and optimize ETL/ELT processes for large-scale data ingestion, transformation, and storage using AWS services such as AWS Glue, Amazon S3, Amazon Redshift, Amazon Athena, Amazon EKS and Amazon EMR. • Data Analytics: Design, build, and optimize analytical solutions for large-scale data ingestion, analytics and insights using AWS services such as AWS Quicksight • Data Governance and Security: Implement robust data governance, data quality, and security measures, ensuring compliance with relevant regulations and industry best practices for both traditional data and Generative AI applications. • Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and Generative AI workloads, ensuring efficient resource utilization and optimal response times. • Technical Leadership: Act as a subject matter expert and provide technical guidance to data engineers, data scientists, and other team members. Mentor and educate on AWS data and Generative AI best practices. • Collaboration: Work closely with cross-functional teams, including product owners, data scientists, and business analysts, to understand requirements and deliver impactful solutions. • Innovation and Research: Stay up-to-date with the latest AWS services, data engineering trends, and advancements in Generative AI, evaluating and recommending new technologies to enhance our capabilities. • Documentation: Create comprehensive technical documentation, including architectural diagrams, design specifications, and operational procedures. • Cost Management: Monitor and optimize AWS infrastructure costs related to data and Generative AI workloads. Required Skills and Qualifications: • 12+ years of experience in data engineering, data warehousing, or big data architecture. • 5+ years of experience in an AWS Architect role, specifically with a focus on data. • Proven experience designing and implementing scalable data solutions on AWS. 
• Strong hands-on experience with core AWS data services, including: o Data Storage: Amazon S3, Amazon Redshift, Amazon DynamoDB, Amazon RDS o Data Processing: AWS Glue, Amazon EMR, Amazon EKS, AWS Lambda, Informatica o Data Analytic: Amazon Quicksight, Amazon Athena, Tableau o Data Streaming: Amazon Kinesis, AWS MSK o Data Lake: AWS Lake Formation • Strong competencies in Generative AI, including: o Experience with Large Language Models (LLMs) and Foundation Models (FMs). o Hands-on experience with Amazon Bedrock (including model customization, agents, and orchestrations). o Understanding and experience with Retrieval Augmented Generation (RAG) architectures and vector databases (e.g., Amazon OpenSearch Service for vector indexing). o Experience with prompt engineering and optimizing model responses. o Familiarity with Amazon SageMaker for building, training, and deploying custom ML/Generative AI models. o Knowledge of Amazon Q for business-specific Generative AI applications. • Proficiency in programming languages such as Python (essential), SQL, and potentially Scala or Java. • Experience with MLOps/GenAIOps principles and tools for deploying and managing Generative AI models in production. • Solid understanding of data modeling, data warehousing concepts, and data lake architectures. • Experience with CI/CD pipelines and DevOps practices on AWS. • Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences. • Strong problem-solving and analytical abilities. Preferred Qualifications: • AWS Certified Solutions Architect – Professional or AWS Certified Data Engineer – Associate/Specialty. • Experience with other Generative AI frameworks (e.g., LangChain) or open-source LLMs. • Familiarity with containerization technologies like Docker and Kubernetes (Amazon EKS). • Experience with data transformation tools like Informatica, Matillion • Experience with data visualization tools (e.g., Amazon QuickSight, Tableau, Power BI). • Knowledge of data governance tools like Amazon DataZone. • Experience in a highly regulated industry (e.g., Financial Services, Healthcare).
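As a hedged illustration of one analytics piece named above (Amazon Athena), here is a boto3 sketch that submits a query and prints the results; the database, table, result bucket, and region are hypothetical, and production code would add proper error handling and backoff.

```python
# Run an Athena query from Python with boto3 and print the result rows.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

start = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_demo"},                      # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},    # hypothetical bucket
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes (simplified for the sketch).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```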
Posted 19 hours ago
2.0 years
0 Lacs
India
On-site
The Role We are hiring an AI/ML Developer (India), to join our India team, in support of a large global client! You will be responsible for developing, deploying, and maintaining AI and machine learning models. Your expertise in Python, cloud services, databases, and big data technologies will be instrumental in creating scalable and efficient AI applications. What You Will Be Doing •Develop, train, and deploy machine learning models for predictive analytics, classification, and clustering. •Implement AI-based solutions using frameworks such as TensorFlow, PyTorch, and Scikit-learn. •Work with cloud platforms including AWS (SageMaker, Lambda, S3), Azure, and Google Cloud (Vertex AI). •Integrate and fine-tune Hugging Face transformer models (e.g., BERT, GPT) for NLP tasks such as text classification, summarization, and sentiment analysis. •Develop AI automation solutions, including chatbot implementations using Microsoft Teams and Azure AI. •Work with big data technologies such as Apache Spark and Snowflake for large-scale data processing and analytics. •Design and optimize ETL pipelines for data quality management, transformation, and validation. •Utilize SQL, MySQL, PostgreSQL, and MongoDB for database management and query optimization. •Create interactive data visualizations using Tableau and Power BI to drive business insights. •Work with Large Language Models (LLMs) for AI-driven applications, including fine-tuning, training, and deploying model for conversational AI, text generation, and summarization. •Develop and implement Agentic AI systems, enabling autonomous decision-making AI agents that can adapt, learn, and optimize tasks in real-time. What You Bring Along •2+ years of experience applying AI to practical uses. •Strong programming skills in Python, SQL, and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn. •Knowledge of basic algorithms and object-oriented and functional design principles •Proficiency in using data analytics libraries like Pandas, NumPy, Matplotlib, and Seaborn. •Hands-on experience with cloud platforms such as AWS, Azure, and Google Cloud. •Experience with big data processing using Apache Spark and Snowflake. •Knowledge of NLP and AI model implementations using Hugging Face and cloud-based AI services. •Strong understanding of database management, query optimization, and data warehousing. •Experience with data visualization tools such as Tableau and Power BI. •Ability to work in a collaborative environment and adapt to new AI technologies. •Strong analytical and problem solving skills. Education: •Bachelor’s degree in computer science, Data Science, AI/ML, or a related field.
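To ground the ETL and data-quality work this posting describes, here is a toy pandas transform with lightweight validation checks; the column names and rules are invented for the example.

```python
# Toy ETL step: deduplicate, clean, and validate a small orders frame.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 2, 4, 5],
    "amount":   [120.0, -5.0, 250.0, None, 90.0],
    "country":  ["IN", "US", "US", "DE", "in"],
})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="order_id", keep="first")
    df = df.dropna(subset=["amount"])
    df = df[df["amount"] > 0]                     # drop negative/invalid amounts for this toy rule
    df["country"] = df["country"].str.upper()
    return df.reset_index(drop=True)

clean = transform(raw)

# Lightweight validation before loading downstream.
assert clean["order_id"].is_unique
assert (clean["amount"] > 0).all()
assert clean["country"].str.fullmatch(r"[A-Z]{2}").all()
print(clean)
```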
Posted 19 hours ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What Gramener offers you Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, career path, steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use Cloud Lead – Analytics & Data Products We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment. Roles and Responsibilities Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs. Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions. Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB. Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA). Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config. Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements. Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI. Skills and Qualifications: 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles. Hands-on expertise with the AWS ecosystem and tools listed above. Proficiency in scripting (e.g., Python, Bash) and infrastructure automation. Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate. Familiarity with data engineering and GenAI workflows is a plus. AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
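In the spirit of the scripting and infrastructure-automation skills listed above, here is a small boto3 audit sketch that flags S3 buckets missing a required tag; the tag key is an assumption, and valid AWS credentials are required.

```python
# Flag S3 buckets that lack a required "owner" tag.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
REQUIRED_TAG = "owner"  # assumed tagging convention for the example

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        tag_set = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        tags = {t["Key"]: t["Value"] for t in tag_set}
    except ClientError:
        tags = {}  # bucket has no tags at all
    if REQUIRED_TAG not in tags:
        print(f"non-compliant bucket: {name}")
```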
Posted 19 hours ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Software Engineer-MLOps We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure Key Responsibilities Include Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment Support the automation of model lifecycle tasks including versioning, packaging, and monitoring Build and manage ML workloads on AWS (SageMaker Unified studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations). Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices. Required Qualification Bachelor's/Master’s in Computer Science, Engineering, or related discipline 7 years in Devops, with 2+ years in MLOps. Good Understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Proficient in Python and familiar with bash scripting Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI. Requisition ID: 610751 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
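FastAPI appears in this role's tooling list; as a generic sketch (the "model" is a trivial placeholder function, not a real artifact), a minimal inference endpoint might look like this.

```python
# Minimal FastAPI service wrapping a placeholder scoring function.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-ml-service")

class Features(BaseModel):
    values: list[float]

def predict(values: list[float]) -> float:
    # Placeholder scoring logic standing in for a loaded model.
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict_endpoint(features: Features) -> dict:
    return {"score": predict(features.values)}

# Run locally with:  uvicorn app:app --reload   (assuming this file is saved as app.py)
```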
Posted 19 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities: Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink. Build scalable and automated MLOps pipelines for model training, validation, and deployment using AWS SageMaker and related services. Implement and manage Infrastructure as Code (IaC) using Terraform for AWS provisioning and maintenance. Collaborate with ML, Data Science, and DevOps teams to ensure reliable and efficient model deployment workflows. Optimize data storage and retrieval strategies for both structured and unstructured large-scale datasets. Integrate and transform data from multiple sources into data lakes and data warehouses. Monitor, troubleshoot, and improve performance of cloud-native data systems in a fast-paced production setup. Ensure compliance with data governance, privacy, and security standards across all data operations. Document data engineering workflows and architectural decisions for transparency and maintainability. Requirements 5+ Years of experience as Data Engineer or in similar role Proven experience in building data pipelines and streaming applications using Apache Kafka and Apache Flink. Strong ETL development skills, with deep understanding of data modeling and data architecture in large-scale environments. Hands-on experience with AWS services, including SageMaker, S3, Glue, Lambda, and CloudFormation or Terraform. Proficiency in Python and SQL; knowledge of Java is a plus, especially for streaming use cases. Strong grasp of MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines. Deep knowledge of IaC tools, particularly Terraform, for automating cloud infrastructure. Excellent analytical and problem-solving abilities, especially with regard to data processing and deployment issues. Agile mindset with experience working in fast-paced, iterative development environments. Strong communication and team collaboration skills.
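For the Kafka streaming work described above, here is a minimal consumer sketch using the kafka-python client (one of several possible clients); the topic name, broker address, and message shape are assumptions for illustration.

```python
# Consume JSON events from a Kafka topic with kafka-python.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                         # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="feature-builder-demo",
)

for message in consumer:
    event = message.value
    # In a real pipeline, enrichment/feature logic (or a Flink job) would run here.
    print(event.get("user_id"), event.get("event_type"))
```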
Posted 19 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Qualcomm India Private Limited Job Area Engineering Group, Engineering Group > Software Engineering General Summary As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces. Minimum Qualifications Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Senior Engineer: Job Title: Senior Machine Learning & Data Engineer We are looking for a highly skilled and experienced Machine Learning & Data Engineer to join our team. This hybrid role blends the responsibilities of a data engineer and a machine learning engineer, with a strong emphasis on Python development. You will be instrumental in designing scalable data pipelines, building and deploying ML/NLP models, and enabling data-driven decision-making across the organization. Key Responsibilities Data Engineering & Infrastructure Design and implement robust ETL pipelines and data integration workflows using SQL, NoSQL, and big data technologies (e.g., Spark, Hadoop). Optimize data storage and retrieval using relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra). Ensure data quality, validation, and governance across systems. Develop and maintain data models and documentation for data flows and architecture. Machine Learning & NLP Build, fine-tune, and deploy ML/NLP models using frameworks like TensorFlow, PyTorch, and Scikit-learn. Apply advanced NLP techniques including Transformers, BERT, and LLM fine-tuning. Implement Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, and vector databases (e.g., FAISS, Milvus). Operationalize ML models using APIs, model registries (e.g., Hugging Face), and cloud services (e.g., SageMaker, Azure ML). Python Development Develop scalable backend services using Python frameworks such as FastAPI, Flask, or Django. Automate data workflows and model training pipelines using Python libraries (e.g., Pandas, NumPy, SQLAlchemy). Collaborate with cross-functional teams to integrate ML solutions into production systems. Collaboration & Communication Work closely with data scientists, analysts, and software engineers in Agile/Scrum teams. Translate business requirements into technical solutions. Maintain clean, well-documented code and contribute to knowledge sharing. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Proven experience in both data engineering and machine learning roles. Strong Python programming skills and experience with modern Python libraries and frameworks. Deep understanding of ML/NLP concepts and practical experience with LLMs and RAG architectures. Proficiency in SQL and experience with both SQL and NoSQL databases. Experience with big data tools (e.g., Spark, PySpark) and cloud platforms (AWS, Azure). Familiarity with data visualization tools like Power BI or Tableau. 
Excellent problem-solving, communication, and collaboration skills. Engineer: Job Title: Automation Engineer Job Description We are seeking a skilled and experienced Automation Engineer to join our team. As a C#/Python Developer, you will play a pivotal role in developing and deploying advanced solutions to drive our Product Test automation. You will collaborate closely with Testers, product managers, and stakeholders to ensure the successful implementation and operation of Automation solutions. The ideal candidate will have a strong background in API development with C# and Python, with experience in deploying scalable solutions. Responsibilities Design, develop, and maintain core APIs using mainly C#. Collaborate with cross-functional teams to understand requirements and implement API solutions. Create and execute unit tests for APIs to ensure software quality. Identify, analyze, and troubleshoot issues in API development and testing. Continuously improve and optimize API development processes. Document API specifications, procedures, and results. Stay updated with the latest industry trends and technologies in API development. Requirements Bachelor's degree in Computer Science, Engineering, or related field. Proven experience in developing APIs and scripts/apps using C#. Knowledge of Python is a plus. Experience using Visual Studio for development. Experience in the wireless domain is a plus. Strong understanding of software testing principles and methodologies. Proficiency in the C# programming language. Experience with Test Automation tools and best practices. Familiarity with CI/CD pipelines and version control systems (e.g., Perforce). Excellent problem-solving skills and attention to detail. Strong communication and teamwork skills. Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.
If you would like more information about this role, please contact Qualcomm Careers.
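The Senior Machine Learning & Data Engineer role above references RAG pipelines backed by vector databases such as FAISS; here is a generic vector-search sketch with random vectors standing in for real embeddings (not code from Qualcomm).

```python
# Exact nearest-neighbour search over placeholder embeddings with FAISS.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1_000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)      # exact L2 search; IVF/HNSW indexes scale further
index.add(doc_embeddings)

query = rng.normal(size=(1, dim)).astype("float32")
distances, ids = index.search(query, 5)
print("nearest document ids:", ids[0])
print("distances:", np.round(distances[0], 3))
```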
Posted 20 hours ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We’re looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment. Key Responsibilities Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs. Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions. Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB. Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA). Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config. Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements. Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI. Requirements 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles. Strong hands-on expertise with the AWS ecosystem and tools listed above. Proficiency in scripting (e.g., Python, Bash) and infrastructure automation. Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate. Familiarity with data engineering and GenAI workflows is a plus. AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
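Since the role above mentions orchestration with Apache Airflow (MWAA), here is a minimal DAG sketch wiring an extract step to a load step; task names and logic are placeholders, and it assumes a recent Airflow 2.x release (2.4+ for the `schedule` argument).

```python
# Minimal two-task Airflow DAG: extract -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def load():
    print("write curated data to S3 / Redshift")

with DAG(
    dag_id="demo_data_product_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```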
Posted 21 hours ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks, and can translate complex business goals into innovative, AI-driven technical solutions. Primary Stack & Tools: Languages: Python, SQL, Bash ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock) MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI Data Tools: Snowflake, BigQuery, Spark, Airflow Key Responsibilities: Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search). Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases. Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use. Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps. Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance. Manage the technical roadmap, sprint cadence, and a team of 3–5 AI engineers; coach on best practices. Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards. Evaluate emerging GenAI & agentic tools; run proofs-of-concept and guide build-vs-buy decisions. Qualifications: 10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design. Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems. Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs. Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure. Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming). Familiarity with data governance, AI ethics, and security considerations in production-grade systems. Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
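To make the agentic, tool-augmented LLM pattern named in this posting concrete, here is a schematic, library-agnostic tool-calling loop of the kind frameworks like LangGraph or CrewAI formalize; call_llm() is a hypothetical stand-in that either requests a tool or returns a final answer.

```python
# Schematic agent loop: ask the "LLM", execute requested tools, feed results back.
import json

def get_weather(city: str) -> str:
    return f"28°C and clear in {city}"  # canned tool result for the demo

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder LLM: requests the weather tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "get_weather", "args": {"city": "Hyderabad"}}
    return {"type": "final", "content": "It is 28°C and clear in Hyderabad."}

def run_agent(user_query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "final":
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])          # execute the requested tool
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "stopped: step budget exhausted"

print(run_agent("What's the weather in Hyderabad right now?"))
```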
Posted 22 hours ago
5.0 years
0 Lacs
India
Remote
Position: AI Developer
Experience: 5+ Years
Time Zone: Europe (Remote)
Key Skills:
Languages/Cloud: Python, AWS Lambda, AWS Bedrock
Prompt Engineering: Advanced techniques (few-shot, zero-shot, CoT)
Generative AI: GPT, BERT, Hugging Face, AWS Bedrock
Agentic AI & RL: Basic frameworks, OpenAI Gym
Responsible AI: SHAP, LIME, fairness & privacy practices
Deployment & MLOps: AWS Lambda/SageMaker, MLflow, model versioning
Frameworks: PyTorch, TensorFlow (basic level)
Data & CI/CD: EDA, feature engineering, Docker, Git workflows
Responsibilities:
Design optimized prompts for LLMs
Build and deploy generative AI apps
Work on RL-based prototypes and ethical AI practices
Deploy using AWS services and support MLOps pipelines
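As an illustration of the few-shot prompting plus AWS Bedrock combination this role calls for, here is a small hedged sketch using boto3's Converse API. The region, model ID, and ticket examples are placeholders, and actual model access must be enabled in the target account.

```python
"""Few-shot prompt sent through Amazon Bedrock's Converse API (sketch only).

Assumes boto3 credentials and Bedrock model access are in place; the model ID
and region are placeholders and may differ per account.
"""
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

# Few-shot examples steer the model toward a fixed label format.
few_shot = (
    "Classify the support ticket sentiment as POSITIVE or NEGATIVE.\n"
    "Ticket: 'Great service, issue fixed in minutes.' -> POSITIVE\n"
    "Ticket: 'Still waiting after three days.' -> NEGATIVE\n"
    "Ticket: 'The agent was friendly and solved everything.' ->"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": few_shot}]}],
    inferenceConfig={"maxTokens": 10, "temperature": 0.0},
)

print(response["output"]["message"]["content"][0]["text"])
```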
Posted 22 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Airtel, we’re not just scaling connectivity—we’re redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. But behind every experience is an opportunity to make it smarter. We're looking for a Product Manager – AI to drive next-gen intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will leverage AI to power everything from our digital platforms to customer experience. You’ll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact—fast.
What You’ll Do
Lead and contribute to AI-Powered Product Strategy: Define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms.
Translate Business Problems into AI Opportunities: Partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation.
Build & Scale ML-Driven Products: Define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt.
Own Product Execution End-to-End: Drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI.
What You Need to be Successful
Influential Communication - Craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike—translating AI into business value.
Strategic Prioritisation - Balance business urgency with technical feasibility. You can decide what not to build, and defend those decisions with data and a narrative.
Systems Thinking - You see the big picture—how decisions in one area ripple across the business, tech stack, and user experience.
High Ownership & Accountability - Operate with a founder mindset. You don't wait for direction: you rally teams, remove blockers, deal with tough stakeholders, and drive outcomes.
Adaptability - You thrive in ambiguity and pivot quickly without losing sight of long-term vision—key in fast-moving digital organizations.
Skills You'll Need
AI / ML Fundamentals: Understanding of ML model types (supervised, unsupervised, reinforcement learning); common algorithms (linear/logistic regression, decision trees, clustering, neural networks); the model lifecycle (training, validation, testing, tuning, deployment, monitoring); understanding of LLMs, transformers, diffusion models, vector search, etc.; familiarity with GenAI product architecture (Retrieval-Augmented Generation (RAG), prompt tuning, fine-tuning); awareness of real-time personalization, recommendation systems, ranking algorithms, etc.
Data Fluency: Understanding of data pipelines; working knowledge of SQL and Python for analysis; understanding of data annotation, labeling, and versioning; ability to define data requirements and assess data readiness.
AI Product Development: Defining ML problem scope (classification vs. regression vs. ranking vs. generation); model evaluation metrics such as precision and recall (a short worked example follows this listing); A/B testing and online experimentation for ML-driven experiences.
ML Infrastructure Awareness: Know what it takes to ship ML systems and keep them running in production. Model deployment techniques: batch vs. real-time inference, APIs, model serving. Monitoring and drift detection: how to ensure models continue performing over time. Familiarity with ML platforms/tools (TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc.) at a product level. Understanding of the latency, cost, and resource implications of ML choices.
AI Ethics & Safety: We care deeply about our customers, their privacy, and compliance with regulation. Understand bias and fairness in models and how to detect and mitigate them; explainability and transparency and why they matter for user trust and regulation; privacy and security implications of sensitive or PII data in AI; alignment and guardrails in generative AI systems.
Preferred Qualifications
Experienced Machine Learning/Artificial Intelligence PMs. Experience building 0-1 products, scaled platforms/ecosystem products, or ecommerce. Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics; Master's degree in Business.
Why Airtel Digital?
Massive Scale: Your products will impact 400M+ users across sectors.
Real-World Relevance: Solve meaningful problems for our customers — protecting our customers, spam & fraud prevention, personalised experiences, connecting homes.
Agility Meets Ambition: Work like a startup with the resources of a telecom giant.
AI That Ships: We don’t just run experiments. We deploy models and measure real-world outcomes.
Leadership Access: Collaborate closely with CXOs and gain mentorship from India’s top product and tech leaders.
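The worked example referenced in the skills list above: computing precision, recall, and ROC AUC with scikit-learn. The labels and scores are made up purely for illustration.

```python
"""Tiny illustration of the evaluation metrics named above (synthetic data)."""
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Hypothetical ground truth and model outputs for a binary classifier.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.91, 0.20, 0.65, 0.40, 0.55, 0.10, 0.80, 0.30]
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print("precision:", precision_score(y_true, y_pred))  # of predicted positives, how many were right
print("recall:   ", recall_score(y_true, y_pred))     # of actual positives, how many were found
print("ROC AUC:  ", roc_auc_score(y_true, y_prob))    # threshold-free ranking quality
```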
Posted 22 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana
On-site
Job Information Date Opened 07/31/2025 Job Type Full time Work Experience 5+ years Industry Technology City Hyderabad State/Province Telangana Country India Zip/Postal Code 500032 Job Description About Kanerika: Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth. Awards and Recognitions Kanerika has won several awards over the years, including: CMMI Level 3 Appraised in 2024. Best Place to Work 2022 & 2023 by Great Place to Work®. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA today. NASSCOM Emerge 50 Award in 2014. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture. Recognized for ISO 27701, 27001, SOC2, and GDPR compliances. Featured as Top Data Analytics Services Provider by GoodFirms. Working for us Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees. Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika. Locations We are located in Austin (USA), Singapore, Hyderabad, Indore and Ahmedabad (India). Job Location: Hyderabad, Indore and Ahmedabad (India) Requirements Key Responsibilities: Lead the design and development of AI-driven applications, particularly focusing on RAG-based chatbot solutions. Architect robust solutions leveraging Python and Java to ensure scalability, reliability, and maintainability. Deploy, manage, and scale AI applications using AWS cloud infrastructure, optimizing performance and resource utilization. Collaborate closely with cross-functional teams to understand requirements, define project scopes, and deliver solutions effectively. Mentor team members, providing guidance on best practices in software development, AI methodologies, and cloud deployments. Ensure solutions meet quality standards, including thorough testing, debugging, performance tuning, and documentation. Continuously research emerging AI technologies and methodologies to incorporate best practices and innovation into our products. - Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, Mathematics, Statistics or related fields. 
At least 5 years of professional experience in AI/Machine Learning engineering. Strong programming skills in Python and Java. Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications. Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway. Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations. Experience with leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes. AWS certifications (e.g., AWS Solutions Architect, AWS Machine Learning Specialty). Familiarity with popular AI/ML frameworks and libraries such as Hugging Face Transformers, TensorFlow, PyTorch, LangChain, or similar. Experience in Agile development methodologies. Excellent communication skills, capable of conveying complex technical concepts clearly and effectively. Strong analytical and problem-solving capabilities, with the ability to navigate ambiguous technical challenges. Benefits Employee Benefits 1. Culture: a. Open Door Policy: Encourages open communication and accessibility to management. b. Open Office Floor Plan: Fosters a collaborative and interactive work environment. c. Flexible Working Hours: Allows employees to have flexibility in their work schedules. d. Employee Referral Bonus: Rewards employees for referring qualified candidates. e. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback. 2. Inclusivity and Diversity: a. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce. b. Mandatory POSH training: Promotes a safe and respectful work environment. 3. Health Insurance and Wellness Benefits: a. GMC and Term Insurance: Offers medical coverage and financial protection. b. Health Insurance: Provides coverage for medical expenses. c. Disability Insurance: Offers financial support in case of disability. 4. Child Care & Parental Leave Benefits: a. Company-sponsored family events: Creates opportunities for employees and their families to bond. b. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child. c. Family Medical Leave: Offers leave for employees to take care of family members' medical needs. 5. Perks and Time-Off Benefits: a. Company-sponsored outings: Organizes recreational activities for employees. b. Gratuity: Provides a monetary benefit as a token of appreciation. c. Provident Fund: Helps employees save for retirement. d. Generous PTO: Offers more than the industry standard for paid time off. e. Paid sick days: Allows employees to take paid time off when they are unwell. f. Paid holidays: Gives employees paid time off for designated holidays. g. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one. 6. Professional Development Benefits: a. L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development. b. Mentorship Program: Offers guidance and support from experienced professionals. c. Job Training: Provides training to enhance job-related skills. d. Professional Certification Reimbursements: Assists employees in obtaining professional certifications. e. Promote from Within: Encourages internal growth and advancement opportunities.
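To make the Lambda/API Gateway/SageMaker stack above concrete, here is a hedged sketch of a Lambda handler that proxies a chatbot question to a SageMaker inference endpoint. The endpoint name and payload shape are assumptions for illustration, not details of the company's actual system.

```python
"""Sketch of an AWS Lambda handler that fronts a SageMaker inference endpoint.

Illustrative only: the endpoint name is a placeholder and the payload shape
depends on how the model container was built.
"""
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "rag-chatbot-reranker"  # placeholder endpoint name


def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": question}),
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": prediction}),
    }
```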
Posted 23 hours ago
2.0 - 6.0 years
0 Lacs
guwahati, assam
On-site
You are an experienced Software Engineer specializing in Machine Learning with at least 2 years of relevant experience. In this role, you will be responsible for designing, developing, and optimizing machine learning solutions and data systems. Your proven track record in implementing ML models, building scalable systems, and collaborating with cross-functional teams will be essential in solving complex challenges using data-driven approaches. As a Software Engineer - Machine Learning, your primary responsibilities will include designing and implementing end-to-end machine learning solutions, building and optimizing scalable data pipelines, collaborating with data scientists and product teams, monitoring and optimizing deployed models, staying updated with the latest trends in machine learning, debugging complex issues related to ML systems, and documenting processes for knowledge sharing and clarity. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or related fields. Your technical skills should include a strong proficiency in Python and machine learning libraries such as TensorFlow, PyTorch, or scikit-learn, experience with data processing tools like Pandas, NumPy, and Spark, proficiency in SQL and database systems, hands-on experience with cloud platforms (AWS, GCP, Azure), familiarity with CI/CD pipelines and Git, and experience with model deployment frameworks like Flask, FastAPI, or Docker. Additionally, you should possess strong analytical skills, leadership abilities to guide junior team members, and a proactive approach to learning and collaboration. Preferred qualifications include experience with MLOps tools like MLflow, Kubeflow, or SageMaker, knowledge of big data technologies such as Hadoop, Spark, or Kafka, familiarity with advanced ML techniques like NLP, computer vision, or reinforcement learning, and experience in designing and managing streaming data workflows. Key Performance Indicators for this role include successfully delivering optimized and scalable ML solutions within deadlines, maintaining high model performance in production environments, and ensuring seamless integration of ML models with business applications. Join us in this exciting opportunity to drive innovation and make a significant impact in the field of Machine Learning.
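Since the posting names Flask/FastAPI and Docker for model deployment, here is a minimal FastAPI serving sketch. The model artifact path and feature schema are placeholders, not anything from the role itself.

```python
"""Minimal FastAPI service that serves a pickled scikit-learn model (sketch).

Assumes a model was saved earlier with joblib; the file name and feature
schema are placeholders.
"""
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # placeholder artifact path


class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model


@app.post("/predict")
def predict(features: Features):
    # scikit-learn expects a 2-D array: one row per sample.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run it locally with `uvicorn main:app --reload` (assuming the file is saved as main.py), then containerize it with Docker to fit the CI/CD and cloud workflows described above.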
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms as a part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge. Job Description Core Responsibilities About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally. 
What You’ll Do Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference Engineer features to enrich alerts using service relationships, business context, change history, and topological data Apply NLP and ML techniques to classify and structure logs and unstructured alert messages Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals Conduct A/B testing, offline validation, and live performance monitoring of ML models Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly Participate in on-call rotations and provide operational support as needed Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field 3+ years of experience building and deploying ML solutions in production environments 2+ years working with AIOps, observability, or real-time operations data Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow) Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs) Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection Experience managing structured + unstructured data, and building features from logs, alerts, metrics, and traces Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases Ability to collaborate effectively with SREs, platform teams, and participate in Agile/DevOps workflows Clear written and verbal communication skills to present findings to technical and non-technical stakeholders Comfortable working across Git, Confluence, JIRA, & collaborative agile environments Nice To Have Experience building or contributing to the AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC etc.) 
Experience working in streaming media, OTT platforms, or large-scale consumer services Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, & event-based alerting tools Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation) Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME) Experience with containerized model deployment using Docker or Kubernetes Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation Experience designing APIs in Python or Go to expose models as services Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts Certifications in cloud architecture, ML engineering, or data science specialization Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 2-5 Years
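As a flavour of the anomaly-detection work described in this posting, here is a small hedged sketch using scikit-learn's IsolationForest on synthetic alert-volume and latency data. The features, contamination rate, and injected incidents are illustrative assumptions, not anything from Comcast's pipeline.

```python
"""Anomaly-detection sketch in the spirit of the role above (synthetic data).

IsolationForest flags unusual alert-volume/latency points; the features and
contamination rate are placeholders, not values from the posting.
"""
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-minute observations: [alert_count, p95_latency_ms]
normal = np.column_stack([rng.poisson(20, 500), rng.normal(180, 15, 500)])
spikes = np.array([[95, 450], [80, 390], [120, 510]])  # injected incidents
X = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)        # -1 = anomaly, 1 = normal
scores = detector.score_samples(X)  # lower = more anomalous

for idx in np.where(labels == -1)[0]:
    print(f"minute={idx} features={X[idx]} score={scores[idx]:.3f}")
```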
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
kolkata, west bengal
On-site
As a Senior Machine Learning Engineer with over 10 years of experience, you will play a crucial role in designing, building, and deploying scalable machine learning systems in production. In this role, you will collaborate closely with data scientists to operationalize models, take ownership of ML pipelines from end to end, and enhance the reliability, automation, and performance of our ML infrastructure. Your primary responsibilities will include designing and constructing robust ML pipelines and services for training, validation, and model deployment. You will work in collaboration with various stakeholders such as data scientists, solution architects, and DevOps engineers to ensure alignment with project goals and requirements. Additionally, you will be responsible for ensuring cloud integration compatibility with AWS and Azure, building reusable infrastructure components following best practices in DevOps and MLOps, and adhering to security standards and regulatory compliance. To excel in this role, you should possess strong programming skills in Python, have deep experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and be proficient in MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience in deploying models using Docker and Kubernetes, familiarity with cloud platforms and ML services, and proficiency in data engineering tools are essential for success in this position. Additionally, knowledge of CI/CD, version control, and infrastructure as code, along with experience in monitoring/logging tools, will be advantageous. Good-to-have skills include experience with feature stores and experiment tracking platforms, knowledge of edge/embedded ML, model quantization, and optimization, as well as familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases and experience in leading cross-functional initiatives or mentoring junior engineers will also be beneficial. Joining Ericsson will provide you with an exceptional opportunity to leverage your skills and creativity to address some of the world's toughest challenges. You will be part of a diverse team of innovators who are committed to pushing the boundaries of innovation and crafting groundbreaking solutions. As a member of this team, you will be challenged to think beyond conventional limits and contribute to shaping the future of technology.
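To illustrate the experiment-tracking side of the MLOps tooling listed above, here is a minimal MLflow sketch that logs parameters, a metric, and the trained model. The experiment name, dataset, and hyperparameters are placeholders, not anything specified by the role.

```python
"""MLflow experiment-tracking sketch (illustrative; names are placeholders).

Logs parameters, metrics, and the trained model so runs stay reproducible
and comparable, which is the kind of pipeline ownership the role describes.
"""
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("churn-baseline")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # stored as a run artifact
```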
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
jaipur, rajasthan
On-site
As an AI/ML Engineer, you will be responsible for utilizing your expertise in the field of Artificial Intelligence and Machine Learning to develop innovative solutions. You should hold a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field. With a minimum of 6 years of experience in AI/ML, you are expected to demonstrate proficiency in Python and various ML libraries such as scikit-learn, XGBoost, pandas, NumPy, matplotlib, and seaborn. In this role, you will need a strong understanding of machine learning algorithms and deep learning architectures, including CNNs, RNNs, and Transformers. Hands-on experience with TensorFlow, PyTorch, or Keras is essential. You should also have expertise in data preprocessing, feature selection, exploratory data analysis (EDA), and model interpretability. Additionally, familiarity with API development and deploying models using frameworks like Flask, FastAPI, or similar tools is required. Experience with MLOps tools such as MLflow, Kubeflow, DVC, and Airflow will be beneficial. Knowledge of cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML is preferred. Proficiency in version control using Git, CI/CD processes, and containerization with Docker is essential for this role. Bonus skills that would be advantageous include familiarity with NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers), Computer Vision experience using OpenCV or YOLO/Detectron, and knowledge of Reinforcement Learning or Generative AI (GANs, LLMs). Experience with vector databases such as Pinecone or Weaviate, as well as LangChain for AI agent building, is a plus. Familiarity with data labeling platforms and annotation workflows will also be beneficial. In addition to technical skills, you should possess soft skills such as an analytical mindset, strong problem-solving abilities, effective communication, and collaboration skills. The ability to work independently in a fast-paced, agile environment is crucial. A passion for AI/ML and a proactive approach to staying updated with the latest developments in the field are highly desirable for this role.
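As a concrete instance of the scikit-learn/XGBoost and model-interpretability expectations above, here is a short hedged sketch that trains a classifier on a stand-in dataset and ranks features by importance. The dataset and hyperparameters are illustrative only, not tuned or role-specific values.

```python
"""XGBoost training sketch with a simple interpretability pass (stand-in data).

Uses the scikit-learn wine dataset; hyperparameters are placeholders.
"""
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

data = load_wine()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="mlogloss"
)
model.fit(X_tr, y_tr)

print("test accuracy:", model.score(X_te, y_te))

# Rank features by importance as a first interpretability pass.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```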
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be working as an AI Platform Engineer in Bangalore as part of the GenAI COE Team. Your key responsibilities will involve developing and promoting scalable AI platforms for customer-facing applications. It will be essential to evangelize the platform with customers and internal stakeholders, ensuring scalability, reliability, and performance to meet business needs. Your role will also entail designing machine learning pipelines for experiment management, model management, feature management, and model retraining. Implementing A/B testing of models and designing APIs for model inferencing at scale will be crucial. You should have proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. As an AI Platform Engineer, you will serve as a subject matter expert in LLM serving paradigms, with in-depth knowledge of GPU architectures. Expertise in distributed training and serving of large language models, along with proficiency in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM, will be required. Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results will be part of your responsibilities. Reducing training and resource requirements for fine-tuning LLM and LVM models will also be essential. Having extensive knowledge of different LLM models and providing insights on their applicability based on use cases is crucial. You should have proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. Your proficiency in DevOps and LLMOps practices, along with knowledge of Kubernetes, Docker, and container orchestration, will be necessary. A deep understanding of LLM orchestration frameworks such as Flowise, Langflow, and LangGraph is also required. In terms of skills, you should be familiar with LLM models like Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, and Llama, as well as LLMOps tools like MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, and Azure AI. Additionally, knowledge of databases/data warehouse systems like DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, and Google BigQuery, as well as cloud platforms such as AWS, Azure, and GCP, is essential. Proficiency in DevOps tools like Kubernetes, Docker, FluentD, Kibana, Grafana, and Prometheus, along with cloud certifications like AWS Professional Solution Architect and Azure Solutions Architect Expert, will be beneficial. Strong programming skills in Python, SQL, and JavaScript are required for this full-time role, with an in-person work location.
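Because the role calls out A/B testing of models and inference APIs at scale, here is a small framework-free sketch of deterministic traffic splitting between two model variants. The variant names and the 90/10 split are assumptions for illustration.

```python
"""Deterministic A/B traffic-splitting sketch for model inference (illustrative).

The routing rule hashes a stable user ID so each user always hits the same
model variant; endpoints and the 90/10 split are placeholder assumptions.
"""
import hashlib

VARIANTS = [
    {"name": "llm-v1", "weight": 90},  # current production model (placeholder)
    {"name": "llm-v2", "weight": 10},  # candidate model under test (placeholder)
]


def pick_variant(user_id: str) -> str:
    """Map a user deterministically onto a 0-99 bucket, then to a variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    threshold = 0
    for variant in VARIANTS:
        threshold += variant["weight"]
        if bucket < threshold:
            return variant["name"]
    return VARIANTS[-1]["name"]


if __name__ == "__main__":
    for uid in ["user-17", "user-42", "user-99"]:
        print(uid, "->", pick_variant(uid))
    # Downstream, the chosen name would select which serving endpoint to call,
    # and per-variant latency/quality metrics would feed the A/B analysis.
```

Hashing a stable user ID keeps each user pinned to one variant across requests, which keeps per-variant quality and latency metrics clean for the comparison the posting describes.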
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for developing and deploying machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms. Translates application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Develops novel ways to use machine learning to solve problems and discover new products. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as resource for colleagues with less experience. Job Description About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally. 
What You’ll Do Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference Engineer features to enrich alerts using service relationships, business context, change history, and topological data Apply NLP and ML techniques to classify and structure logs and unstructured alert messages Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals Conduct A/B testing, offline validation, and live performance monitoring of ML models Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly Participate in on-call rotations and provide operational support as needed Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field 5+ years of experience building and deploying ML solutions in production environments 2+ years working with AIOps, observability, or real-time operations data Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow) Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs) Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection Experience managing structured + unstructured data, and building features from logs, alerts, metrics, and traces Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases Ability to collaborate effectively with SREs, platform teams, and participate in Agile/DevOps workflows Clear written and verbal communication skills to present findings to technical and non-technical stakeholders Comfortable working across Git, Confluence, JIRA, & collaborative agile environments Nice To Have Experience building or contributing to the AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC etc.)
Experience working in streaming media, OTT platforms, or large-scale consumer services Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, & event-based alerting tools Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation) Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME) Experience with containerized model deployment using Docker or Kubernetes Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation Experience designing APIs in Python or Go to expose models as services Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts. Certifications in cloud architecture, ML engineering, or data science specializations. Documents work such as confluence pages, white papers, presentations, test results, technical manuals, formal recommendations and reports. Contributes to the company by creating patents and Application Programming Interfaces (APIs). Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years
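To illustrate the "classify and structure logs and unstructured alert messages" responsibility in this listing, here is a toy TF-IDF plus logistic-regression baseline. The alert strings and labels are invented for the example and do not reflect any real log format.

```python
"""Alert-text classification sketch (toy data, not production log formats)."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

alerts = [
    "CPU utilization above 95% on edge cache node",
    "Packet loss detected between origin and CDN",
    "Disk usage critical on transcoder host",
    "High rebuffering ratio reported for live channel",
    "Memory pressure on playback API pods",
    "Segment download failures spiking for HLS stream",
]
labels = ["infra", "network", "infra", "video-qoe", "infra", "video-qoe"]

# TF-IDF features over unigrams/bigrams feed a simple linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(alerts, labels)

print(clf.predict(["Rebuffering spike after latest encoder deploy"]))
```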
Posted 1 day ago
SageMaker is one of the fastest-growing skill areas in India's tech job market, with many companies looking to hire professionals with expertise in it. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.
If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:
The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
For SageMaker professionals, a typical career progression may look like this:
In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:
Here are 25 interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:
What is a SageMaker notebook instance?
Medium:
What is the difference between SageMaker Ground Truth and SageMaker Processing?
Advanced:
As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!