6.0 years
0 Lacs
India
Remote
What You'll Do
Avalara is an AI-first company. We expect every engineer, manager, and leader to actively leverage AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows, decision-making, and products, and success at Avalara requires embracing AI as an essential capability, not an optional tool.
We are looking for accomplished Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, development, and delivery of LLM platform features. You will build core agent infrastructure (A2A orchestration and MCP-driven tool discovery) so teams can launch secure, scalable agent workflows. You will report to the Senior Manager, ML Engineering.
What Your Responsibilities Will Be
We are looking for engineers who can think quickly and have a strong implementation background. Your responsibilities will include:
Build on the foundational framework supporting large language model applications at Avalara, working with LLMs such as GPT, Claude, Llama, and other Bedrock-hosted models.
Apply best practices in software development, including continuous integration/continuous deployment (CI/CD) with appropriate functional and unit testing in place.
Drive innovation by researching and applying the latest technologies and methodologies in machine learning and software development.
Write, review, and maintain high-quality code that meets industry standards, contributing to the project's overall technical quality.
Lead code review sessions, ensuring good code quality and documentation.
Mentor junior engineers, promoting a culture of collaboration and engineering excellence.
Develop and debug software proficiently, preferably in Python; familiarity with additional programming languages is valued and encouraged.
What You'll Need To Be Successful
6+ years of experience building machine learning models and deploying them in production environments as part of creating solutions to complex customer problems.
Proficiency working in cloud computing environments (AWS, Azure, GCP), with machine learning frameworks, and with software development best practices.
Demonstrated experience staying current with breakthroughs in AI/ML, with a focus on GenAI.
Experience with design patterns and data structures.
Technologies You Will Work With
Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.
We are the AI & ML enablement group at Avalara. We empower Avalara's Product and Engineering teams with the latest AI & ML capabilities, driving easy-to-use, automated compliance solutions that position Avalara as the industry AI technology leader and the go-to choice for all compliance needs. This is a remote position.
How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and executive sponsorship.
What You Need To Know About Avalara
We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion-dollar business - and we're not slowing down until we've achieved our mission: to be part of every transaction in the world.
We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, one that empowers our people to win. We've been different from day one. Join us, and your career will be too.
We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company - we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
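For context on the kind of work the Avalara listing describes, here is a minimal sketch of calling a Bedrock-hosted LLM from Python via boto3's Converse API. The model ID, region, and prompt are illustrative assumptions, not details from the posting.

```python
import boto3

# Hypothetical model ID; any Bedrock chat model that supports Converse works similarly.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(prompt: str) -> str:
    """Send a single-turn prompt to a Bedrock model via the Converse API."""
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    # The Converse API returns the assistant message under output.message.content.
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask("Summarize what a tax compliance API does in two sentences."))
```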
Posted 1 day ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Ahmedabad)
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Attri)
What do you need for this opportunity?
Must-have skills required: Azure, Docker, TensorFlow, Python, Shell Scripting
Attri is Looking for:
About Attri
Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry's first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We're now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we'd love to connect.
What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.
Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.
Benefits: Competitive Salary 💸 Support for continual learning (free books and online courses) 📚 Leveling Up Opportunities 🌱 Diverse team environment 🌍 How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
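The automation and cost-optimization duties in the listing above often reduce to small Python scripts against the cloud SDKs. Below is a hedged sketch, assuming boto3 credentials are already configured, that flags running EC2 instances missing a cost-allocation tag; the tag key and region are made-up examples.

```python
import boto3

# Hypothetical tag key used for cost allocation; adjust to your own tagging policy.
REQUIRED_TAG = "cost-center"

ec2 = boto3.client("ec2", region_name="us-east-1")

def untagged_instances(required_tag: str = REQUIRED_TAG) -> list[str]:
    """Return IDs of running EC2 instances missing the required cost-allocation tag."""
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing tag:", untagged_instances())
```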
Posted 1 day ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Senior DevOps Engineer
Experience: 4 - 7 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Onsite (Ahmedabad)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients)
Must-have skills: Azure OR Docker, TensorFlow, Python OR Shell Scripting
Attri (one of Uplers' clients) is looking for a Senior DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.
What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.
Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.
How to apply for this opportunity - an easy 3-step process:
1. Click on Apply and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this one on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
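The monitoring responsibilities in this listing typically start with instrumenting services so Prometheus can scrape them. A minimal sketch with the prometheus_client library follows; the metric names and the simulated workload are illustrative, not anything from the posting.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names are illustrative; align them with your Grafana dashboards.
REQUESTS = Counter("inference_requests_total", "Total inference requests served")
LATENCY = Histogram("inference_latency_seconds", "Inference request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    """Stand-in for a model inference call; the decorator records its latency."""
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # simulate work

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```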
Posted 1 day ago
7.0 - 10.0 years
0 Lacs
Chandigarh
On-site
bebo Technologies is a leading complete software solution provider. bebo stands for 'be extension be offshore'. We are a business partner of QASource, Inc., USA (www.QASource.com). We offer outstanding services in the areas of software development, sustenance engineering, quality assurance, and product support. bebo is dedicated to providing high-caliber offshore software services and solutions. Our goal is to 'Deliver in time - every time'. For more details visit our website: www.bebotechnologies.com. Take a 360-degree tour of our bebo premises via this link: https://www.youtube.com/watch?v=S1Bgm07dPmM
Key Required Skills:
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
7–10 years of industry experience, with at least 5 years in machine learning roles.
Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn.
Experience with distributed training, model optimization (quantization, pruning), and inference at scale.
Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML.
Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam.
Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay).
Excellent problem-solving, communication, and documentation skills.
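The listing mentions model optimization such as quantization. As a rough illustration of the idea, the sketch below applies PyTorch dynamic quantization to a toy model; the architecture and layer sizes are placeholders, not anything from the posting.

```python
import torch
from torch import nn

# A toy model standing in for a trained network; in practice you would load real weights.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Dynamic quantization converts Linear weights to int8, shrinking the model and
# typically speeding up CPU inference with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

example = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(example).shape)  # torch.Size([1, 10])
```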
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Chandigarh
On-site
3–5 years of experience in software engineering and ML development. Strong proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, or PyTorch. Experience building and evaluating models, along with data preprocessing and feature engineering. Proficiency in REST APIs, Docker, Git, and CI/CD tools. Solid foundation in software engineering principles, including data structures, algorithms, and design patterns. Hands-on experience with MLOps platforms (e.g., MLflow, TFX, Airflow, Kubeflow). Exposure to NLP, large language models (LLMs), or computer vision projects. Experience with cloud platforms (AWS, GCP, Azure) and managed ML services. Contributions to open-source ML libraries or participation in ML competitions (e.g., Kaggle, DrivenData) is a plus. Job Type: Full-time Benefits: Internet reimbursement Paid time off Provident Fund Application Question(s): What is your expected CTC? What is your notice period? Experience: Machine learning: 3 years (Preferred) Work Location: In person
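Several of the skills listed above (preprocessing, model building, evaluation) come together in a scikit-learn Pipeline. A small, self-contained sketch on synthetic data; the estimator and hyperparameters are arbitrary illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data stands in for a real feature matrix built during preprocessing.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Bundling scaling and the estimator in one Pipeline keeps preprocessing
# consistent between training and inference.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(n_estimators=200, random_state=42)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC: {scores.mean():.3f} (+/- {scores.std():.3f})")
```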
Posted 1 day ago
3.0 - 6.0 years
5 - 15 Lacs
Hyderābād
Remote
Job Title: Data Scientist – Python Experience: 3 to 6 Years Location: Remote Job Type: Full-Time Education: B.E./B.Tech or M.Tech in Computer Science, Data Science, Statistics, or a related field Job Summary We are seeking a talented and results-driven Data Scientist with 3–6 years of experience in Python-based data science workflows. This is a remote, full-time opportunity for professionals who are passionate about solving real-world problems using data and statistical modeling. The ideal candidate should be highly proficient in Python and have hands-on experience with data exploration, machine learning, model deployment, and working with large datasets. Key Responsibilities Analyze large volumes of structured and unstructured data to generate actionable insights Design, develop, and deploy machine learning models using Python and related libraries Collaborate with cross-functional teams including product, engineering, and business to define data-driven solutions Develop data pipelines and ensure data quality, consistency, and reliability Create and maintain documentation for methodologies, code, and processes Communicate findings and model results clearly to technical and non-technical stakeholders Continuously research and implement new tools, techniques, and best practices in data science Required Skills & Qualifications 3–6 years of experience in a data science role using Python Proficiency in Python data science libraries (Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn) Strong statistical analysis and modeling skills Experience with machine learning algorithms (classification, regression, clustering, etc.) Familiarity with model evaluation, tuning, and deployment techniques Hands-on experience with SQL and working with large databases Exposure to cloud platforms (AWS, GCP, or Azure) is a plus Experience with version control (Git), Jupyter notebooks, and collaborative data tools Preferred Qualifications Advanced degree (Master’s preferred) in Computer Science, Data Science, Statistics, or a related discipline Experience with deep learning frameworks like TensorFlow or PyTorch is a plus Familiarity with MLOps tools such as MLflow, Airflow, or Docker Experience in remote team collaboration and agile project environments What We Offer 100% remote work with flexible hours Competitive compensation package Access to cutting-edge tools and real-world projects A collaborative and inclusive work culture Opportunities for continuous learning and professional development Job Type: Full-time Pay: ₹500,000.00 - ₹1,500,000.00 per year Schedule: Day shift Monday to Friday Morning shift Work Location: In person
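Model evaluation and tuning, mentioned in the requirements above, is commonly done with cross-validated grid search. A compact sketch on a bundled scikit-learn dataset follows; the parameter grid and scoring metric are illustrative choices, not requirements from the posting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=5000))])

# Small, illustrative grid; real searches would cover more hyperparameters.
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5, scoring="f1")
grid.fit(X_train, y_train)

print("Best C:", grid.best_params_["clf__C"])
print(classification_report(y_test, grid.predict(X_test)))
```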
Posted 1 day ago
10.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing - whatever your ambitions.
Senior Software Engineer - MLOps
We are looking for a highly skilled Senior Software Engineer – MLOps with deep expertise in building and managing production-grade ML pipelines in AWS and Azure cloud environments. This role requires a strong foundation in software engineering, DevOps principles, and ML model lifecycle automation to enable reliable and scalable machine learning operations across the organization.
Key Responsibilities include:
Design and build robust MLOps pipelines for model training, validation, deployment, and monitoring
Automate workflows using CI/CD tools such as GitHub Actions, Azure DevOps, Jenkins, or Argo Workflows
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage)
Design secure and cost-efficient ML architecture leveraging cloud-native services
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation
Implement cost optimization and performance tuning for cloud workloads
Package ML models using Docker, and orchestrate deployments with Kubernetes on EKS/AKS
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation
Integrate observability tools for model performance, drift detection, and lineage tracking (e.g., Fiddler, MLflow, Prometheus, Grafana, Azure Monitor, CloudWatch)
Ensure model reproducibility, versioning, and compliance with audit and regulatory requirements
Collaborate with data scientists, software engineers, DevOps, and cloud architects to operationalize AI/ML use cases
Mentor junior MLOps engineers and evangelize MLOps best practices across teams
Required Qualifications:
Bachelor's/Master's in Computer Science, Engineering, or related discipline
10 years in DevOps, with 2+ years in MLOps
Proficient with MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git
Experience with feature stores (e.g., Feast), model registries, and experiment tracking
Proficiency in DevOps & MLOps automation: CloudFormation, Terraform, Bicep
Requisition ID: 610750
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.
So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
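Experiment tracking with MLflow, named in the qualifications above, usually looks roughly like the sketch below. The experiment name, parameters, and dataset are placeholders; a real setup would point MLFLOW_TRACKING_URI at a shared tracking server.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Experiment name is illustrative only.
mlflow.set_experiment("demo-regression")

with mlflow.start_run():
    params = {"n_estimators": 300, "learning_rate": 0.05}
    model = GradientBoostingRegressor(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))
    # Records the fitted model as a run artifact for later versioned deployment.
    mlflow.sklearn.log_model(model, artifact_path="model")
```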
Posted 1 day ago
7.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing - whatever your ambitions.
Software Engineer - MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.
Key Responsibilities include:
Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment
Support the automation of model lifecycle tasks including versioning, packaging, and monitoring
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage)
Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation
Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins
Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations)
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation
Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks
Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices
Required Qualifications:
Bachelor's/Master's in Computer Science, Engineering, or related discipline
7 years in DevOps, with 2+ years in MLOps
Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git
Proficient in Python and familiar with Bash scripting
Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI
Requisition ID: 610751
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.
So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
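The listing calls out pytest-based testing for ML models. One common pattern is a small quality-gate test suite like the hedged sketch below; the accuracy threshold and dataset are illustrative stand-ins, not anything from the posting.

```python
# test_model_quality.py -- run with `pytest`
import pytest
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@pytest.fixture(scope="module")
def data():
    return load_iris(return_X_y=True)

def test_feature_shape(data):
    X, y = data
    # Guard against schema drift: this toy serving contract assumes 4 features.
    assert X.shape[1] == 4

def test_accuracy_above_threshold(data):
    X, y = data
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    # 0.9 is an illustrative release gate; in practice set it from a baseline model.
    assert scores.mean() > 0.9
```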
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
As a Senior Data and Applied Scientist, you will work with Pattern's Data Science team to curate and analyze data and apply machine learning models and statistical techniques to optimize advertising spend on ecommerce platforms.
What You'll Do
Design, build, and maintain machine learning and statistical models to optimize advertising campaigns, improving search visibility and conversion rates on ecommerce platforms.
Continuously optimize the quality of our machine learning models, especially for key metrics like search ranking, keyword bidding, CTR, and conversion rate estimation.
Conduct research to integrate new data sources, innovate in feature engineering, fine-tune algorithms, and enhance data pipelines for robust model performance.
Analyze large datasets to extract actionable insights that guide advertising decisions.
Work closely with teams across different regions (US and India), ensuring seamless collaboration and knowledge sharing.
Dedicate 20% of time to MLOps for efficient, reliable model deployment and operations.
What We're Looking For
Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field.
3-6 years of industry experience in building and deploying machine learning solutions.
Strong data manipulation and programming skills in Python and SQL, and hands-on experience with libraries such as Pandas, NumPy, Scikit-Learn, and XGBoost.
Strong problem-solving skills and an ability to analyze complex data.
In-depth expertise in a range of machine learning and statistical techniques such as linear and tree-based models, along with an understanding of model evaluation metrics.
Experience with Git, AWS, Docker, and MLflow is advantageous.
Additional Pluses
Portfolio: An active Kaggle or GitHub profile showcasing relevant projects.
Domain Knowledge: Familiarity with advertising and ecommerce concepts, which would help in tailoring models to business needs.
Pattern is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
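Estimating CTR or conversion probability, as described above, is often framed as imbalanced binary classification with gradient-boosted trees. A minimal sketch with XGBoost on synthetic data follows; the feature semantics and hyperparameters are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for ad-level features (bid, position, query match, etc.).
X, y = make_classification(n_samples=5000, n_features=15, weights=[0.9, 0.1], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

model = XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X_train, y_train)

# predict_proba yields click/conversion probabilities that bidding logic can consume.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```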
Posted 1 day ago
3.0 years
4 - 7 Lacs
Pitampura
On-site
Job Title: Python Developer (Deep Learning / AI)
Job Summary:
We are seeking a highly skilled and experienced Python Developer with a strong background in Deep Learning to join our dynamic AI/ML team. The ideal candidate will be instrumental in designing, developing, and deploying advanced deep learning models, with a particular focus on face recognition systems and other computer vision applications. You will work on the entire ML lifecycle, from data acquisition and model training to deployment and optimization, contributing directly to our core AI initiatives.
Key Responsibilities:
● Design, develop, and implement robust and scalable deep learning models for face recognition, object detection, and other computer vision tasks.
● Develop and maintain high-quality Python code for data preprocessing, model training, evaluation, and deployment.
● Collaborate with data scientists, AI researchers, and software engineers to integrate deep learning solutions into existing and new products.
● Perform extensive data analysis, feature engineering, and data augmentation to prepare datasets for deep learning models.
● Evaluate, optimize, and fine-tune deep learning models for performance, accuracy, and efficiency in production environments.
● Stay up-to-date with the latest advancements in deep learning, computer vision, and AI research, and apply relevant techniques to projects.
● Implement and adhere to best practices for MLOps, including version control, continuous integration/deployment (CI/CD) for ML models, and model monitoring.
● Contribute to the entire machine learning lifecycle, from problem definition and data exploration to model deployment and maintenance.
● Ensure the ethical and responsible development of AI systems, with a focus on bias detection and mitigation, and data privacy (e.g., GDPR compliance).
Qualifications:
Education:
● Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Electrical Engineering, or a related quantitative field. A Ph.D. is a plus.
Experience:
● 3+ years of professional experience as a Python Developer, with a significant focus on deep learning projects.
● Proven experience in developing and deploying computer vision applications, specifically face recognition systems.
Technical Skills:
● Expert-level proficiency in Python and its ecosystem for AI/ML (NumPy, Pandas, Matplotlib, Seaborn).
● Strong expertise in deep learning frameworks: TensorFlow and/or PyTorch (including Keras).
● In-depth knowledge of Convolutional Neural Networks (CNNs) and other relevant neural network architectures (e.g., RNNs, GANs, Transformers).
● Solid understanding of computer vision fundamentals and practical experience with libraries like OpenCV.
● Experience with face detection, face alignment, feature embedding (e.g., FaceNet), and face matching algorithms.
● Familiarity with liveness detection techniques is highly desirable.
● Proficiency in data manipulation, preprocessing, and feature engineering for large datasets.
● Strong understanding of core mathematical concepts: linear algebra, calculus, probability, and statistics.
● Experience with cloud platforms (AWS, Azure, GCP) for training and deploying ML models.
● Familiarity with MLOps tools and practices (e.g., Docker, MLflow, Kubernetes).
● Experience with version control systems (Git).
Soft Skills:
● Excellent problem-solving and analytical abilities.
● Strong communication and collaboration skills, with the ability to explain complex technical concepts to diverse audiences.
● High degree of curiosity and a passion for continuous learning in the rapidly evolving AI field.
● Proactive, adaptable, and able to work effectively in a fast-paced, team-oriented environment.
● Commitment to ethical AI development and data privacy.
Bonus Points:
● Experience with other programming languages like C++ or Java for high-performance computing.
● Contributions to open-source AI/ML projects or relevant publications.
● Experience with distributed computing for large-scale model training.
● Knowledge of specialized hardware for AI (GPUs, TPUs).
Interested candidates can forward their resume to hr@axepertexhibits.com or call 9211659314.
Job Type: Full-time
Pay: ₹35,000.00 - ₹60,000.00 per month
Schedule: Day shift
Supplemental Pay: Overtime pay
Work Location: In person
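Face matching of the kind this role describes usually compares embedding vectors from a FaceNet-style network. The sketch below shows only the matching step on toy embeddings; the 128-dimensional vectors and the 0.6 threshold are assumptions for illustration and would need calibration on a real verification set.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """Return the best-matching identity, or None if below the decision threshold.

    `probe` and the gallery values are assumed to be embeddings produced by a
    FaceNet-style network; the threshold is an illustrative placeholder.
    """
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy 128-d embeddings standing in for real model outputs.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.05, size=128)
print(match_face(probe, gallery))
```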
Posted 1 day ago
4.0 years
6 - 11 Lacs
Vadodara
On-site
What You'll Do:
Build and deploy models for object detection, OCR, image classification, face recognition, and more.
Develop real-time CV pipelines using YOLOv5/v8, Faster R-CNN, DeepSORT, etc.
Integrate Generative AI and LLMs into CV workflows and end-user applications.
Optimize models for edge devices (Jetson Nano/Xavier, Coral, OpenVINO).
Work with annotated image/video datasets, apply augmentation and pre/post-processing.
Lead cross-functional AI initiatives and guide team-level technical decisions.
Must-Haves:
4+ years in AI/ML with practical deployments.
Proficiency in Python, PyTorch/TensorFlow, and OpenCV.
Hands-on with OCR tools: Tesseract, PaddleOCR, EasyOCR, etc.
Solid understanding of image processing: contouring, segmentation, thresholding.
Experience deploying models on edge: NVIDIA Jetson, TensorRT, etc.
Exposure to LLMs and building end-to-end AI pipelines.
Nice-to-Haves:
Experience with AWS Textract, Azure OCR, or Google Vision API.
Built tools for NER, invoice parsing, or document classification.
Familiarity with DeepStream, Streamlit dashboards, and MLflow/ONNX.
Academic research/published work in AI/CV fields.
Knowledge of CUDA, sensors, and camera hardware integration.
Why Join Us?
Work on cutting-edge AI + CV projects with real-world impact.
Be part of a team that values innovation, ownership, and experimentation.
Opportunity to lead GenAI initiatives across domains.
Job Type: Full-time
Pay: ₹600,000.00 - ₹1,100,000.00 per year
Benefits: Health insurance, Paid sick time, Paid time off, Provident Fund
Work Location: In person
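A typical OCR step from the responsibilities above pairs OpenCV preprocessing with Tesseract. A hedged sketch, assuming the Tesseract binary and the pytesseract package are installed; the file name is a placeholder.

```python
import cv2
import pytesseract

def extract_text(image_path: str) -> str:
    """Binarize an image with OpenCV, then run Tesseract OCR on it."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding usually improves OCR accuracy on scanned documents.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

if __name__ == "__main__":
    # "invoice_sample.png" is a placeholder path for your own document scan.
    print(extract_text("invoice_sample.png"))
```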
Posted 1 day ago
0 years
0 Lacs
India
Remote
Hi, we have a 6-month contract requirement (with the possibility of further extension) for an AI Developer.
OFFSHORE REMOTE - Europe time zone, Central European Time (CET). Working hours will be 5:00/5:30/6:00 am to 2:00/2:30/3:00 pm IST.
Position: AI Developer
Duration: 6 months (with the possibility of further extension)
Annual Salary: DOE
Work Location: REMOTE
Required Experience: 5 yrs
Primary Skills: Knowledge of Python, AWS Lambda, AWS Bedrock
Detailed JD as follows:
1. Prompt Engineering Techniques – Advanced
Design effective prompts for large language models to optimize accuracy, reduce hallucinations, and guide reasoning using few-shot, zero-shot, and chain-of-thought approaches. Work with foundation models via APIs (e.g., OpenAI, AWS Bedrock) to build real-world applications across domains.
2. Generative Modeling Techniques – Intermediate
Implement and experiment with generative models like GPT, BERT, or other Transformer-based architectures. Apply pre-trained models in NLP, content generation, or summarization tasks using platforms like Hugging Face or AWS Bedrock.
3. Agentic AI and Reinforcement Learning – Intermediate
Understand and integrate basic agentic frameworks that combine reasoning, memory, and planning. Work with simulation environments (e.g., OpenAI Gym) and apply standard reinforcement learning algorithms for prototype development.
4. Building Responsible and Ethical AI – Intermediate
Implement fairness-aware model evaluation and explainability practices using tools like SHAP or LIME. Follow best practices for privacy, bias mitigation, and governance in AI systems.
5. Cloud Deployment and MLOps – Intermediate
Deploy AI models using AWS Lambda, AWS Bedrock, and SageMaker for serving and scalability. Participate in building MLOps pipelines and model versioning using MLflow or similar tools.
6. Machine Learning Frameworks (PyTorch/TensorFlow) – Beginner
Train and fine-tune small-scale models using PyTorch or TensorFlow under guidance or for prototyping. Leverage existing models via transfer learning for quick deployment.
7. Data Preprocessing and Feature Engineering – Beginner
Clean and transform raw data for modeling tasks. Perform exploratory data analysis (EDA) and develop basic feature extraction pipelines.
8. Containerization and CI/CD – Beginner
Learn and assist in containerizing ML applications using Docker. Support implementation of CI/CD pipelines and Git-based workflow integrations.
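Items 1 and 5 of the JD above (prompt engineering plus Lambda/Bedrock deployment) often meet in a small handler like the sketch below; the few-shot labels, model ID, and event shape are illustrative assumptions, not details from the posting.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Few-shot prompt: a couple of worked examples steer the model toward the
# desired output format (labels here are illustrative).
FEW_SHOT = """Classify the support ticket as BILLING, TECHNICAL, or OTHER.

Ticket: "I was charged twice this month." -> BILLING
Ticket: "The API returns a 500 error on every call." -> TECHNICAL
Ticket: "{ticket}" ->"""

def handler(event, context):
    """AWS Lambda entry point; expects {"ticket": "..."} in the event payload."""
    prompt = FEW_SHOT.format(ticket=event["ticket"])
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 10, "temperature": 0.0},
    )
    label = response["output"]["message"]["content"][0]["text"].strip()
    return {"statusCode": 200, "body": json.dumps({"label": label})}
```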
Posted 1 day ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms.
Key Responsibilities - How You Will Contribute And What You Will Learn
Design end-to-end AI architecture tailored to telecom services business functions (e.g., service assurance, orchestration, and fulfilment).
Define data strategy and AI workflows, including the inventory model, ETL, model training, deployment, and monitoring.
Evaluate and select AI platforms, tools, and frameworks suited for telecom-scale workloads for development and testing of inventory services solutions.
Work closely with telecom network experts and architects to align AI initiatives with business goals.
Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments.
Mentor AI developers.
Key Skills And Experience
You have:
10+ years' experience in AI/ML design and deployment with a graduate degree or equivalent.
Practical experience with AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL.
Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow), and cloud-native deployment (AWS SageMaker, Azure ML).
Proficient in programming (Python, Java) and DevOps/MLOps best practices.
It will be nice if you have:
Worked with any of the LLM models (Llama family) and LLM agent frameworks like LangChain / CrewAI / AutoGen.
Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts.
Excellent communication and stakeholder management skills.
About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.
What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered.
Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality:
One of the World's Most Ethical Companies by Ethisphere
Gender-Equality Index by Bloomberg
Workplace Pride Global Benchmark
At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.
About The Team
As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a strategic, results-oriented, and hands-on VP of AI and Automation to lead our intelligent automation initiatives and drive the strategy for leveraging AI in transformative business process automation. You will lead a world-class team to design, build, and deploy sophisticated AI and automation solutions that drive efficiency, reduce costs, and create a significant competitive advantage.
What You'll Do
Lead & Inspire: Manage, mentor, and grow a high-performing team of AI and automation engineers, fostering a culture of innovation, collaboration, and execution.
Architect the Future: Design and oversee the implementation of our core AI and automation platforms, including sophisticated Agentic Architectures that enable complex, autonomous workflows.
Drive LLM Strategy: Lead the strategic selection, tuning, optimization, and testing of Large Language Models (LLMs). This includes establishing best practices for advanced Prompt Engineering and building robust Retrieval-Augmented Generation (RAG) pipelines to ensure our models are accurate and relevant.
Operational Excellence: Champion a robust MLOps culture, implementing best-in-class processes for model deployment, monitoring, and lifecycle management.
Secure Our AI: Define and enforce state-of-the-art LLM Security protocols to protect against adversarial attacks, ensure data privacy, and maintain model integrity.
Scale with the Cloud: Own the Cloud Architecture strategy for all AI and automation workloads, ensuring our infrastructure is scalable, cost-effective, and resilient on platforms like AWS, GCP, or Azure.
What You'll Need
10+ years of experience in software engineering, with at least 5+ years in a hands-on senior leadership role managing AI, ML, or automation-focused teams.
Deep, hands-on expertise in the modern AI stack, including LLMs, RAG systems, and vector databases.
Demonstrated experience designing and building complex systems using Agentic Architecture and multi-agent frameworks.
Expert-level understanding of MLOps principles and tools (e.g., Kubeflow, MLflow, Seldon Core).
Strong background in designing and managing scalable Cloud Architecture for AI applications.
In-depth knowledge of LLM Security best practices, including red teaming, output validation, and mitigating common vulnerabilities.
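Stripped of any specific framework, the agentic architectures mentioned above boil down to a loop that plans steps and dispatches them to tools. The toy sketch below uses a hard-coded planner in place of an LLM purely to show the control flow; nothing here reflects the company's actual stack, and real systems would use frameworks such as those named in the listing.

```python
from typing import Callable

# Tiny illustrative tool registry; real agentic stacks add LLM-driven planning,
# memory, and tool selection on top of this basic idea.
TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "echo": lambda text: text,
}

def fake_llm_plan(task: str) -> list[tuple[str, str]]:
    """Stand-in for an LLM planner: returns (tool, argument) steps for a task."""
    if "add" in task:
        return [("calculator", "2 + 2"), ("echo", "done adding")]
    return [("echo", task)]

def run_agent(task: str) -> list[str]:
    observations = []
    for tool_name, arg in fake_llm_plan(task):
        result = TOOLS[tool_name](arg)
        observations.append(f"{tool_name}({arg!r}) -> {result}")
    return observations

print(run_agent("add two numbers"))
```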
Posted 1 day ago
1.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
🌟 We're Hiring: Artificial Intelligence Consultant! 🌟 We're seeking a highly motivated and technically adept Artificial Intelligence Consultant to join our growing Artificial Intelligence and Business Transformation practice. This role is ideal for a strategic thinker with a strong blend of leadership, business consulting acumen, and technical expertise in Python, LLMs, Retrieval-Augmented Generation (RAG), and agentic systems. Location: Remote (Contract to Hire/ 1 Year Renewable) ⏰ Work Mode: Work From Home 💼 Role: Artificial Intelligence Consultant Roles And Responsibilities AI Engagements: Independently manage end-to-end delivery of AI-led transformation projects across industries, ensuring value realization and high client satisfaction. Strategic Consulting & Roadmapping: Identify key enterprise challenges and translate them into AI solution opportunities, crafting transformation roadmaps that leverage RAG, LLMs, and intelligent agent frameworks. LLM/RAG Solution Design & Implementation: Architect and deliver cutting-edge AI systems using Python, LangChain, LlamaIndex, OpenAI function calling, semantic search, and vector store integrations (FAISS, Qdrant, Pinecone, ChromaDB). Agentic Systems: Design and deploy multi-step agent workflows using frameworks like CrewAI, LangGraph, AutoGen or ReAct, optimizing tool-augmented reasoning pipelines. Client Engagement & Advisory: Build lasting client relationships as a trusted AI advisor, delivering technical insight and strategic direction on generative AI initiatives. Hands-on Prototyping: Rapidly prototype PoCs using Python and modern ML/LLM stacks to demonstrate feasibility and business impact. Thought Leadership: Conduct market research, stay updated with the latest in GenAI and RAG/Agentic systems, and contribute to whitepapers, blogs, and new offerings. Essential Skills Education: Bachelor's or Master’s in Computer Science, AI, Engineering, or related field. Experience: Minimum 5 years of experience in consulting or technology roles, with at least 3 years focused on AI & ML solutions. Leadership Quality: Proven track record in leading cross-functional teams and delivering enterprise-grade AI projects with tangible business impact. Business Consulting Mindset: Strong problem-solving, stakeholder communication, and business analysis skills to bridge technical and business domains. Python & AI Proficiency: Advanced proficiency in Python and popular AI/ML libraries (e.g., scikit-learn, PyTorch, TensorFlow, spaCy, NLTK). Solid understanding of NLP, embeddings, semantic search, and transformer models. LLM Ecosystem Fluency: Experience with OpenAI, Cohere, Hugging Face models; prompt engineering; tool/function calling; and structured task orchestration. Independent Contributor: Ability to own initiatives end-to-end, take decisions independently, and operate in fast-paced environments. Preferred Skills Cloud Platform Expertise: Strong familiarity with Microsoft Azure (preferred), AWS, or GCP — including compute instances, storage, managed services, and serverless/cloud-native deployment models. Programming Paradigms: Hands-on experience with both functional and object-oriented programming in AI system design. Hugging Face Ecosystem: Proficiency in using Hugging Face Transformers, Datasets, and Model Hub. Vector Store Experience: Hands-on experience with FAISS, Qdrant, Pinecone, ChromaDB. LangChain Expertise: Strong proficiency in LangChain for agentic task orchestration and RAG pipelines. 
MLOps & Deployment: CI/CD for ML pipelines, MLOps tools (MLflow, Azure ML), containerization (Docker/Kubernetes). Cloud & Service Architecture: Knowledge of microservices, scaling strategies, inter-service communication. Programming Languages: Proficiency in Python and C# for enterprise-grade AI solution development.
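The RAG work described in this listing centers on embedding documents into a vector index and retrieving context for the LLM prompt. The FAISS sketch below uses a deliberately fake word-hash embedding so it stays self-contained; a real pipeline would swap in a sentence-transformer or hosted embedding model, and the sample documents are invented.

```python
import faiss
import numpy as np

DOCS = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above 500 USD.",
    "VPN access is provisioned through the IT portal.",
]

rng = np.random.default_rng(42)
VOCAB: dict[str, np.ndarray] = {}

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: sums a fixed random vector per word (illustration only)."""
    vec = np.zeros(dim, dtype="float32")
    for word in text.lower().split():
        if word not in VOCAB:
            VOCAB[word] = rng.normal(size=dim).astype("float32")
        vec += VOCAB[word]
    return (vec / (np.linalg.norm(vec) + 1e-9)).astype("float32")

index = faiss.IndexFlatIP(64)  # inner product on normalized vectors = cosine similarity
index.add(np.stack([embed(d) for d in DOCS]))

def retrieve(query: str, k: int = 2) -> list[str]:
    _, ids = index.search(embed(query)[None, :], k)
    return [DOCS[i] for i in ids[0]]

# Retrieved passages would be stuffed into the LLM prompt as grounding context.
print(retrieve("Do refunds above 500 USD need approval?"))
```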
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings!
One of our clients, a top MNC giant, is looking for a Data Scientist.
Important note: only candidates who can join immediately or within 7 days should apply.
Base Locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).
Role: Data Scientist
Exp: 4 to 8 Years
Immediate Joiners Only
Skills (must have)
Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
Strong programming skills in languages such as Python, SQL, etc.
Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as Scikit-learn, TensorFlow, PyTorch, etc.
Experience in ETL and data warehouse tools such as Azure Data Factory, Azure Data Lake, or Databricks.
Knowledge of math, probability, and statistics.
Familiarity with a variety of ML algorithms.
Good experience in cloud infrastructure such as Azure (preferred), AWS/GCP.
Exposure to Gen AI, vector DBs, and LLMs (Large Language Models).
Skills (good to have)
Experience in Flask/Django or Streamlit is a bonus.
Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
Good to have experience in Docker, Kubernetes, etc.
Collaborate with software engineers, business stakeholders and/or domain experts to translate business requirements into product features, tools, projects, AI/ML, NLP/NLU and deep learning solutions.
Develop, implement, and deploy AI/ML solutions.
Preprocess and analyze large datasets to identify patterns, trends, and insights.
Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
Deploy applications and AI/ML models into cloud environments such as AWS/Azure/GCP.
Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.
Interested candidates who match the JD and can join ASAP should apply along with the details below:
Total exp:
Relevant exp as a Data Scientist:
Applying for Gurgaon or Bengaluru:
Open for hybrid:
Current CTC:
Expected CTC:
Can join ASAP:
We will call you once we receive your updated profile along with the above-mentioned details.
Thanks,
Venkat Solti
solti.v@anlage.co.in
Posted 1 day ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position - Machine Learning Architect/Evangelist Job Description : We are seeking an experienced Machine Learning Architect to design and implement advanced ML solutions that solve complex business challenges. He/She will lead the development of scalable ML pipelines, leverage cutting-edge tools and frameworks, and ensure seamless deployment and integration in production environments. Responsibilities : Design and build end-to-end ML pipelines, including model training, deployment, and monitoring. Develop scalable ML solutions using supervised, unsupervised, and deep learning techniques. Optimize models with tools like TensorFlow, PyTorch, and Scikit-learn. Implement MLOps practices with tools like MLflow and Kubeflow for CI/CD and model management. Collaborate with data scientists and engineers to integrate ML systems into production. Stay updated on the latest ML advancements and drive innovation. Create prototypes to validate new ideas and document system designs. Mentor junior team members and promote knowledge sharing. Key Skills and Qualifications : Strong expertise in ML algorithms, deep learning frameworks, and statistical modeling. Proficiency in Python or similar languages, and experience with TensorFlow, PyTorch, and Scikit-learn. Hands-on experience with MLOps tools and cloud platforms (AWS, Azure, GCP). Excellent problem-solving and collaboration skills. Experience - 15+ years Location - Pune Work Mode- Hybrid
Posted 1 day ago
2.0 years
0 Lacs
Greater Bengaluru Area
On-site
About the Company 6thStreet.com is an omnichannel fashion & lifestyle destination that offers 1400+ fashion & beauty brands in the UAE, KSA, Kuwait, Oman, Bahrain & Qatar. Customers can shop the latest on-trend outfits, shoes, bags, beauty essentials and accessories from international brands such as Tommy Hilfiger, Calvin Klein, Hugo, Marks & Spencers, Dune London, Charles & Keith, Aldo, Crocs, Birkenstock, Skechers, Levi’s, Nike, Adidas, Loreal and Inglot amongst many more. 6thStreet.com recently opened GCC’s first phygital store at Dubai Hills Mall; an innovative tech-led space which combines the best of both online & offline shopping with online browsing & smart fitting rooms. Overview The ML Engineer will extract insights and build models that will drive key business decisions. The candidate will work closely with other data scientists, software engineers and product managers to design, build, optimize and deploy machine learning systems and solutions. This role is ideal for someone with a strong analytical mindset, a passion for data, and a desire to grow in a fast-paced e-commerce environment. Necessary Skills Python: Proficiency in python, with knowledge of popular libraries like pandas, numpy, scipy, scikit-learn, tensorflow, pytorch SQL: Strong ability to write and optimize complex SQL queries to extract and manipulate large datasets from relational databases Data Analysis & Visualization: Ability to work with large datasets and extract meaningful insights and able to leverage data visualization tools and libraries Data Wrangling & Preprocessing: Expertise in cleaning and transforming raw data into structured formats Statistical Analysis: A solid understanding of descriptive and inferential statistics, including hypothesis testing and probability theory Machine Learning & Deep Learning: Familiarity with supervised and unsupervised learning algorithms such as regression, tree based methods, clustering, boosting and bagging methodologies Machine learning workflows: feature engineering, model training, model optimization , validation and evaluation ML Deployment: Deploying machine learning models to production environments, ensuring they meet the scalability, reliability, and performance requirements DevOps: Git, CI/CD pipelines, dockerization, model versioning (mlflow), monitoring platforms Cloud Platforms: Experience with cloud platforms like AWS, Google Cloud or Azure for deploying models Problem-Solving & Analytical Thinking: Ability to approach complex problems methodically and implement robust solutions Collaboration & Communication: Strong ability to work with cross-functional teams and communicate technical concepts to non-technical stakeholders. 
Adaptability & Learning: Willingness to quickly learn new tools, technologies, and algorithms Attention to Detail: Ability to carefully test and validate models, ensuring they work as intended in production Good to have: Familiarity with big data technologies such as Spark or Hadoop Object-oriented programming (OOP) Knowledge of data privacy and security practices when working with sensitive data Experience working with big data tools (e.g., Apache Kafka, Apache Flink) for streaming data processing Familiarity with feature stores like Feast Experience working with e-commerce data Responsibilities Design and implement machine learning models, algorithms, and systems Build and maintain end-to-end machine learning pipelines: model training, validation, and deployment Experiment with different algorithms and approaches to optimize model performance Collaborate with software engineers, product managers, analysts to build scalable, production-ready solutions Communicate complex technical concepts to non-technical stakeholders Stay updated with the latest advancements in machine learning and deep learning. Evaluate and experiment with new tools, libraries, and algorithms that could improve model performance Collaborate on proof-of-concept (POC) projects to validate new approaches and techniques Benefits Full-time role Competitive salary Company employee discounts across all brands Medical & health insurance Collaborative work environment Good vibes work culture Qualifications Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.) At least 2 years of experience in quantitative analytics or data modeling and development Deep understanding of predictive modeling, machine learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL)
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks, and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
Languages: Python, SQL, Bash
ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases (a hedged RAG sketch follows this listing).
Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use.
Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
Manage the technical roadmap and sprint cadence for a team of 3–5 AI engineers; coach them on best practices.
Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
Evaluate emerging GenAI and agentic tools; run proofs-of-concept and guide build-vs-buy decisions.

Qualifications:
10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
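One way the RAG responsibility above could be wired together, as a minimal sketch rather than a prescribed design: embed documents with sentence-transformers, index them in FAISS, and pass the retrieved context to an LLM. The documents, embedding model, and LLM model name are assumptions for illustration only.

```python
# Hedged RAG sketch: embed -> index in FAISS -> retrieve -> generate.
# Documents, model names, and the prompt wording are illustrative assumptions.
import faiss
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

documents = [
    "Refunds are processed within 5 business days.",              # hypothetical knowledge base
    "Premium support is available 24/7 for enterprise plans.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])   # inner product on normalized vectors ~ cosine
index.add(np.asarray(doc_vectors, dtype="float32"))

def answer(question: str, k: int = 1) -> str:
    # Retrieve the k most similar documents for grounding.
    q_vec = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q_vec, dtype="float32"), k)
    context = "\n".join(documents[i] for i in ids[0])

    # Generate an answer constrained to the retrieved context.
    client = OpenAI()                             # assumes OPENAI_API_KEY is set
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model name; swap per environment
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```

A production pipeline would add chunking, metadata filtering, a managed vector store (Pinecone, Weaviate), and evaluation of groundedness, but the retrieve-then-generate loop is the core pattern.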
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Position: AI Developer
Experience: 5+ Years
Time Zone: Europe (Remote)

Key Skills:
Languages/Cloud: Python, AWS Lambda, AWS Bedrock
Prompt Engineering: Advanced techniques (few-shot, zero-shot, CoT); see the sketch after this listing
Generative AI: GPT, BERT, Hugging Face, AWS Bedrock
Agentic AI & RL: Basic frameworks, OpenAI Gym
Responsible AI: SHAP, LIME, fairness & privacy practices
Deployment & MLOps: AWS Lambda/SageMaker, MLflow, model versioning
Frameworks: PyTorch, TensorFlow (basic level)
Data & CI/CD: EDA, feature engineering, Docker, Git workflows

Responsibilities:
Design optimized prompts for LLMs
Build and deploy generative AI apps
Work on RL-based prototypes and ethical AI practices
Deploy using AWS services and support MLOps pipelines
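A minimal sketch of the few-shot prompting technique named above, sent through AWS Bedrock's converse API from Python. The region, model ID, and example labels are assumptions that will differ by account; the pattern of in-prompt examples followed by the new input is the point.

```python
# Hedged sketch: few-shot classification prompt sent to an LLM on AWS Bedrock.
# Region, model ID, and the label taxonomy are assumptions, not requirements.
import boto3

FEW_SHOT_PROMPT = """Classify the support ticket as BILLING, TECHNICAL, or OTHER.

Ticket: "I was charged twice for my subscription."
Label: BILLING

Ticket: "The app crashes when I open the settings page."
Label: TECHNICAL

Ticket: "{ticket}"
Label:"""

def classify(ticket: str) -> str:
    client = boto3.client("bedrock-runtime", region_name="eu-west-1")  # assumed region
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",              # assumed model ID
        messages=[{"role": "user",
                   "content": [{"text": FEW_SHOT_PROMPT.format(ticket=ticket)}]}],
        inferenceConfig={"maxTokens": 16, "temperature": 0.0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

print(classify("My invoice shows the wrong VAT number."))
```

The same handler body could be dropped into an AWS Lambda function, which matches the deployment stack listed for this role.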
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
Guwahati, Assam
On-site
You are an experienced Software Engineer specializing in Machine Learning with at least 2 years of relevant experience. In this role, you will be responsible for designing, developing, and optimizing machine learning solutions and data systems. Your proven track record in implementing ML models, building scalable systems, and collaborating with cross-functional teams will be essential in solving complex challenges using data-driven approaches.

As a Software Engineer - Machine Learning, your primary responsibilities will include designing and implementing end-to-end machine learning solutions, building and optimizing scalable data pipelines, collaborating with data scientists and product teams, monitoring and optimizing deployed models, staying updated with the latest trends in machine learning, debugging complex issues related to ML systems, and documenting processes for knowledge sharing and clarity.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. Your technical skills should include strong proficiency in Python and machine learning libraries such as TensorFlow, PyTorch, or scikit-learn; experience with data processing tools like Pandas, NumPy, and Spark; proficiency in SQL and database systems; hands-on experience with cloud platforms (AWS, GCP, Azure); familiarity with CI/CD pipelines and Git; and experience with model deployment frameworks like Flask, FastAPI, or Docker (a minimal serving sketch follows this listing). Additionally, you should possess strong analytical skills, leadership abilities to guide junior team members, and a proactive approach to learning and collaboration.

Preferred qualifications include experience with MLOps tools like MLflow, Kubeflow, or SageMaker; knowledge of big data technologies such as Hadoop, Spark, or Kafka; familiarity with advanced ML techniques like NLP, computer vision, or reinforcement learning; and experience in designing and managing streaming data workflows.

Key Performance Indicators for this role include successfully delivering optimized and scalable ML solutions within deadlines, maintaining high model performance in production environments, and ensuring seamless integration of ML models with business applications. Join us in this exciting opportunity to drive innovation and make a significant impact in the field of Machine Learning.
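A minimal sketch of the model-deployment requirement above using FastAPI: load a trained model and expose a prediction endpoint. The model file and feature names are hypothetical placeholders.

```python
# Minimal FastAPI model-serving sketch. "model.joblib" and the feature
# names are hypothetical; any pre-trained scikit-learn estimator would do.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")      # hypothetical pre-trained model

class Features(BaseModel):
    tenure_months: float                 # hypothetical input features
    monthly_spend: float

@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.tenure_months, features.monthly_spend]]
    return {"prediction": int(model.predict(row)[0])}

# Run locally with:  uvicorn main:app --reload
# For Docker, a small image installing fastapi, uvicorn, scikit-learn and
# joblib, plus the model artifact, is usually enough for this pattern.
```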
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms as a part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

Job Description
Core Responsibilities

About the Role:
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.
What You’ll Do
Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference (a hedged anomaly-detection sketch follows this listing)
Engineer features to enrich alerts using service relationships, business context, change history, and topological data
Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
Conduct A/B testing, offline validation, and live performance monitoring of ML models
Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership
Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping
Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies
Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
Participate in on-call rotations and provide operational support as needed

Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field
3+ years of experience building and deploying ML solutions in production environments
2+ years working with AIOps, observability, or real-time operations data
Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow)
Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration
Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
Ability to collaborate effectively with SREs, platform teams, and participate in Agile/DevOps workflows
Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice To Have
Experience building or contributing to AIOps platforms (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.)
Experience working in streaming media, OTT platforms, or large-scale consumer services
Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
Experience with containerized model deployment using Docker or Kubernetes
Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
Experience designing APIs in Python or Go to expose models as services
Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
Certifications in cloud architecture, ML engineering, or data science specialization

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
2-5 Years
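As a hedged illustration of the anomaly-detection responsibility in this listing, the sketch below flags unusual points in a latency metric with an Isolation Forest. The synthetic data, rolling-window features, and contamination rate are illustrative assumptions, not a prescribed approach.

```python
# Hedged sketch: flag anomalous latency samples with an Isolation Forest.
# Data is synthetic; features and thresholds are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-minute request-latency samples (ms) with an injected spike.
rng = np.random.default_rng(0)
latency = rng.normal(120, 10, size=1440)
latency[300:305] += 200

frame = pd.DataFrame({"latency_ms": latency})
# Simple rolling-window features give the model local context.
frame["rolling_mean"] = frame["latency_ms"].rolling(15, min_periods=1).mean()
frame["rolling_std"] = frame["latency_ms"].rolling(15, min_periods=1).std().fillna(0.0)

detector = IsolationForest(contamination=0.01, random_state=0)
frame["anomaly"] = detector.fit_predict(
    frame[["latency_ms", "rolling_mean", "rolling_std"]]
)

# fit_predict marks anomalous rows with -1.
print(frame[frame["anomaly"] == -1].head())
```

In an AIOps setting the anomaly scores would typically feed an alert-scoring or correlation layer rather than paging directly, which is where the entropy-based prioritization mentioned above would sit.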
Posted 1 day ago
3.0 - 8.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As an AI QA Engineer at Techjays, you will have the exciting opportunity to be part of a global team that is shaping the future of artificial intelligence. Techjays is a leading company in the AI space, dedicated to empowering businesses worldwide by creating transformative AI solutions. Our collaborative and agile approach, combined with deep expertise, enables us to deliver impactful technology that drives meaningful change across industries. Join a team of professionals who have honed their skills at renowned companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini.

At Techjays, you will work on cutting-edge projects that redefine industries, leverage the latest technologies, and contribute to solutions that have a real-world impact. If you are passionate about AI, adept at testing complex systems, and committed to maintaining high standards of quality, this role is tailor-made for you.

Your primary responsibilities as an AI QA Engineer will include designing, writing, and executing test plans and test cases for AI/ML-based applications. You will collaborate closely with data scientists, ML engineers, and developers to ensure the quality, safety, and reliability of our AI-powered products and features. Conducting functional, regression, and exploratory testing on AI components and APIs will be a key part of your role (an illustrative API test sketch follows this listing). Additionally, you will validate model outputs for accuracy, fairness, bias, and explainability, and implement adversarial testing, edge cases, and out-of-distribution data scenarios.

To excel in this role, you must possess foundational QA skills with strong knowledge of test design, defect management, and the QA lifecycle. Experience with risk-based testing and QA strategy is essential. A basic understanding of machine learning workflows, training/inference cycles, and AI quality challenges such as bias, fairness, and transparency is also required. Proficiency in Python programming for test automation, hands-on experience with API testing tools, and familiarity with Git, CI/CD pipelines, and integration testing are key technical capabilities for this role.

In addition to technical skills, soft skills such as analytical thinking, attention to detail, and strong collaboration and communication abilities are vital. A proactive work ethic and a passion for learning new technologies and contributing to AI quality practices will set you up for success at Techjays.

Techjays offers competitive compensation packages, paid holidays, flexible paid time away, a casual dress code, and a flexible working environment. Medical insurance covering yourself and your family, professional development opportunities in an engaging, fast-paced environment, and a diverse multicultural work environment are some of the benefits of being part of Techjays. Embrace the opportunity to work in an innovation-driven culture that provides the support and resources needed for your success in shaping the future with AI.
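One way the functional and edge-case testing described above can look in practice is a pytest suite that calls a model-serving endpoint and asserts basic output contracts. The endpoint URL, payload fields, and label set are hypothetical assumptions for illustration.

```python
# Hedged sketch: contract tests for a hypothetical text-classification API.
# The URL, request/response fields, and allowed labels are assumptions.
import pytest
import requests

ENDPOINT = "http://localhost:8000/classify"      # hypothetical inference endpoint
ALLOWED_LABELS = {"positive", "negative", "neutral"}

@pytest.mark.parametrize("text", [
    "I love this product",
    "",                                           # edge case: empty input
    "🔥" * 500,                                   # edge case: long, unusual input
])
def test_classifier_returns_valid_label(text):
    response = requests.post(ENDPOINT, json={"text": text}, timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert body["label"] in ALLOWED_LABELS        # output stays within the contract
    assert 0.0 <= body["confidence"] <= 1.0       # score is a valid probability
```

Fairness, bias, and adversarial testing build on the same pattern: parametrize over demographic or perturbed inputs and assert that outputs stay within agreed bounds.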
Posted 1 day ago
10.0 - 12.0 years
0 Lacs
Delhi, India
On-site
Sr/Lead ML Engineer
Placement type (FTE/C/CTH): C/CTH
Duration: 6 months, with extension
Location: Phoenix, AZ; must be onsite 5 days a week
Start Date: 2 weeks from the offer
Interview Process: One and done
Reason for position: Integrating ML into the Grafana observability platform
Team Overview: Onshore and offshore

Project Description
AI/ML for Observability (AIOps)
Developed machine learning and deep learning solutions for observability data to enhance IT operations.
Implemented time series forecasting, anomaly detection, and event correlation models.
Integrated LLMs using prompt engineering, fine-tuning, and RAG for incident summarization.
Built MCP client-server architecture for seamless integration with the Grafana ecosystem.

Duties/Day to Day Overview

Machine Learning & Model Development
Design and develop ML/DL models for: time series forecasting (e.g., system load, CPU/memory usage), anomaly detection in logs, metrics, or traces, and event classification and correlation to reduce alert noise (a hedged forecasting sketch follows this listing)
Select, train, and tune models using frameworks like TensorFlow, PyTorch, or scikit-learn
Evaluate model performance using metrics like precision, recall, F1-score, and AUC

ML Pipeline Engineering
Build scalable data pipelines for training and inference (batch or streaming)
Preprocess large observability datasets from tools like Prometheus, Kafka, or BigQuery
Deploy models using cloud-native services (e.g., GCP Vertex AI, Azure ML, Docker/Kubernetes)
Maintain retraining pipelines and monitor for model drift

LLM Integration for Observability Intelligence
Implement LLM-based workflows for summarizing incidents or logs
Develop and refine prompts for GPT, LLaMA, or other large language models
Integrate Retrieval-Augmented Generation (RAG) with vector databases (e.g., FAISS, Pinecone)
Control latency, hallucinations, and cost in production LLM pipelines

Grafana & MCP Ecosystem Integration
Build or extend MCP client/server components for Grafana
Surface ML model outputs (e.g., anomaly scores, predictions) in observability dashboards
Collaborate with observability engineers to integrate ML insights into existing monitoring tools

Collaboration & Agile Delivery
Participate in daily stand-ups, sprint planning, and retrospectives
Collaborate with data engineers on pipeline performance and data ingestion, frontend developers on real-time data visualizations, and SRE and DevOps teams on alert tuning and feedback-loop integration
Translate model outputs into actionable insights for platform teams

Testing, Documentation & Version Control
Write unit, integration, and regression tests for ML code and pipelines
Maintain documentation on models, data sources, assumptions, and APIs
Use Git, CI/CD pipelines, and model versioning tools (e.g., MLflow, DVC)

Top Requirements (Must Haves)
Design and develop machine learning algorithms and deep learning applications and systems for observability data (AIOps)
Hands-on experience with time series forecasting/prediction and anomaly detection ML algorithms
Hands-on experience with event classification and correlation ML algorithms
Hands-on experience integrating with LLMs via prompting, fine-tuning, and RAG for effective summarization
Working knowledge of implementing MCP clients and servers for the Grafana ecosystem, or similar exposure

Key Skills
Programming languages: Python, R
ML Frameworks: TensorFlow, PyTorch, scikit-learn
Cloud platforms: Google Cloud, Azure
Front-End Frameworks/Libraries: Experience with frameworks like React, Angular, or Vue.js, and libraries like jQuery
Design Tools: Proficiency in design software like Figma, Adobe XD, or Sketch
Databases: Knowledge of database technologies like MySQL, MongoDB, or PostgreSQL
Server-Side Languages: Familiarity with server-side languages like Python, Node.js, or Java
Version Control: Experience with Git and other version control systems
Testing: Knowledge of testing frameworks and methodologies
Agile Development: Experience with agile development methodologies
Communication and Collaboration: Strong communication and collaboration skills

Experience
Lead: 10 to 12 years (onshore and offshore)
Developers: 6 to 8 years
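A hedged sketch of the time-series forecasting duty above: pull a metric from Prometheus' HTTP API and fit a Holt-Winters model to forecast the next hour. The Prometheus address, PromQL query, sampling step, and seasonal period are assumptions that would change per environment.

```python
# Hedged sketch: forecast a Prometheus metric with Holt-Winters smoothing.
# Server address, query, step, and seasonal period are assumptions.
import pandas as pd
import requests
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Three days of 5-minute samples (two+ daily cycles for the seasonal fit).
resp = requests.get(
    "http://prometheus:9090/api/v1/query_range",      # hypothetical Prometheus server
    params={"query": "avg(rate(node_cpu_seconds_total[5m]))",
            "start": "2024-01-01T00:00:00Z",
            "end": "2024-01-04T00:00:00Z",
            "step": "300"},
    timeout=30,
)
values = resp.json()["data"]["result"][0]["values"]   # [[timestamp, value], ...]
series = pd.Series(
    [float(v) for _, v in values],
    index=pd.to_datetime([int(float(t)) for t, _ in values], unit="s"),
)
series = series.asfreq("5min").interpolate()          # regular grid; fill small gaps

# Seasonal period of 288 assumes a daily cycle at 5-minute resolution.
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=288).fit()
forecast = model.forecast(12)                         # next hour at 5-minute steps
print(forecast)
```

In a Grafana-centered setup, the forecast and its deviation from observed values would typically be written back to a datasource (or exposed via an MCP/API endpoint) so dashboards and alert rules can consume them.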
Posted 1 day ago