
661 SageMaker Jobs - Page 14

JobPe aggregates listings for easy access; applications are completed directly on the original job portal.

3.0 years

25 Lacs

Greater Lucknow Area

Remote


Experience: 3.00+ years
Salary: INR 2,500,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+11:00) Australia/Melbourne (AEDT)
Opportunity Type: Remote
Placement Type: Full-time indefinite contract (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients, Okkular.)

What do you need for this opportunity?
Must-have skills: communication and problem-solving skills, Agentic AI, AWS services (Lambda, SageMaker, Step Functions), fastai, LangChain, Large Language Models (LLMs), Natural Language Processing (NLP), PyTorch, Go, Python

About the Job
Company Description: We are a leading provider of fashion e-commerce solutions, leveraging generative AI to empower teams with innovative tools for merchandising and product discovery. Our mission is to enhance every product page with engaging, customer-centric narratives that accelerate growth and revenue generation. Join us in shaping the future of online fashion retail through cutting-edge technology and unparalleled creativity within the Greater Melbourne Area.

Role Description: This is a full-time remote position based in India as a Senior AI Engineer. The Senior AI Engineer will be responsible for pattern recognition, neural network development, software development, and natural language processing tasks on a daily basis.

Qualifications:
Proficiency in scikit-learn, PyTorch, and fastai for implementing algorithms and training and improving models.
Familiarity with Docker and AWS cloud services such as Lambda, SageMaker, and Bedrock.
Familiarity with Streamlit.
Knowledge of LangChain, LlamaIndex, Ollama, OpenRouter, and other relevant technologies.
Expertise in pattern recognition and neural networks.
Experience in Agentic AI development.
Strong background in computer science and software development.
Knowledge of Natural Language Processing (NLP).
Ability to work effectively in a fast-paced environment and collaborate with cross-functional teams.
Strong problem-solving skills and attention to detail.
A Master's or PhD in Computer Science, AI, or a related field is preferred but not mandatory; strong experience in the field is a sufficient alternative.
Prior experience in fashion e-commerce is advantageous.

Languages: Python, Go
Engagement Type: Direct hire
Job Type: Permanent
Location: Remote
Working time: 2:30 PM IST to 11:30 PM IST

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
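The listing above asks for Agentic AI experience alongside LangChain. As a hedged illustration of the pattern such frameworks wrap, here is the bare observe-act tool loop in plain Python. The tool functions and the rule-based `policy` are hypothetical stand-ins for a real LLM call; only the loop structure mirrors agent frameworks.

```python
# Hypothetical tools; in a real agent these would be catalog/API calls.
def search_catalog(query):
    """Hypothetical product-catalog lookup tool."""
    return "3 results for '%s'" % query

def summarize(text):
    """Hypothetical summarizer tool."""
    return text.upper()

TOOLS = {"search_catalog": search_catalog, "summarize": summarize}

def policy(observation):
    """Stand-in for the LLM: choose the next action from the last observation."""
    if observation is None:
        return ("search_catalog", "linen dress")
    if "results" in observation:
        return ("summarize", observation)
    return ("finish", observation)

def run_agent(max_steps=5):
    """Observe-act loop: run tools until the policy says 'finish'."""
    observation = None
    for _ in range(max_steps):
        action, arg = policy(observation)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # execute the chosen tool
    return observation  # fall back to the last observation
```

Swapping `policy` for an LLM call and `TOOLS` for real integrations is essentially what LangChain-style agent executors do.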

Posted 2 weeks ago

Apply

3.0 years

25 Lacs

Thane, Maharashtra, India

Remote


Posted 2 weeks ago

Apply

3.0 years

25 Lacs

Nagpur, Maharashtra, India

Remote


Posted 2 weeks ago

Apply

3.0 years

25 Lacs

Nashik, Maharashtra, India

Remote


Posted 2 weeks ago

Apply

3.0 years

25 Lacs

Kanpur, Uttar Pradesh, India

Remote


Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Role Description
Role Proficiency: Provide expertise on data analysis techniques using software tools. Under supervision, streamline business processes.

Outcomes:
Design and manage the reporting environment, including data sources, security, and metadata.
Provide technical expertise on data storage structures, data mining, and data cleansing.
Support the data warehouse in identifying and revising reporting requirements.
Support initiatives for data integrity and normalization.
Assess, test, and implement new or upgraded software; assist with strategic decisions on new systems.
Generate reports from single or multiple systems.
Troubleshoot the reporting database environment and associated reports.
Identify and recommend new ways to streamline business processes.
Illustrate data graphically and translate complex findings into written text.
Locate results to help clients make better decisions.
Solicit feedback from clients and build solutions based on that feedback.
Train end users on new reports and dashboards.
Set FAST goals and provide feedback on the FAST goals of reportees.

Measures of Outcomes:
Quality (number of review comments on code written).
Data consistency and data quality.
Number of medium-to-large custom application data models designed and implemented.
Number of results located to help clients make informed decisions.
Number of business processes changed due to vital analysis.
Number of business intelligence dashboards developed.
Number of productivity standards defined for the project.
Number of mandatory trainings completed.

Outputs Expected:
Determine specific data needs: work with departmental managers to outline the specific data needs for each business-method analysis project.
Critical business insights: mine the business's database in search of critical business insights; communicate findings to relevant departments.
Code: create efficient and reusable SQL code for the improvement, manipulation, and analysis of data; follow coding best practices.
Create/validate data models: build statistical models; diagnose, validate, and improve the performance of these models over time.
Predictive analytics: seek to determine likely outcomes by detecting tendencies in descriptive and diagnostic analysis.
Prescriptive analytics: attempt to identify what business action to take.
Code versioning: organize and manage changes and revisions to code using a version control tool (for example, Git or Bitbucket).
Create reports depicting the trends and behaviours found in analyzed data.
Document: create documentation for work performed; additionally, perform peer reviews of others' documentation.
Manage knowledge: consume and contribute to project-related documents, SharePoint libraries, and client universities.
Status reporting: report the status of assigned tasks and comply with project-related reporting standards and processes.

Skill Examples:
Analytical skills: ability to work with large amounts of data, including facts, figures, and number crunching.
Communication skills: communicate effectively with a diverse population at various organizational levels with the right level of detail.
Critical thinking: review numbers, trends, and data to reach original conclusions based on the findings.
Presentation skills: facilitate reports and oral presentations to senior colleagues; strong meeting facilitation skills.
Attention to detail: vigilance in analysis to reach accurate conclusions.
Mathematical skills to estimate numerical data.
Ability to work in a team environment; proactively ask for and offer help.

Knowledge Examples:
Database languages such as SQL.
Programming languages such as R or Python.
Analytical tools and languages such as SAS and Mahout; proficiency in MATLAB.
Data visualization software such as Tableau or Qlik.
Proficiency in mathematics and calculations.
Efficiency with spreadsheet tools such as Microsoft Excel or Google Sheets.
DBMS, operating systems, and software platforms.
Knowledge of the customer domain and sub-domain where the problem is solved.

Additional Comments: Focused on end-to-end machine learning projects using AWS (Glue, SageMaker, CloudWatch) and Python/PySpark.
Skills: Python, ML, SageMaker
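The reusable-SQL reporting duty described above can be sketched with Python's built-in sqlite3 module. The sales table, columns, and figures below are hypothetical, chosen only to show a grouped trend report.

```python
import sqlite3

# In-memory database with a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "2025-01", 120.0), ("North", "2025-02", 150.0),
     ("South", "2025-01", 90.0), ("South", "2025-02", 110.0)],
)

# Reusable report query: monthly totals per region, newest month first.
REPORT_SQL = """
    SELECT region, month, SUM(amount) AS total
    FROM sales
    GROUP BY region, month
    ORDER BY month DESC, region
"""
rows = conn.execute(REPORT_SQL).fetchall()
```

Keeping the query in a named constant (or a view) is one simple way to make reporting SQL reusable across dashboards.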

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site


The candidate should have experience in AI development, including developing, deploying, and optimizing AI and generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end users.

Job Duties and Responsibilities:
Define and maintain project plans, schedules, requirements, and documentation for product deliverables.
Define project scope, deliverables, roles, and responsibilities in collaboration with the Product Owner and stakeholders, per the defined organization framework (XPMC).
Follow agile methodology and maintain the team dashboard (Kanban): assign work to the project team and track and monitor team deliverables.
Provide recommendations based on best practices and industry standards.
Work closely with the team to ensure adherence to schedule timelines.

Key Prerequisites:
Experience in AI and generative AI development.
Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP).
Experience building and fine-tuning generative AI models for applications like chatbots, text summarization, content generation, and image synthesis.
Experience implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT).
Experience in data ingestion and cleaning, feature engineering, and data engineering.
Experience designing and implementing data pipelines for ingesting, processing, and storing large datasets.
Experience in model training and optimization; exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face.
Exposure to optimizing models for performance, scalability, and cost-efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
Hands-on experience monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall.

AI Tools and Platform Expertise: OpenAI, Hugging Face, MLOps tools, and generative-AI-specific tools and libraries for innovative applications.

Technical Skills:
Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and Hugging Face.
Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
Solid understanding of databases (SQL, NoSQL) and big data processing tools (Spark, Hadoop).
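The evaluation metrics named above (accuracy, precision, recall) reduce to a few counts over predictions. A minimal stdlib sketch for binary labels, with hypothetical example data:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # guard empty denominator
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Hypothetical monitoring batch: true labels vs. model predictions.
acc, prec, rec = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In production monitoring, the same counts would typically come from a metrics library (e.g., scikit-learn) over logged predictions rather than hand-rolled code.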

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Role Description
Job Summary: We are seeking a highly skilled Senior Python Developer with expertise in machine learning (ML), large language models (LLMs), and cloud technologies. The ideal candidate will be responsible for end-to-end execution, from requirement analysis and discovery to the design, development, and implementation of ML-driven solutions. The role demands both technical excellence and strong communication skills to work directly with clients, delivering POCs, MVPs, and scalable production systems.

Key Responsibilities:
Collaborate with clients to understand business needs and identify ML-driven opportunities.
Independently design and develop robust ML models, time series models, deep learning solutions, and LLM-based systems.
Deliver proofs of concept (POCs) and minimum viable products (MVPs) with agility and innovation.
Architect and optimize Python-based ML applications, focusing on performance and scalability.
Use GitHub for version control, collaboration, and CI/CD automation.
Deploy ML models on cloud platforms such as AWS, Azure, or GCP.
Follow best practices in software development, including clean code, automated testing, and thorough documentation.
Stay updated on evolving trends in ML, LLMs, and the cloud ecosystem.
Work collaboratively with data scientists, DevOps engineers, and business analysts.

Must-Have Skills:
Strong programming experience in Python and frameworks such as FastAPI, Flask, or Django.
Solid hands-on expertise in ML using scikit-learn, TensorFlow, PyTorch, Prophet, etc.
Experience with LLMs (e.g., OpenAI, LangChain, Hugging Face, vector search).
Proficiency in cloud services like AWS (S3, Lambda, SageMaker), Azure ML, or GCP Vertex AI.
Strong grasp of software engineering concepts: OOP, design patterns, data structures.
Experience with version control systems (Git/GitHub/GitLab) and setting up CI/CD pipelines.
Ability to work independently and solve complex problems with minimal supervision.
Excellent communication and client interaction skills.

Skills: Python, Machine Learning, Machine Learning Models
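The role above calls for time series models (Prophet and similar). As a hedged, dependency-free sketch of the simplest baseline such models are compared against, here is a rolling moving-average forecast; the series and window are hypothetical:

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` steps ahead: each prediction is the mean of the
    last `window` values, and predictions are rolled back into the history."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        pred = sum(history[-window:]) / window
        forecasts.append(pred)
        history.append(pred)  # roll the forecast forward for multi-step horizons
    return forecasts

preds = moving_average_forecast([10, 20, 30])
```

A baseline like this is useful in POCs: a Prophet or deep-learning model that cannot beat it is not worth productionizing.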

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site


Walk-In Interview Details
Dates: 4th to 6th June 2025
Time: 12:00 PM to 4:00 PM
Venue: TELUS Digital, 2nd Floor, Fintech One, GIFT City, Gandhinagar, 382355

Roles & Responsibilities:
Annotate and label datasets accurately using specialized tools and guidelines.
Review and correct existing annotations to ensure data quality.
Collaborate with machine learning engineers and data scientists to understand annotation requirements.
Follow detailed instructions and apply judgment to edge cases and ambiguous data.
Meet project deadlines and maintain high levels of accuracy and efficiency.
Provide feedback to improve annotation guidelines and workflows.
Participate in training sessions to stay updated on evolving tools and techniques.

Requirements:
BA, BBA, B.Com, B.Tech, BCA, and other management streams.
Strong attention to detail and the ability to follow complex instructions.
Basic computer skills and familiarity with data entry or annotation tools.
Good communication skills and the ability to work independently or in a team.
Experience with data labeling tools (e.g., Labelbox, CVAT, Scale AI, Amazon SageMaker Ground Truth) is a plus.
Familiarity with AI/ML concepts is a bonus.

Perks and Benefits:
Salary: 2.5 LPA to 3.0 LPA
Medicare benefits
Cab facility both ways
Medical insurance
Life insurance
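Reviewing annotations for quality, as described above, is commonly quantified with inter-annotator agreement. A stdlib sketch of Cohen's kappa for two annotators labeling the same items; the labels below are hypothetical:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: product of each annotator's label frequencies.
    expected = sum((ca[lbl] / n) * (cb[lbl] / n) for lbl in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)
```

Values near 1.0 indicate strong agreement; values near or below 0 mean the annotators agree no better than chance, which usually signals unclear guidelines.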

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali, Punjab

On-site


Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ years

We are looking for a highly skilled and experienced Python developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django REST Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities:
Develop and maintain web applications using the Django and Flask frameworks.
Design and implement RESTful APIs using Django REST Framework (DRF).
Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
Build and integrate APIs for AI/ML models into existing systems.
Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
Ensure the scalability, performance, and reliability of applications and deployed models.
Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
Write clean, maintainable, and efficient code following best practices.
Conduct code reviews and provide constructive feedback to peers.
Stay up to date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of professional experience as a Python developer.
Proficiency in Python with a strong understanding of its ecosystem.
Extensive experience with the Django and Flask frameworks.
Hands-on experience with AWS services for application deployment and management.
Strong knowledge of Django REST Framework (DRF) for building APIs.
Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
Experience with transformer architectures for NLP and advanced AI solutions.
Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Familiarity with MLOps practices for managing the machine learning lifecycle.
Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
Excellent problem-solving skills and the ability to work independently and as part of a team.
Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B, Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
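Hyperparameter tuning, mentioned above, is at its simplest an exhaustive grid search: score every combination on a validation metric and keep the best. A stdlib sketch; the parameter names and the quadratic scoring function are hypothetical stand-ins for a real validation run:

```python
import itertools

def grid_search(param_grid, score):
    """Try every combination in `param_grid`, return (best_params, best_score)."""
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)  # in practice: train + evaluate on a validation set
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy objective that peaks at lr=0.1, depth=4.
grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4]}
best, val = grid_search(grid, lambda p: -(p["lr"] - 0.1) ** 2 + p["depth"])
```

Real tuning usually swaps this loop for a managed service (e.g., SageMaker hyperparameter tuning jobs) or smarter search strategies such as random or Bayesian search, but the contract is the same: parameters in, validation score out.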

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Delhi, India

Remote


Why NeuraFlash: At NeuraFlash, we are redefining the future of business through the power of AI and groundbreaking technologies like Agentforce. As a trusted leader in AI, Amazon, and Salesforce innovation, we craft intelligent solutions—integrating Salesforce Einstein, Service Cloud Voice, Amazon Connect, Agentforce and more—to revolutionize workflows, elevate customer experiences, and deliver tangible results. From conversational AI to predictive analytics, we empower organizations to stay ahead in an ever-evolving digital landscape with cutting-edge, tailored strategies. We are proud to be creating the future of generative AI and AI agents. Salesforce has launched Agentforce, and NeuraFlash was selected as the only partner for the private beta prior to launch. Post-launch, we've earned the distinction of being Salesforce's #1 partner for Agentforce, reinforcing our role as pioneers in this transformative space. Be part of the NeuraFlash journey and help shape the next wave of AI-powered transformation. Here, you'll collaborate with trailblazing experts who are passionate about pushing boundaries and leveraging technologies like Agentforce to create impactful customer outcomes. Whether you're developing advanced AI-powered bots, streamlining business operations, or building solutions using the latest generative AI technologies, your work will drive innovation at scale. If you're ready to make your mark in the AI space, NeuraFlash is the place for you. AS AN AWS MANAGER / SR. 
MANAGER, YOU WILL HAVE THE OPPORTUNITY TO EXECUTE THE FOLLOWING:
Managerial Roles & Responsibilities
Act as the AWS subject matter expert, providing leadership and guidance to internal teams
Coach, mentor, and develop junior AWS team members while setting clear goals and expectations
Conduct regular performance reviews, 1-on-1 meetings, and personal development planning
Manage project resource staffing, utilization, and capacity planning while actively contributing to talent acquisition
Drive hiring strategies to support organizational growth
Build and lead high-performing teams, fostering a collaborative and motivated work culture
Establish and oversee OKRs, ensuring alignment with business objectives
Collaborate with business leaders to prioritize initiatives and drive impactful outcomes
Represent the company and team in talent acquisition efforts, assessing cultural fit and promoting an engaging and inclusive workplace
Balance people management responsibilities with individual contributions as an AWS Solution/Technical Architect
Technical Roles & Responsibilities
Demonstrate deep expertise in AWS infrastructure, security, and compliance, ensuring alignment with best practices and regulatory requirements
Architect, implement, and optimize AWS solutions, focusing on scalability, cost-efficiency, and resilience
Oversee cloud governance, automation, and DevOps best practices to enhance operational efficiency
Lead complex AWS projects, integrating services like Amazon Connect, Amazon Lex, AWS SageMaker, Amazon Q, and Amazon Bedrock
Drive innovation in Cloud Contact Center as a Service (CCaaS) by leveraging AWS and third-party platforms (Genesys, Twilio, Avaya, NICE CX, TalkDesk, Five9, RingCentral)
Ensure seamless integration between Amazon Connect and Salesforce Service Cloud (SCV BYOA, SCV Bundle)
Identify and address technical challenges, proactively resolving roadblocks for teams
Advocate for AWS best practices and provide technical leadership in solution design
Qualifications
Minimum 3 years of experience leading and managing technical teams
For Sr. Manager roles, experience managing Manager(s) is an added advantage
AWS Solution Architect Associate Certification
Strong expertise in AWS cloud architecture, security, compliance, and automation
Proven track record of leading technical teams while driving innovation and efficiency
Experience in the Contact Center domain, particularly with Amazon Connect
Knowledge of AI/ML-driven solutions and AWS services such as SageMaker, Bedrock, and Amazon Q
Experience integrating Amazon Connect with Salesforce Service Cloud models (SCV BYOA, SCV Bundle) is a plus
Background in Cloud Contact Center solutions such as Genesys, Twilio, Avaya, NICE CX, TalkDesk, Five9, or RingCentral is preferred
What's it like to be a part of NeuraFlash?
Remote & In-Person: Whether you work out of our HQ in Massachusetts, one of our regional hubs, or you're one of the over half of our NeuraFlash Family who work remotely, we're focused on keeping everyone connected and unified as one team.
Travel: Get ready to pack your bags and hit the road! For certain roles, travel is an exciting part of the job, with an anticipated travel commitment of up to 25%. So, if you have a passion for adventure and don't mind a little jet-setting, this opportunity could be your ticket to exploring new places while making a positive impact on clients.
Flexibility: Do you have to take the dog to the vet, pick up the kids from school, or the in-laws from the airport? We know that a perfect 9-5 isn't possible. So if you have to jump out to do any of those, no problem! We build a culture of trust and understanding. We value good work, not the hours in which you get it done.
Collaboration: You have a voice here! If you work with a team of smart people like we do, it's a no-brainer to take suggestions and feedback on how to keep NeuraFlash thriving.
Our executive team holds town halls and company meetings where they address any suggestions or questions asked, no matter how big or small.
Celebrate Often: We take our work seriously, but we don't take ourselves too seriously. Whether it is an arm-wrestling contest, a costume party, or ugly holiday sweaters, our teams love to have fun. And while we work hard, we don't forget to slow down and celebrate the big things and the small things together.
Location: NeuraFlash strives to provide you with the flexibility to work in the location that makes the most sense for your lifestyle. For those who prefer an office setting, this role may be based in any of our hub locations within the United States. If you prefer to work from home, we can accommodate remote locations for our employees based in the United States, anywhere within Alberta, British Columbia, or Ontario for our Canada-based employees, anywhere in India for our India-based employees, and anywhere within Colombia for our Colombia-based employees!

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Linkedin logo

Title: Sr. Data Scientist/ML Engineer (4+ years)
Required Technical Skillset
Languages: Python, PySpark
Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch
Libraries: NumPy, Pandas, Matplotlib, SciPy, boto3
Databases: Relational (PostgreSQL), NoSQL (MongoDB)
Cloud: AWS
Other Tools: Jenkins, Bitbucket, JIRA, Confluence
A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems.
The key responsibilities of a machine learning engineer include:
Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models.
Designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score.
Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker.
Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency.
Working closely with software engineers, product managers, and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization.
Requirements And Skills
Mathematics and Statistics: A strong foundation in mathematics and statistics is essential.
They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms.
Programming Skills: Should be proficient in programming languages such as Python, and able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms.
Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning.
Data Analysis and Visualization: Should be able to analyze and manipulate large data sets, and be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data.
Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras, and be able to build and train deep neural networks for various applications.
Big Data Technologies: Should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases, and be familiar with distributed computing and parallel processing to handle large data sets.
Software Engineering: Should have a good understanding of software engineering principles such as version control, testing, and debugging, and be able to work with software development tools such as Git, Jenkins, and Docker.
Communication and Collaboration: Should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders.
(ref:hirist.tech)
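The evaluation step above names accuracy, precision, recall, and F1. As a minimal sketch (the label vectors are illustrative, not output from any real model), these metrics reduce to simple ratios over confusion-matrix counts:

```python
# Illustrative sketch: computing accuracy, precision, recall, and F1
# for a binary classification task, using only the standard library.

def classification_metrics(y_true, y_pred):
    """Compute the four common classification metrics from label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # illustrative model predictions
print(classification_metrics(y_true, y_pred))
```

In practice `sklearn.metrics.classification_report` computes the same quantities directly from the prediction arrays.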

Posted 2 weeks ago

Apply

10.0 - 15.0 years

25 - 35 Lacs

Gurugram, Bengaluru

Hybrid

Naukri logo

Role & responsibilities
Analyze, investigate, and recommend solutions for continuous improvement: process enhancements, pain points, and more efficient workflows
Create templates, standards, and models to facilitate future implementations, and adjust priorities when necessary
Be a collaborative communicator with architects, designers, business system analysts, application analysts, operations teams, and testing specialists to deliver fully automated ALM systems
Confidently speak up, bring people together, facilitate meetings, record minutes and actions, and rally the team towards a common goal
Deploy, configure, manage, and perform ongoing maintenance of technical infrastructure, including all DevOps tooling used by our Canadian IT squads
Set up and maintain fully automated CI/CD pipelines for multiple Java / .NET environments using tools like Bitbucket, Jenkins, Ansible, Docker, etc.
Guide development teams with the preparation of releases for production. This may include assisting in the automation of performance tests, validation of infrastructure requirements, and guiding the team with respect to system decisions
Create or improve automated deployment processes, techniques, and tools
Troubleshoot and resolve technical operational issues related to IT infrastructure
Review and analyze organizational needs and goals to determine future impacts to applications and systems
Ensure information security standards and requirements are incorporated into all solutions
Stay current with trends in emerging technologies and how they could apply to Sun Life
Preferred candidate profile
10+ years of continuous integration and delivery (CI/CD) experience in a systems development life cycle environment using Bitbucket, Jenkins, CDD, etc.
Self-sufficient and experienced with either modern programming languages (e.g., Java or C#) or scripting languages such as Python (including SageMaker scripting), YAML, or similar
Working knowledge of SQL, Tableau, Grafana
Advanced knowledge of DevOps with a security and automation mindset
Knowledge of using and configuring build and orchestration tools such as Jenkins, SonarQube, Checkmarx, Snyk, Artifactory, Azure DevOps, Docker, Kubernetes, OpenShift, Ansible, Continuous Delivery Director (CDD)
Advanced knowledge of deployment (e.g., Ansible, Chef) and containerization (Docker/Kubernetes) tooling
IaaS/PaaS/SaaS deployment and operations experience
Experience with native mobile development on iOS and/or Android is an asset
Experience with source code management tools such as Bitbucket, Git, TFS

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Experience: 5-10 years
Location: Hyderabad/Chennai
Must-Have (ideally no more than 3-5):
1. In-depth knowledge of PySpark, including reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
2. In-depth knowledge of developing, training, and deploying ML models
3. Knowledge of machine learning concepts and ML algorithms
Good-to-Have:
1. Exposure to job scheduling and monitoring environments (e.g., Control-M)
2. Exposure to any ETL tool
3. Cloud migration experience
Responsibility of / Expectations from the Role
Developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
Building scalable and reusable code for optimized data retrieval and movement across sources
Developing libraries and maintaining processes for the business to access data, and writing MapReduce programs
Writing scalable and maintainable Python scripts for data transfers
Assessing, prioritizing, and guiding the team in designing and developing features as per business requirements
Fetching data from various sources and analyzing it to better understand how the business performs, and building AI tools that automate certain processes within the environment
Communicating complex data in an accessible way, with the ability to visualize findings
Building, training, and deploying ML models into a production-ready hosted environment such as AWS SageMaker
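The read → merge → enrich → load flow described above can be sketched as follows; pandas is used here so the example is self-contained, but the PySpark API is analogous (`spark.read`, `DataFrame.join`, `withColumn`, `DataFrame.write`). The data and column names are illustrative:

```python
# Sketch of a merge-and-enrich step, assuming illustrative toy data.
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3],
                       "customer_id": [10, 20, 10],
                       "amount": [100.0, 250.0, 40.0]})
customers = pd.DataFrame({"customer_id": [10, 20],
                          "segment": ["retail", "enterprise"]})

# Merge (PySpark analogue: orders.join(customers, "customer_id", "left"))
enriched = orders.merge(customers, on="customer_id", how="left")

# Enrich with a derived column (PySpark analogue: withColumn + when/otherwise)
enriched["high_value"] = enriched["amount"] > 200

print(enriched[["order_id", "segment", "high_value"]])
```

The final step would then write `enriched` to the target destination (e.g., `DataFrame.write.parquet` in PySpark).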

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Join us as a Machine Learning Engineer
We're looking for someone to deploy, automate, maintain and monitor machine learning models and algorithms to make sure they work effectively in a production environment.
Day-to-day, you'll collaborate with colleagues to design and develop state-of-the-art machine learning products which power our group for our customers.
This is your opportunity to turn your interests into a diverse and rewarding career, as you solve new problems and create smarter solutions in a non-stop innovation environment.
What you'll do
Your daily responsibilities will see you codifying and automating machine learning model production, including pipeline optimisation, tuning and fault finding, as well as transforming data science prototypes and applying appropriate machine learning algorithms and tools. We'll need you to deploy and maintain adopted end-to-end solutions, including building metrics to improve system performance and identifying and resolving differences in data distribution which affect model performance. You'll also maintain knowledge of data science and machine learning.
In Addition, You'll Be Responsible For:
Understanding the needs of our business stakeholders, and how machine learning solutions meet those needs to support the achievement of our business strategy
Working with colleagues to produce machine learning models, including pipeline design, development, testing and deployment, to carry the intent and knowledge into production
Creating frameworks to make sure the monitoring of machine learning models within the production environment is robust
Delivering models that adhere to expected quality and performance while understanding and addressing any shortfalls, for example through retraining
Working in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes
The skills you'll need
To be successful in this role, you'll have an academic background in a STEM discipline, like Mathematics, Physics, Engineering or Computer Science. You'll need experience with machine learning on large datasets and an understanding of machine learning approaches and algorithms. Alongside this, you'll have experience of building, testing, supporting and deploying machine learning models into a production environment, using modern CI/CD tools, like TeamCity and CodeDeploy. You'll also have good communication skills to engage with a wide range of stakeholders.
Furthermore, You'll Need:
Experience of using programming and scripting languages, such as Python and Bash
An understanding of how to synthesise, translate and visualise data and insights for key stakeholders
Understanding of the capabilities of, and experience with, Large Language Models and their APIs
Ability to read and understand a large documentation base, as well as contribute to it
Desire to understand the business requirements and limitations, and the expertise to make relevant suggestions
Strong software engineering, systems architecture, and unit testing capabilities
Experience with AWS or other cloud providers
Experience with GitLab CI/CD pipelines for automated testing and deployments
Experience using pipeline tools such as Apache Airflow, Amazon SageMaker or similar
Familiarity with SQL
Experience with MLOps and model monitoring tools such as Splunk, Comet ML, etc.
An understanding of how to present data and insights to key stakeholders
Financial services knowledge and the ability to identify wider business impacts, risks and opportunities, making connections across key outputs and processes
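One common way to quantify the "differences in data distribution which affect model performance" mentioned in this role is the Population Stability Index (PSI) between a training-time and a production-time feature histogram. This is a sketch under illustrative bin proportions, not a method prescribed by the posting:

```python
# Sketch: Population Stability Index between two binned distributions.
# Bin proportions below are illustrative, not from a real model.
import math

def psi(expected, actual, eps=1e-6):
    """PSI between two distributions given as lists of bin proportions."""
    score = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # guard against empty bins before taking the log
        a = max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

train_bins = [0.25, 0.25, 0.25, 0.25]  # feature proportions at training time
prod_bins = [0.40, 0.30, 0.20, 0.10]   # feature proportions in production

drift = psi(train_bins, prod_bins)
# A common rule of thumb treats PSI > 0.2 as significant drift
print(f"PSI = {drift:.3f}, significant drift: {drift > 0.2}")
```

A monitoring framework of the kind the role describes would compute this per feature on a schedule and alert (or trigger retraining) when the score crosses a threshold.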

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Title: AI/ML Developer
Duration: 6 months (expected to be longer)
Work Location & Requirement: Chennai, onsite at least 3-4 days a week
Position Summary: We are seeking a highly skilled and motivated Development Lead with deep expertise in ReactJS, Python, and AI/ML DevOps, along with working familiarity with AWS cloud services. This is a hands-on individual contributor role focused on developing and deploying a full-stack AI/ML-powered web application. The ideal candidate should be passionate about building intelligent, user-centric applications and capable of owning the end-to-end development process.
Position Description:
Design and develop intuitive and responsive web interfaces using ReactJS.
Build scalable backend services and RESTful APIs using Python frameworks (e.g., Flask, FastAPI, or Django).
Integrate AI/ML models into the application pipeline and support inferencing, monitoring, and retraining flows.
Automate development workflows and model deployments using DevOps best practices and tools (Docker, CI/CD, etc.).
Deploy applications and ML services on AWS infrastructure, leveraging services such as EC2, S3, Lambda, SageMaker, and EKS.
Ensure performance, security, and reliability of the application through testing, logging, and monitoring.
Collaborate with data scientists, designers, and product stakeholders to refine and implement AI-powered features.
Take ownership of application architecture, the development lifecycle, and release management.
Minimum Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
8+ years of hands-on experience in software development.
Strong expertise in ReactJS- and NodeJS-based web application development.
Proficient in Python for backend development and AI/ML model integration.
Experience with at least one AI/ML framework, including LLMs.
Solid understanding of DevOps concepts for ML workflows: containerization, CI/CD, testing, and monitoring.
Experience deploying and operating applications in AWS cloud environments.
Self-driven, with excellent problem-solving skills and attention to detail.
Strong communication skills and the ability to work independently in an agile, fast-paced environment.
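The inferencing flow this role describes, i.e. a backend service that accepts features and returns model predictions, can be sketched framework-agnostically with the standard library (in practice Flask or FastAPI behind SageMaker or EKS would replace the stdlib server). The model here is a stub, and the `/invocations` path simply mirrors SageMaker's endpoint convention:

```python
# Minimal model-inference endpoint sketch using only the stdlib.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stub model: replace with a real loaded model's predict call."""
    return {"score": round(sum(features) / max(len(features), 1), 3)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = predict(json.loads(body)["features"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST a feature vector, read back the prediction
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/invocations",
    data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
    headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    response = json.loads(resp.read())
server.shutdown()
print(response)
```

The same request/response contract carries over unchanged when the stub is replaced by a real model and the server by a production framework.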

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.
Role Overview: We are looking for a Data Scientist to build intelligent, data-driven solutions for our client that enable impactful decisions. This role requires contributions across the data science lifecycle, from data wrangling and exploratory analysis to building and deploying machine learning models. Whether you're just getting started or have years of experience, we're looking for individuals who are curious, analytical, and driven to make a difference with data.
Responsibilities:
Design, develop, and deploy machine learning models and analytical solutions
Conduct exploratory data analysis and feature engineering
Own or contribute to the end-to-end data science pipeline: data cleaning, modeling, validation, and deployment
Collaborate with cross-functional teams (engineering, product, business) to define problems and deliver measurable impact
Translate business challenges into data science problems and communicate findings clearly
Implement A/B tests, statistical tests, and experimentation strategies
Support model monitoring, versioning, and continuous improvement in production environments
Evaluate new tools, frameworks, and best practices to improve model accuracy and scalability
Required Skills:
Strong programming skills in Python, including libraries such as pandas, NumPy, scikit-learn, matplotlib, and seaborn
Proficiency in SQL; comfortable querying large, complex datasets
Sound understanding of statistics, machine learning algorithms, and data modeling
Experience building end-to-end ML pipelines
Exposure to or hands-on experience with model deployment tools such as FastAPI, Flask, and MLflow
Experience with data visualization and insight communication
Familiarity with version control tools (e.g., Git) and collaborative workflows
Ability to write clean, modular code and document processes clearly
Nice to Have:
Experience with deep learning frameworks like TensorFlow or PyTorch
Familiarity with data engineering tools like Apache Spark, Kafka, Airflow, and dbt
Exposure to MLOps practices and managing models in production environments
Working knowledge of cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, BigQuery, Vertex AI)
Experience designing and interpreting A/B tests or causal inference models
Prior experience in high-growth startups or cross-functional leadership roles
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Engineering, or a related field
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
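The A/B testing work listed in this role can be sketched as a two-proportion z-test comparing conversion rates between a control and a variant, using only the standard library. The counts below are illustrative:

```python
# Sketch: two-sided two-proportion z-test for an A/B experiment.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts: 10% vs 12.5% conversion on 2000 users each
z, p = two_proportion_ztest(conv_a=200, n_a=2000, conv_b=250, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```

Libraries such as `statsmodels` provide the same test ready-made; the point of the sketch is that the experimentation strategy reduces to a pooled standard error and a normal tail probability.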

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Description and Requirements
"At BMC trust is not just a word - it's a way of life!"
We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud!
We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!
The DSOM product line includes BMC's industry-leading Digital Services and Operation Management products. We have many interesting SaaS products in the fields of predictive IT service management, automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees' innovative ideas. So, if innovation is close to your heart - this is the place for you!
BMC is looking for an experienced Data Science Engineer with hands-on experience with classical ML, deep learning networks, and Large Language Models to join us and design, develop, and implement microservice-based edge applications using the latest technologies. In this role, you will be responsible for end-to-end design and execution of BMC Data Science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advanced analysis, strategies, and policies.
Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
Ideate, design, implement and maintain an enterprise business software platform for edge and cloud, with a focus on Machine Learning and Generative AI capabilities, using mainly Python
Work with a globally distributed development team to perform requirements analysis, write design documents, and design, develop and test software development projects
Understand real-world deployment and usage scenarios from customers and product managers, and translate them into AI/ML features that drive the value of the product
Work closely with product managers and architects to understand requirements, present options, and design solutions
Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption
Analyze and clearly communicate, both verbally and in written form, the status of projects or issues, along with risks and options, to the stakeholders
To ensure you're set up for success, you will bring the following skillset & experience:
You have 8+ years of hands-on experience in data science or machine learning roles
You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains
You have a strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement
You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments
You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost)
Whilst these are nice to have, our team can help you develop the following skills:
Experience with digital twins, real-time analytics, or streaming data systems
Contribution to open-source ML/AI/IoT projects or relevant publications
Experience with Agile development methodology and best practices in unit testing
Experience with Kubernetes (kubectl, helm) will be an advantage
Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.)
Our commitment to you!
BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!
If after reading the above, you're unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas!
BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.
BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 8,047,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits.
We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.
(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and how to apply.
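The sensor-data anomaly detection this role centres on can be sketched as a rolling z-score check that flags readings far from the recent mean. The window size, threshold, and sensor values are illustrative choices, not anything specified by the posting:

```python
# Sketch: rolling z-score anomaly detection over a sensor time series.
import statistics

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        std = statistics.pstdev(past)
        if std > 0 and abs(series[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

# Illustrative temperature readings with one spike at index 6
sensor = [20.0, 20.5, 19.8, 20.2, 20.1, 20.3, 35.0, 20.0, 19.9, 20.4]
print(rolling_zscore_anomalies(sensor))
```

In a production IoT setting the same check would typically run incrementally over a stream, with per-sensor baselines and a cooldown after each alert.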

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad

On-site

Summary
Responsible for designing and developing a cutting-edge AI and Generative AI infrastructure on the AWS Cloud platform and COLO, tailored for pharmaceutical business use-cases. The platform will facilitate biomedical research scientists and other business users in early molecule development and other research activities by providing robust, scalable, and secure computing resources.
About the Role
MAJOR ACCOUNTABILITIES:
Architect and Design: Design and develop a GPU-based AI infrastructure platform, with a focus on supporting Generative AI workloads and advanced analytics for pharma business use-cases like BioNeMo, AlphaFold, ESMFold, OpenFold, ProtGPT2 and the NVIDIA Clara suite.
Platform Development: Work with biomedical research scientists to develop and implement technical solutions for ML/Ops (Run:AI) hosted on a Kubernetes (EKS) cluster.
Data Management: Oversee the design and implementation of data storage, retrieval, and processing pipelines, ensuring the efficient handling of large datasets, including genomics and chemical compound data.
Security and Compliance: In collaboration with cloud domain security architects, implement robust security measures for the multi-cloud environment and ensure compliance with relevant industry standards, particularly in handling business-sensitive data.
Collaboration: Work closely with biomedical research and data scientists and other business stakeholders to understand their needs and translate them into technical solutions.
Performance Optimization: Optimize the performance and cost-efficiency of the platform, including monitoring and scaling resources as needed.
Innovation: Stay updated with the latest trends and technologies in AI and cloud infrastructure, continuously exploring new ways to enhance the platform's capabilities.
Additional specifications required for the role:
Bachelor's degree in Information Technology, Computer Science, or Engineering.
AWS Solutions Architect certification (Professional)
5+ years of strong technical hands-on experience delivering infrastructure and platform services across geographic and business boundaries
Experience working on GPU-based AI infrastructure; experience with NVIDIA DGX infrastructure is highly preferred
Deep understanding of the architecture and design of platform engineering products, focused mainly on data science, ML/Ops, and bioscience or pharma Gen AI foundational models; experience with NVIDIA BioNeMo or Clara is highly preferred
Extensive experience building infrastructure solutions on AWS, particularly with services like AWS Bedrock, Amazon Q, SageMaker, and ECS/EKS
Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes
Experience with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and monitoring solutions
Excellent skills in collaborating with business users and the product team, operationalizing the delivered products, and working closely with Security on implementing compliance
Good knowledge of implementing a well-defined, industry-standard change management process for the platform and its products
A well-structured use-case onboarding process, with documentation for platform products and implementations
Experience with DevOps orchestration/configuration/continuous integration management technologies
Good understanding of High Availability and Disaster Recovery concepts for infrastructure
Ability to analyze and resolve complex infrastructure resource and application deployment issues
KPIs:
Adherence to the Novartis IT quality standards
Deliver on time
Cost optimization
Completeness and quality of deliverables
Customer feedback (expectations met/exceeded)
Application onboarding delivery success
Actively contribute to the business with innovative solutions that show results in the form of cost optimization and/or growth of the business's top-line revenue
LANGUAGES:
  • Excellent written, presentation, and verbal communication skills
  • Fluent in English (written & spoken); additional languages a plus

COMPETENCY PROFILE:
  • Technical Leadership
  • DevOps, CI/CD
  • Scrum Methodology
  • Agile Software Development
  • System Integration and Build
  • Problem Solving / Root Cause Analysis
  • Cloud services monitoring & cost optimization

Why Novartis? Our purpose is to reimagine medicine to improve and extend people’s lives and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people. It is our associates that drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture You’ll receive: You can find everything you need to know about our benefits and rewards in the Novartis Life Handbook. https://www.novartis.com/careers/benefits-rewards Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve. Join our Novartis Network: If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you?
Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division Operations Business Unit Universal Hierarchy Node Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Technology Transformation Job Type Full time Employment Type Regular Shift Work No

Posted 2 weeks ago

Apply

3.0 years

2 - 7 Lacs

Bengaluru

On-site

Senior Machine Learning Engineer - Recommender Systems
Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey. Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for an enthusiastic, independent thinker who excels in a collaborative environment across various disciplines and is at ease interacting with a diverse range of individuals and technology stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience.

About the Role: As a Machine Learning Engineer, you will:
  • Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender system objectives.
  • Create machine learning solutions that are scalable, dependable, and secure.
  • Craft and sustain technical outputs such as design documentation and representative models.
  • Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews.
  • Provide expert oversight, guidance on implementation, and solutions for technical challenges.
  • Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services.
  • Promote a team-focused culture that values information sharing and diverse viewpoints.
  • Cultivate an environment of continual enhancement, learning, innovation, and deployment.

About You: You are an excellent candidate for the role of Machine Learning Engineer if you possess:
  • At least 3 years of experience in addressing practical machine learning challenges, particularly with Recommender Systems, to enhance user efficiency, reliability, and consistency.
  • A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices.
  • A minimum of 2 years of experience with cloud technologies (AWS SageMaker; AWS is preferred), including services, networking, and security principles.
  • Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products.
  • Proficient Python programming skills, SQL, and data modeling expertise, with DBT considered a plus.
  • Familiarity with Spark, Airflow, PyTorch, Scikit-learn, Pandas, Keras, and other relevant ML libraries.
  • Experience in leading and supporting engineering teams.
  • A robust background in crafting data science and machine learning solutions.
  • A creative, resourceful, and effective problem-solving approach.

#LI-HG1

What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. 
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 2 weeks ago

Apply

10.0 years

2 - 5 Lacs

Bengaluru

On-site

About this opportunity
As a Senior Machine Learning Engineer (SMLE), you will lead efforts for AI model deployment at scale, involving edge interfacing, ML pipelines, and the design of supervision and alerting systems for ML models. You are a specialist software engineer with experience building large-scale systems who enjoys optimizing and evolving them.

What you will do:
  • Lead analysis of ML-driven business needs and opportunities for Ericsson and strategic customers.
  • Define model validation strategy and establish success criteria in data science terms.
  • Architect and design data flow and machine learning model implementation for production deployment.
  • Drive rapid development of minimum viable solutions and leverage existing and new data sources.
  • Develop solutions using Generative AI and RAG approaches.
  • Design near real-time streaming and batch applications, ensuring scalability and high availability.
  • Conduct performance analysis and tuning, and apply best practices in architecture and design.
  • Document solutions and support reviews; contribute to product roadmap and backlog governance.
  • Manage system packaging, software versioning, and change management.
  • Perform design and code reviews, focusing on security and functional requirements.
  • Collaborate with product teams to integrate ML models into Ericsson offerings.
  • Advocate for new technologies within ML communities and mentor junior team members.
  • Build ML competency within Ericsson and contribute to cross-functional initiatives.

You will bring:
  • Proficiency in Python with strong programming skills in C++/Scala/Java.
  • Demonstrated expertise in implementing diverse machine learning techniques.
  • Skill in using ML frameworks such as PyTorch, TensorFlow, and Spark ML.
  • Experience designing cloud solutions on platforms like AWS, utilizing services like SageMaker, EKS, Bedrock, and Generative AI models.
  • Expertise in containerization and Kubernetes in cloud environments.
  • Familiarity with Generative AI models, RAG pipelines, and vector embeddings.
  • Competence in big data storage and retrieval strategies, including indexing and partitioning.
  • Experience with big data technologies like Spark, Kafka, MongoDB, and Cassandra.
  • Skill in API design and development for AI/ML models.
  • Proven experience writing production-grade software.
  • Competence in code repository management (e.g., Git) and CI/CD pipelines.
  • Extensive experience in model development and lifecycle management in one or more industry/application domains.
  • Understanding and application of security: authentication and authorization methods, SSL/TLS, network security (firewalls, NSG rules, virtual networks, subnets, private endpoints, etc.), and data privacy handling and protection.
  • Degree in Computer Science, Data Science, AI, Machine Learning, Electrical Engineering, or related fields from a reputable institution (Bachelor's, Master's, or Ph.D.).
  • 10+ years of overall industry experience, with 5+ years in the AI/ML domain.

Posted 2 weeks ago

Apply

6.0 - 8.0 years

20 - 30 Lacs

Thāne

On-site

Key Responsibilities:
  • Develop and Fine-Tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques.
  • Build RAG Pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant.
  • Implement and Orchestrate Agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications.
  • Deploy Models at Scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods.
  • Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers.
  • Ensure Security and Compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools.
  • Optimize and Monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation.
  • Work with AWS Services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions.
  • Contribute to Use Cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems.
  • Cross-functional Collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products.

Required Skills and Experience:
  • Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini).
  • Strong expertise in prompt engineering, chain-of-thought reasoning, and fine-tuning methods.
  • Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant).
  • Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks.
  • Model deployment skills using HuggingFace, vLLM, and Ollama, including handling LoRA/QLoRA, PEFT, and GGUF models.
  • Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch.
  • Strong coding ability in Python or similar programming languages.
  • Experience with MLOps/LLMOps for monitoring, evaluation, and cost management.
  • Familiarity with security standards: guardrails, PII protection, secure API interactions.
  • Use Case Delivery Experience: Proven record of delivering AI copilots, summarization engines, or enterprise GenAI applications.

Experience:
  • 6-8 years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment.
  • Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models.
  • Proficiency in designing scalable solutions using LangChain and vector databases.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Benefits: Health insurance
Schedule: Monday to Friday
Work Location: In person
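The chunking and retrieval steps at the heart of a RAG pipeline can be illustrated without any external service. The sketch below is a hedged, minimal illustration in plain Python: the overlapping character chunker mimics a common chunking strategy, and a toy bag-of-words embedding stands in for a real embedding model and vector database. All function names (`chunk_text`, `embed`, `retrieve`) are hypothetical, not from any listed framework.

```python
import math

def chunk_text(text, size=60, overlap=15):
    """Split text into overlapping character chunks (a common RAG strategy)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def tokenize(text):
    return [w.strip(".,") for w in text.lower().split()]

def build_vocab(texts):
    """Assign each distinct word an index; a real system uses a learned embedding."""
    vocab = {}
    for t in texts:
        for w in tokenize(t):
            vocab.setdefault(w, len(vocab))
    return vocab

def embed(text, vocab):
    """Toy embedding: word-count vector over the shared vocabulary."""
    vec = [0.0] * len(vocab)
    for w in tokenize(text):
        if w in vocab:
            vec[vocab[w]] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, top_k=1):
    """Rank chunks by cosine similarity to the query, as a vector store would."""
    vocab = build_vocab(chunks + [query])
    q = embed(query, vocab)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)), reverse=True)
    return ranked[:top_k]

doc = ("FAISS is a library for vector search. Pinecone is a managed vector "
       "database. Chunking splits documents before indexing.")
chunks = chunk_text(doc)
best = retrieve("managed vector database", chunks)
print(best[0])
```

In production the toy pieces are swapped out: an embedding model produces the vectors, and FAISS, Pinecone, Weaviate, or Qdrant performs the similarity search at scale; the chunk-embed-rank flow stays the same.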

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Job Title: AI/ML Engineer
Experience Required: 3–4 Years
Location: Indore (Onsite)
Department: Artificial Intelligence / Machine Learning
Reports To: CTO / Head of AI

About the role: We are seeking a highly skilled and innovative AI/ML Engineer with 3–4 years of hands-on experience in building and deploying AI-powered solutions. The ideal candidate should have a strong foundation in industrial automation, computer vision, and LLM-based applications, and be proficient in modern AI tools such as LangChain, LangGraph, Vision Transformers, and AWS SageMaker MLOps. Experience in multi-agent chat systems and RAG architectures is a plus.

Key Responsibilities:
  • Design, train, and deploy ML models for industrial automation using OpenCV and deep learning.
  • Develop multi-agent chat applications with LLMs, React-based agents, and contextual memory.
  • Implement Vision Transformers (ViTs) for advanced computer vision tasks.
  • Build intelligent conversational systems using LangChain, LangGraph, RAG, and vector databases.
  • Fine-tune pre-trained LLMs for specific enterprise applications.
  • Collaborate with frontend teams to integrate React-based UIs with AI backends.
  • Deploy and manage AI solutions on AWS (SageMaker, Lambda, S3, EC2).
  • Maintain performance, scalability, and reliability in production-grade AI systems.

Required Skills:
  • 3–4 years of AI/ML engineering experience with a focus on real-world applications.
  • Strong command of Python and libraries like PyTorch, TensorFlow, and Scikit-learn.
  • In-depth knowledge of LLMs (e.g., GPT, Claude, LLaMA), prompt engineering, and fine-tuning.
  • Proficiency in LangChain, LangGraph, and RAG-based architectures.
  • Experience with Vision Transformers, YOLO, Detectron2, and related CV tools.
  • Ability to build and connect intelligent UIs using React + backend AI systems.
  • Hands-on experience with AWS services (SageMaker, Lambda, EC2, S3).
  • Familiarity with CI/CD workflows for ML models and production deployments.
Preferred Qualifications:
  • Exposure to edge AI, NVIDIA Jetson, or industrial IoT integrations.
  • Experience building AI-powered chatbots with memory and tool integrations.
  • Working knowledge of Docker, MLflow, or DVC for model versioning and containerization.
  • Contributions to open-source AI/ML projects or research publications.

Send your applications to vishal.bhat@moreyeahs.com or contact +91-9644334475

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About The Team Come be a part of something big. If you want to be a part of building something big that will drive value throughout the entire global organization, then this is the opportunity for you. You will be working on top priority initiatives that span new and existing technologies - all to deliver outstanding results and experiences for our customers and employees. The Enterprise Data Services organization in Business Technology takes pride in enabling data driven business outcomes to spearhead Workday’s growth through trusted data excellence, innovation and architecture thought leadership. 
Our organization is responsible for developing and supporting Data Warehousing, Data Ingestion and Integration Services, Master Data Management (MDM), Data Quality Assurance, and the deployment of cutting-edge Advanced Analytics and Machine Learning solutions tailored to enhance multiple business sectors such as Sales, Marketing, Services, Support, and Customer Engagement. Our team harnesses the power of top-tier modern cloud platforms and services, including AWS, Databricks, Snowflake, Reltio, Tableau, Snaplogic, and MongoDB, complemented by a suite of AWS-native technologies like Spark, Airflow, Redshift, SageMaker, and Kafka. These tools are pivotal in our drive to create robust data ecosystems that empower our business operations with precision and scalability. EDS is a global team distributed across the U.S., India, and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to harnessing data-driven insights to transform industries and drive innovation. We are actively seeking a skilled Data Platform and Support Engineer who will play a pivotal role in ensuring the smooth functioning of our data infrastructure, enabling self-service analytics, and empowering analytical teams across the organization. As a Data Platform and Support Engineer, you will oversee the management of our enterprise data hub, working alongside a team of dedicated data and software engineers to build and maintain a robust data ecosystem that drives decision-making at scale for internal analytical applications. You will play a key role in ensuring the availability, reliability, and performance of our data infrastructure and systems. You will be responsible for monitoring, maintaining, and optimizing data systems, providing technical support, and implementing proactive measures to enhance data quality and integrity.
This role requires advanced technical expertise, problem-solving skills, and a strong commitment to delivering high-quality support services. The team is responsible for supporting Data Services, Data Warehouse, Analytics, Data Quality, and Advanced Analytics/ML for multiple business functions including Sales, Marketing, Services, Support, and Customer Experience. We leverage leading modern cloud platforms like AWS, Reltio, Snowflake, Tableau, Snaplogic, and MongoDB, in addition to native AWS technologies like Spark, Airflow, Redshift, SageMaker, and Kafka.

Job Responsibilities:
  • Monitor the health and performance of data systems, including databases, data warehouses, and data lakes.
  • Conduct root cause analysis and implement corrective actions to prevent recurrence of issues.
  • Manage and optimize data infrastructure components such as servers, storage systems, and cloud services.
  • Develop and implement data quality checks, validation rules, and data cleansing procedures.
  • Implement security controls and compliance measures to protect sensitive data and ensure regulatory compliance.
  • Design and implement data backup and recovery strategies to safeguard data against loss or corruption.
  • Optimize the performance of data systems and processes by tuning queries, optimizing storage, and improving ETL pipeline efficiency.
  • Maintain comprehensive documentation, runbooks, and fix guides for data systems and processes.
  • Collaborate with multi-functional teams, including data engineers, data scientists, business analysts, and IT operations.
  • Lead or participate in data-related projects, such as system migrations, upgrades, or expansions.
  • Deliver training and mentorship to junior team members, sharing knowledge and standard methodologies to support their professional development.
  • Participate in rotational shifts, including on-call rotations and coverage during weekends and holidays as required, to provide 24/7 support for data systems, responding to and resolving data-related incidents in a timely manner.
  • Hands-on experience with source version control, continuous integration, and release/organizational change delivery tools.

About You
Basic Qualifications:
  • 6+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business.
  • BE/Master's in Computer Science or equivalent is required.

Other Qualifications:
  • Prior experience with CRM systems (e.g., Salesforce) is desirable.
  • Experience building analytical solutions for Sales and Marketing teams.
  • Experience working with Snowflake, Fivetran, DBT, and Airflow.
  • Experience with very large-scale data warehouse and data engineering projects.
  • Experience developing low-latency data processing solutions such as AWS Kinesis, Kafka, and Spark stream processing.
  • Proficiency in writing advanced SQL, with expertise in SQL performance tuning.
  • Experience working with AWS data technologies like S3, EMR, Lambda, DynamoDB, Redshift, etc.
  • Solid experience in one or more programming languages for processing large data sets, such as Python or Scala.
  • Ability to create data models and STAR schemas for data consumption.
  • Extensive experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.

Our Approach to Flexible Work
With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role).
This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Guindy, Tamil Nadu, India

On-site


Company Description
Bytezera is a data services provider specialising in AI and data solutions that help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation.

About the Role
We are seeking a highly skilled and hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in computer vision, speech-to-text (STT), Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle, from data preparation and model training to deployment and optimization, with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases.

Key Responsibilities
Computer Vision Development
  • Design and implement advanced computer vision models for object detection, image segmentation, tracking, facial recognition, OCR, and video analysis.
  • Fine-tune and deploy vision models using frameworks like PyTorch, TensorFlow, OpenCV, Detectron2, YOLO, and MMDetection.
  • Optimize inference pipelines for real-time vision processing across edge devices, GPUs, or cloud-based systems.

Speech-to-Text (STT) System Development
  • Build and fine-tune ASR (Automatic Speech Recognition) models using toolkits such as Whisper, NVIDIA NeMo, DeepSpeech, Kaldi, or wav2vec 2.0.
  • Develop multilingual and domain-specific STT pipelines optimized for real-time transcription and high accuracy.
  • Integrate STT into downstream NLP pipelines or agentic systems for transcription, summarization, or intent recognition.

LLM and Agentic AI Design & Development
  • Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents.
  • Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications.
  • Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone.
  • Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks.
  • Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models.
  • Prepare and manage high-quality datasets for model training and evaluation.

Experience & Qualifications
  • 2+ years of hands-on experience in AI engineering, machine learning, or data science roles.
  • Proven track record in building and deploying computer vision and STT AI applications.
  • Experience with agentic workflows or autonomous AI agents is highly desirable.

Technical Skills
  • Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy
  • LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models
  • Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents
  • Vector Databases: Pinecone or ChromaDB
  • DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3)
  • Core ML Skills: Data preprocessing, feature engineering, model evaluation, and optimization

Qualifications
  • Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
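Agentic frameworks such as LangGraph or CrewAI are, at their core, loops in which a model chooses a tool, calls it, and uses the result. A toy illustration of that tool-selection loop in plain Python follows; the rule-based `fake_llm` stands in for a real model's decision, and all names and the tiny knowledge base are hypothetical, not from any framework named above.

```python
def calculator(expression):
    """Toy tool: evaluate a simple 'a op b' arithmetic expression."""
    a, op, b = expression.split()
    a, b = float(a), float(b)
    return a + b if op == "+" else a * b

def lookup(term):
    """Toy tool: a tiny in-memory knowledge base standing in for retrieval."""
    kb = {"sagemaker": "AWS's managed ML platform"}
    return kb.get(term.lower(), "unknown")

TOOLS = {"calculator": calculator, "lookup": lookup}

def fake_llm(query):
    """Stand-in for a real LLM's tool choice: returns (tool_name, tool_input)."""
    if any(ch.isdigit() for ch in query):
        return "calculator", query
    return "lookup", query

def run_agent(query):
    """One step of the agent loop: choose a tool, call it, return the result."""
    tool_name, tool_input = fake_llm(query)
    return TOOLS[tool_name](tool_input)

print(run_agent("2 + 3"))      # arithmetic is routed to the calculator tool
print(run_agent("SageMaker"))  # other queries are routed to the lookup tool
```

Real frameworks replace `fake_llm` with a model call that emits a structured tool invocation, add multi-step loops and memory, and validate tool arguments, but the choose-call-observe cycle is the same.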

Posted 2 weeks ago

Apply

Exploring Sagemaker Jobs in India

SageMaker is a rapidly growing field in India, with many companies looking to hire professionals with expertise in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India varies with experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here are sample interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from a traditional machine learning workflow?
  • What is a SageMaker notebook instance?

  • Medium:

  • How do you deploy a model in SageMaker?
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?

  • Advanced:

  • How would you handle model drift in a SageMaker deployment?
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
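For the hyperparameter-tuning question above: SageMaker automates this with `HyperparameterTuner` over parameter ranges such as `ContinuousParameter`, launching training jobs and keeping the best-scoring configuration. The underlying idea can be sketched locally. This is a hedged illustration only, with a made-up quadratic "validation loss" standing in for a real training job; it is not SageMaker's actual implementation.

```python
import random

def validation_loss(learning_rate):
    """Stand-in objective: pretend validation loss is minimized at lr = 0.1."""
    return (learning_rate - 0.1) ** 2

def random_search(objective, low, high, trials=200, seed=0):
    """What a tuning job does conceptually: sample, evaluate, keep the best."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = rng.uniform(low, high)   # sample a candidate from the range
        loss = objective(lr)          # in SageMaker, this is a training job
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = random_search(validation_loss, 0.001, 1.0)
print(f"best lr ~ {best_lr:.3f}, loss {best_loss:.5f}")
```

SageMaker's tuner additionally supports Bayesian optimization, which uses earlier trial results to pick more promising candidates instead of sampling blindly, and runs trials in parallel on managed infrastructure.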

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies