
661 SageMaker Jobs - Page 12

JobPe aggregates job listings for easier discovery; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Category: Technology | Experience: Sr. Manager | Primary Address: Bangalore, Karnataka
Overview: Voyager (94001), India, Bangalore, Karnataka
Senior Manager - Technical Program Management

At Capital One India, we work in a fast-paced and intellectually rigorous environment to solve fundamental business problems at scale. Using advanced analytics, data science and machine learning, we derive valuable insights about product and process design, consumer behavior, regulatory and credit risk, and more from large volumes of data, and use them to build cutting-edge patentable products that drive the business forward. We’re looking for a Senior Manager - Technical Program Management (TPM) to join the Machine Learning Experience (MLX) team! The MLX team is at the forefront of how Capital One builds and deploys responsible ML models and features. We onboard and educate associates on the ML platforms and products that the whole company uses. We drive new innovation and research, and we’re working to seamlessly infuse ML into the fabric of the company. The full ML experience we’re creating will enable our lines of business to focus their time and resources on advancing their specific machine learning objectives — all while continuing to deliver next-generation machine learning-driven products and services for our customers. As a Senior Manager, Technical Program Management (TPM) in the MLX team, you will execute on high-priority, enterprise-level initiatives and influence across our organization. Specifically, you will partner closely with product, engineering, data science, and other cross-functional teams to create roadmaps, scope programs and align them with business priorities, define milestones and success metrics, and build scalable, secure, reliable, efficient ML products and platforms. This role will be responsible for big-picture thinking, presenting to executive stakeholders, and holding engineering teams accountable for overarching delivery goals.

Our Senior Manager TPMs have: Strong technical backgrounds (ideally building highly scalable platforms, products, or services) with the ability to proactively identify and mitigate technical risks throughout the delivery life cycle Exceptional communication and collaboration skills Excellent problem-solving and influencing skills A quantitative approach to problem solving and a collaborative approach to implementing holistic solutions; a systems thinker Ability to simplify the technically complex and drive well-educated decisions across product, engineering, design, and data science representatives Deep focus on execution, follow-through, accountability, and results Exceptional cross-team collaboration; able to work across different functions, organizations, and reporting boundaries to get the job done Highly tuned emotional intelligence, good listening skills, and deep-seated empathy for teams and partners Ability to lead a program team focused on building enterprise machine learning capabilities Previous experience with machine learning (building models, deploying models, setting up cloud infrastructure and/or data pipelines) and familiarity with major ML frameworks such as XGBoost, PyTorch, AWS SageMaker, etc.
Ability to manage program communications with key stakeholders at all levels across the company to enable transparency and timely information sharing Ability to serve as the connective tissue across functions and business units, bringing teams together to foster collaboration, improve decision-making, and deliver value for customers, end to end

Basic Qualifications: Bachelor's degree At least 5 years of experience managing technical programs At least 5 years of experience designing and building data-intensive solutions using distributed computing At least 3 years of experience building highly scalable products & platforms

Preferred Qualifications: 3+ years of experience in building distributed systems & highly available services using cloud computing services / architecture - preferably using AWS 3+ years of experience with Agile delivery 3+ years of experience delivering large and complex programs - where you own the business or technical vision, collaborate with large cross-functional teams, secure commitments on deliverables and unblock teams to land business impact 2+ years of Machine Learning experience Experience in building systems and solutions within a highly regulated environment Bachelor's degree in a related technical field (Computer Science, Software Engineering) MBA or Master’s Degree in a related technical field (Computer Science, Software Engineering) or equivalent experience PMP, Lean, Agile, or Six Sigma certification

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
How We Hire We take finding great coworkers pretty seriously. Step 1 Apply It only takes a few minutes to complete our application and assessment. Step 2 Screen and Schedule If your application is a good match you’ll hear from one of our recruiters to set up a screening interview. Step 3 Interview(s) Now’s your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you. Step 4 Decision The team will discuss — if it’s a good fit for us and you, we’ll make it official! How to Pick the Perfect Career Opportunity Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence. Your wellbeing is our priority Our benefits and total compensation package is designed for the whole person. Caring for both you and your family. Healthy Body, Healthy Mind You have options and we have the tools to help you decide which health plans best fit your needs. Save Money, Make Money Secure your present, plan for your future and reduce expenses along the way. Time, Family and Advice Options for your time, opportunities for your family, and advice along the way. It’s time to BeWell. Career Journey Here’s how the team fits together. We’re big on growth and knowing who and how coworkers can best support you.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Linkedin logo

Details As an AI-First AI/ML Engineer, you'll be architecting and deploying intelligent systems that leverage cutting-edge AI technologies including LangChain orchestration, autonomous AI agents, and robust AWS cloud infrastructure. We are seeking expertise in modern AI/ML frameworks, agentic systems, and scalable backend development using Node.js and Python. Your AI-powered engineering approach will create sophisticated machine learning solutions that drive autonomous decision-making and solve complex business challenges at enterprise scale. About You You are an AI/ML specialist who has fully embraced AI-first development methodologies, using advanced AI tools (e.g., Copilot, ChatGPT, Claude, CodeLlama) to accelerate your machine learning workflows. You're equally comfortable building LangChain orchestration pipelines, deploying Hugging Face models, developing autonomous AI agents, and architecting scalable AWS backend systems using Node.js and Python. You move FAST - capable of shipping complete, production-ready features within 1 week cycles. You are a proactive person and a go-getter, willing to go the extra mile. You understand that modern AI engineering means creating intelligent systems that can reason, learn, and act autonomously while maintaining reliability and performance. You thrive using TDD methods, MLOps practices, and Agile methodologies while focusing on finding elegant solutions to complex AI challenges. This is a hybrid role where you'll be spending your time across 4 core functions: Internal Projects (25%) - Building and maintaining OneSeven's internal AI tools and platforms Sales Engineering (25%) - Supporting sales team with technical demos, proof-of-concepts, and client presentations AI-First Engineering and Innovation Sprints (25%) - Rapid prototyping and innovation on cutting-edge AI technologies Forward Deployed Engineering (25%) - Working directly with clients on-site or embedded in their teams to deliver solutions Qualifications Technical Requirements Core AI/ML Skills 4+ years AI/ML development experience with production deployment Fluent English required - strong written and verbal communication skills for direct client interaction Reliable workspace/internet - willing to work extra hours FAST execution mindset - must be able to ship complete features within 1 week Strong system architecture experience - designing scalable, distributed AI/ML systems Expert-level LangChain experience for AI orchestration and workflow management Hugging Face experience - transformers, model integration, and deployment Extensive AI Agent development with LangChain or Google Vertex AI Heavy AWS cloud experience, particularly with Bedrock, SageMaker, and AI/ML services Backend generalist comfortable with Node.js and Python for AI service development Agile methodologies experience, startup environment passion Independent problem-solver, team player willing to work extra hours AI Agent & LangChain Expertise (Required) LangChain framework mastery for complex AI workflow orchestration Hugging Face integration - transformers, model deployment, and API integration AI Agent architecture design with LangChain or Google Vertex AI Prompt engineering and chain-of-thought optimization Vector databases and embedding systems (Pinecone, Pgvector, Chroma) RAG pipeline development and optimization LLM integration across multiple providers (OpenAI, Anthropic, AWS Bedrock, Hugging Face) Agentic system design with memory, planning, and execution capabilities Backend & Cloud Infrastructure Heavy AWS 
Cloud services experience (Lambda, API Gateway, S3, RDS, SageMaker, Bedrock) System architecture design for high-scale, distributed AI/ML applications Microservices architecture and design patterns for AI systems at scale Node.js and Python backend development for AI service APIs RESTful API design and GraphQL for AI service integration Database design and management for AI data workflows Modern JavaScript/TypeScript and Python async programming MLOps & Integration CI/CD pipelines and GitHub Actions for ML model deployment Model versioning, monitoring, and automated retraining workflows Container orchestration (Docker, Kubernetes) for AI services Performance optimization for high-throughput AI systems Modern authentication and secure API design for AI endpoints API security implementation (XSS, CSRF protection) Bonus Qualifications Advanced Hugging Face experience (fine-tuning, custom models, optimization) Multi-modal AI experience (vision, audio, text processing) Advanced prompt engineering and fine-tuning experience DevOps and infrastructure as code (Terraform, CloudFormation) Database optimization for vector search and AI workloads Additional cloud platforms (Azure AI, Google Vertex AI) Knowledge graph integration and semantic reasoning Project Deliverables You'll be working on building a comprehensive AI-powered business intelligence system with autonomous agent capabilities. Key deliverables include: Core AI Agent Platform Multi-agent orchestration system with LangChain workflow management Autonomous reasoning agents with tool integration and decision-making capabilities Intelligent document processing pipeline with advanced OCR and classification Real-time AI analysis dashboard with predictive insights and recommendations Advanced AI Workflows RAG-powered knowledge synthesis with multi-source data integration Automated business process agents with approval workflows and notifications AI-driven anomaly detection with proactive alerting and response systems Intelligent API orchestration with dynamic routing and load balancing Comprehensive agent performance monitoring with usage analytics and optimization insights Integration & Deployment Systems Scalable AWS backend infrastructure with auto-scaling AI services Production MLOps pipeline with automated model deployment and monitoring Multi-tenant AI service architecture with usage tracking and billing integration Real-time AI API gateway with rate limiting and authentication Benefits/Compensation Fully Remote, Contract-based with U.S. company $4,000/mo - $8,000/mo depending on experience and project duration Company-paid PTO plan, international team of 15+ To Apply SEND YOUR RESUME IN ENGLISH, please. Include the URL of your LinkedIn profile. Include Website references, GitHub repositories, and any other online references that would highlight your prior work for the qualifications described in this role. ⚠️ AUTOMATIC DISQUALIFICATION: You will be automatically disqualified if your resume is not in English or you don't include your LinkedIn profile URL. About OneSeven Tech OneSeven Tech is a premier digital product studio serving both high-growth startups and established enterprises. We've partnered with startup clients who have collectively raised over $100M in Venture Capital, while our enterprise portfolio includes 2000+ person hospitality groups and publicly traded NASDAQ companies. Our passion lies in crafting exceptional AI-powered digital products that drive real business success. 
Joining OneSeven means working alongside a skillful team of consultants where you'll sharpen your AI/ML expertise, expand your capabilities, and contribute to cutting-edge solutions for industry-leading clients. OST's headquarters is in Miami, Florida, but our employees work remotely worldwide. Our 3 main locations are Miami, Mexico City, Mexico, and Buenos Aires, Argentina.
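
For context on the AWS Bedrock and LLM-integration work this listing describes, here is a minimal illustrative sketch (not part of the posting) of calling a hosted model through the Bedrock runtime with boto3. The region, model ID, and the Anthropic-style request body are assumptions; substitute whatever model your account has enabled.

```python
# Minimal sketch: invoking an LLM hosted on Amazon Bedrock via boto3.
# Assumptions (not from the posting): region, model ID, and the Anthropic
# "messages" request schema used by Claude models on Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_llm(prompt: str, max_tokens: int = 256) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    # Claude-style responses return a list of content blocks.
    return payload["content"][0]["text"]

if __name__ == "__main__":
    print(ask_llm("Summarize why RAG can reduce hallucinations, in two sentences."))
```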

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Description The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities As an experienced technology professional, you will be responsible for: Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs Providing technical guidance and troubleshooting support throughout project delivery Collaborating with stakeholders to gather requirements and propose effective migration strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About The Team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets Experience in mentoring junior team members, and guiding them on machine learning and data modeling applications Preferred Qualifications AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS ProServe IN - Karnataka Job ID: A2941027 Show more Show less
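
As a rough illustration of the SageMaker work referenced in the preferred qualifications, the sketch below trains and deploys a scikit-learn model with the SageMaker Python SDK. The role ARN, S3 paths, instance types, framework version, and the existence of a local train.py script are all placeholder assumptions, not details from this posting.

```python
# Minimal sketch: train and deploy a scikit-learn model with the SageMaker
# Python SDK. Every identifier below (role ARN, S3 path, train.py) is a
# placeholder assumption.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = SKLearn(
    entry_point="train.py",        # your training script (assumed to exist)
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",     # check the versions available in your region
    sagemaker_session=session,
)

# Launch a training job against data staged in S3, then stand up a real-time endpoint.
estimator.fit({"train": "s3://my-bucket/churn/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

print(predictor.endpoint_name)
# Delete the endpoint when finished to avoid ongoing cost:
# predictor.delete_endpoint()
```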

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience
Position: Senior Associate
Industry: Telecom / Network Analytics / Customer Analytics

Required Skills: Successful candidates will have demonstrated the following skills and characteristics:

Must Have: Proven experience with telco data including call detail records (CDRs), customer churn models, and network analytics Deep understanding of predictive modeling for customer lifetime value and usage behavior Experience working with telco clients or telco data platforms (such as Amdocs, Ericsson, Nokia, AT&T) Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models) Strong programming in Python or R, and SQL with telco-focused data wrangling Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark) Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics Strong communication skills to interface with technical and business teams

Nice To Have: Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI) Experience working with telecom OSS/BSS systems or customer segmentation tools Familiarity with network performance analytics, anomaly detection, or real-time data processing Strong client communication and presentation skills

Roles And Responsibilities: Assist analytics projects within the telecom domain, driving design, development, and delivery of data science solutions Develop and execute on project and analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data and use case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders, including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge- and firm-building activities

Professional And Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
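
To make the churn-modeling requirement above concrete, here is a small, self-contained sketch of a baseline churn classifier. The data and feature names (monthly minutes, complaints, tenure) are synthetic and invented purely for illustration; they are not from the posting.

```python
# Illustrative sketch only: a toy telco churn classifier trained on synthetic
# data. Feature names and the churn signal are invented placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5_000
df = pd.DataFrame({
    "monthly_minutes": rng.gamma(2.0, 150.0, n),
    "data_gb": rng.gamma(2.0, 3.0, n),
    "complaints_90d": rng.poisson(0.4, n),
    "tenure_months": rng.integers(1, 72, n),
})
# Synthetic churn signal: short tenure and recent complaints raise churn probability.
logit = -1.5 + 0.8 * df["complaints_90d"] - 0.03 * df["tenure_months"]
df["churned"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=0
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Test ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```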

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities As an experienced technology professional, you will be responsible for: Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs Providing technical guidance and troubleshooting support throughout project delivery Collaborating with stakeholders to gather requirements and propose effective migration strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About The Team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. 
That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets Experience in mentoring junior team members, and guiding them on machine learning and data modeling applications Preferred Qualifications AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS ProServe IN - Karnataka Job ID: A2941027 Show more Show less
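
Complementing the deployment sketch shown under the earlier ProServe listing, the example below illustrates the consuming side: calling an already-deployed SageMaker real-time endpoint from client code such as a Lambda handler. The endpoint name and CSV payload format are assumptions for illustration only.

```python
# Minimal sketch: invoking a deployed SageMaker endpoint via the
# sagemaker-runtime API, e.g. from inside an AWS Lambda handler.
# The endpoint name, region, and CSV payload format are placeholder assumptions.
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def predict(features: list[float]) -> str:
    payload = ",".join(str(x) for x in features)
    response = runtime.invoke_endpoint(
        EndpointName="churn-model-endpoint",   # placeholder endpoint name
        ContentType="text/csv",
        Body=payload,
    )
    return response["Body"].read().decode("utf-8")

def lambda_handler(event, context):
    # Assumes the caller sends {"features": [...]} in the event payload.
    return {"prediction": predict(event["features"])}
```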

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

India

On-site

Linkedin logo

The proliferation of machine log data has the potential to give organizations unprecedented real-time visibility into their infrastructure and operations. With this opportunity comes tremendous technical challenges around ingesting, managing, and understanding high-volume streams of heterogeneous data. As a Machine Learning Engineer at Sumo Logic, you will actively contribute to the design and development of innovative ML-powered product capabilities to help our customers make sense of their huge amounts of log data. This involves working through the entire feature lifecycle, including ideation, dataset construction, experimental validation, prototyping, production implementation, deployment, and operations.

Responsibilities: Identifying and validating opportunities for the application of ML or data-driven techniques Assessing requirements and approaches for large-scale data and ML platform components Driving technical delivery through the full feature lifecycle, from idea to production and operations Helping the team design and implement extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data Collaborating within and beyond the team to identify problems and deliver solutions Working as a member of a team, helping the team respond quickly and effectively to business needs

Requirements: B.Tech, M.Tech, or Ph.D. in Computer Science or a related discipline 4-6 years of industry experience with a proven track record of ownership and delivery Experience formulating use cases as ML problems and putting ML models into production Solid grounding in core ML concepts and basic statistics Experience with software engineering of production-grade services in cloud environments handling data at large scale

Desirable: Cloud-based application and infrastructure deployment and management Common ML libraries (e.g., scikit-learn, PyTorch) and components (e.g., Airflow, MLflow) Relevant cloud provider services (e.g., AWS SageMaker) LLM core concepts, libraries, and application design patterns Experience in multi-threaded programming and distributed systems Agile software development experience (test-driven development, iterative and incremental development) is a plus

About Us: Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its SaaS analytics platform. The Sumo Logic Continuous Intelligence Platform™ helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
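
One common ML capability applied to high-volume log data of the kind described above is unsupervised anomaly detection. The sketch below is an invented, self-contained example using synthetic per-minute traffic features; the feature set and thresholds are assumptions, not Sumo Logic's implementation.

```python
# Illustrative sketch: unsupervised anomaly detection over per-minute log
# features using an Isolation Forest. All data and columns are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Columns: requests per minute, error rate, p95 latency (ms) for "normal" traffic.
normal = np.column_stack([
    rng.normal(1_000, 50, 2_000),
    rng.normal(0.01, 0.002, 2_000),
    rng.normal(120, 10, 2_000),
])
# A handful of incident-like minutes: traffic spike, elevated errors, slow responses.
incidents = np.column_stack([
    rng.normal(3_000, 100, 5),
    rng.normal(0.15, 0.02, 5),
    rng.normal(900, 50, 5),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flagged = (detector.predict(incidents) == -1).sum()
print(f"Incident minutes flagged as anomalous: {flagged} of {len(incidents)}")
```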

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Role – AIML Data Scientist
Location: Coimbatore
Mode of Interview: In Person

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact due to deployment of the model
2. Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge
3. Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development – Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience in using Deep Learning models with text, speech, image, and video data
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams in deploying models on cloud as a service or on-prem
   a. Deployment of models in a Test / Control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI & ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and update with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology / Subject Matter Expertise:
Sufficient expertise in machine learning, mathematical and statistical sciences
Use of versioning and collaboration tools like Git / GitHub
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Develop prototype-level ideas into a solution that can scale to industrial-grade strength
Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
Curiosity to think in fresh and unique ways with the intent of breaking new ground
Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs for solving the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development and application of Reinforcement Learning
Knowledge of Optimization/Genetic Algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Optimize and tune deep learning models for the best possible accuracy
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy will be important
Experience of working with AI and Cognitive Services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud will be a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. will be a big plus
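
To ground the text-classification item in the description above, here is a deliberately tiny baseline sketch (TF-IDF plus logistic regression) of the kind often tried before deep learning. The in-line dataset and labels are invented for illustration and are not from the posting.

```python
# Toy sketch: a text-classification baseline before reaching for deep learning.
# The tiny in-line dataset and label set are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "refund not processed after cancellation",
    "app crashes when uploading documents",
    "great support, issue resolved quickly",
    "billing charged twice this month",
    "love the new dashboard design",
    "login page throws an error on mobile",
]
labels = ["billing", "bug", "praise", "billing", "praise", "bug"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["charged twice and no refund yet"]))   # expected: billing
print(clf.predict(["screen freezes after the update"]))   # expected: bug
```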

Posted 1 week ago

Apply

2.0 years

0 Lacs

Haryana

On-site

Provectus helps companies adopt ML/AI to transform the ways they operate, compete, and drive value. The focus of the company is on building ML Infrastructure to drive end-to-end AI transformations, assisting businesses in adopting the right AI use cases, and scaling their AI initiatives organization-wide in such industries as Healthcare & Life Sciences, Retail & CPG, Media & Entertainment, Manufacturing, and Internet businesses. We are seeking a highly skilled Machine Learning (ML) Tech Lead with a strong background in Large Language Models (LLMs) and AWS Cloud services. The ideal candidate will oversee the development and deployment of cutting-edge AI solutions while managing a team of 5-10 engineers. This leadership role demands hands-on technical expertise, strategic planning, and team management capabilities to deliver innovative products at scale. Responsibilities: Leadership & Management Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment; Drive the roadmap for machine learning projects aligned with business goals; Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery. Machine Learning & LLM Expertise Design, develop, and fine-tune LLMs and other machine learning models to solve business problems; Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction; Stay ahead of advancements in LLMs and apply emerging technologies; Expertise in multiple main fields of ML: NLP, Computer Vision, RL, deep learning and classical ML. AWS Cloud Expertise Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.); Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS; Ensure best practices in security, monitoring, and compliance within the cloud infrastructure. Technical Execution Oversee the entire ML lifecycle, from research and experimentation to production and maintenance; Implement MLOps and LLMOps practices to streamline model deployment and CI/CD workflows; Debug, troubleshoot, and optimize production ML models for performance. Team Development & Communication Conduct regular code reviews and ensure engineering standards are upheld; Facilitate professional growth and learning for the team through continuous feedback and guidance; Communicate progress, challenges, and solutions to stakeholders and senior leadership. Qualifications: Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models); Strong expertise in AWS Cloud Services; Strong experience in ML/AI, including at least 2 years in a leadership role; Hands-on experience with Python, TensorFlow/PyTorch, and model optimization; Familiarity with MLOps tools and best practices; Excellent problem-solving and decision-making abilities; Strong communication skills and the ability to lead cross-functional teams; Passion for mentoring and developing engineers.
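
As a small illustration of the Hugging Face workflow this role references, the sketch below loads a pretrained summarization model and runs inference. The checkpoint name is an assumption for the example; a real project would standardize on its own model choice.

```python
# Minimal sketch: inference with a pretrained Hugging Face summarization model.
# The checkpoint name below is an assumed example, not a prescribed choice.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="sshleifer/distilbart-cnn-12-6",  # small public checkpoint (assumption)
)

text = (
    "Provectus helps companies adopt ML/AI to transform the ways they operate, "
    "compete, and drive value, with a focus on building ML infrastructure for "
    "end-to-end AI transformations across healthcare, retail, media, and manufacturing."
)
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```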

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai

On-site

Date: 4 Jun 2025 Company: Qualitest Group Country/Region: IN Key Responsibilities Design, develop, and deploy ML models and AI solutions across various domains such as NLP, computer vision, recommendation systems, time-series forecasting, etc. Perform data preprocessing, feature engineering, and model training using frameworks like TensorFlow, PyTorch, Scikit-learn, or similar. Collaborate with cross-functional teams to understand business problems and translate them into AI/ML solutions. Optimize models for performance, scalability, and reliability in production environments. Integrate ML pipelines with production systems using tools like MLflow, Airflow, Docker, or Kubernetes. Conduct rigorous model evaluation using metrics and validation techniques. Stay up-to-date with state-of-the-art AI/ML research and apply findings to enhance existing systems. Mentor junior engineers and contribute to best practices in ML engineering. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 4–8 years of hands-on experience in machine learning, deep learning, or applied AI. Proficiency in Python and ML libraries/frameworks (e.g., Scikit-learn, TensorFlow, PyTorch, XGBoost). Experience with data wrangling tools (Pandas, NumPy) and SQL/NoSQL databases. Familiarity with cloud platforms (AWS, GCP, or Azure) and ML tools (SageMaker, Vertex AI, etc.). Solid understanding of model deployment, monitoring, and CI/CD pipelines. Strong problem-solving skills and the ability to communicate technical concepts clearly.
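
The "rigorous model evaluation using metrics and validation techniques" responsibility above can be illustrated with a short, self-contained sketch on synthetic data; the dataset, model choice, and class imbalance are assumptions made only for the example.

```python
# Brief sketch of a model-evaluation step: confusion matrix, per-class report,
# and ROC AUC on a held-out set. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3_000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

print(confusion_matrix(y_te, model.predict(X_te)))
print(classification_report(y_te, model.predict(X_te), digits=3))
print("ROC AUC:", round(roc_auc_score(y_te, proba), 3))
```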

Posted 1 week ago

Apply

0 years

12 - 18 Lacs

Coimbatore

On-site

OMS Software Solutions India Pvt Ltd (www.objectivemedicalsystems.com)
Role: Data Scientist

Objective Medical Systems (OMS) is hiring a Data Scientist to join their existing team. You will collaborate with in-house domain experts to produce innovative solutions from complex and high-dimensional datasets driven by exploratory data analysis. Apply knowledge of statistics, data modeling, data sciences, and artificial intelligence to recognize patterns, identify opportunities and make valuable discoveries. Use a flexible, analytical approach to design, develop, and evaluate predictive models. Generate and test hypotheses and be involved from ideation through delivery.

Responsibilities
• Assist in the full development cycle from product inception, research, and prototyping to release in production.
• Write production-quality code.
• Develop novel ways of integrating, mining, and visualizing diverse, high-dimensional data sets.
• Optimize underlying software infrastructure to manage, integrate, and mine the data that OMS proprietary clinical applications generate daily.
• Present storyboards through rich and intuitive visualizations.
• Develop and deliver presentations to communicate technical ideas and analytical findings to non-technical partners and senior leadership.

Requirements
• Experience in data analysis, statistical learning, and artificial intelligence.
• Experience with multiple deep learning techniques such as CNN, LSTM, RNN, etc.
• Experience with statistical methodologies and machine learning techniques such as neural networks, graphical models, ensemble methods, and natural language processing.
• Proficiency with Python.
• Strong data visualization skills.
• Working knowledge of one or more machine learning libraries or frameworks such as PyTorch, TensorFlow, scikit-learn.
• Experience using cloud technologies such as AWS with tools such as S3, Athena, API Gateway, SageMaker, etc.
• Technical proficiency and demonstrated success in scientific creativity, collaboration with others, and independent thought.
• Ability to collaborate with both the India and United States-based teams and translate existing research into practical solutions and products.
• Drive project execution and implement a robust plan for measuring success.
• Lead discussions with senior clinical and operational leaders.
• Mastery of evaluation of supervised and unsupervised techniques. Ability to evaluate data quality and determine gaps in data or assumptions.

Job Type: Full-time
Pay: ₹1,250,000.00 - ₹1,800,000.00 per year
Schedule: Monday to Friday, US shift
Work Location: In person
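
To illustrate one of the deep-learning building blocks mentioned above (an LSTM sequence model in PyTorch), here is a hypothetical minimal skeleton; the dimensions and the "patients x time steps x vitals" framing are invented for the example.

```python
# Hypothetical sketch: an LSTM-based sequence classifier in PyTorch.
# All dimensions and the clinical framing are illustrative assumptions.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); use the final hidden state for classification.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

model = SequenceClassifier(n_features=12)
dummy_batch = torch.randn(8, 50, 12)   # 8 sequences x 50 time steps x 12 features (made up)
logits = model(dummy_batch)
print(logits.shape)                    # torch.Size([8, 2])
```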

Posted 1 week ago

Apply

5.0 years

4 - 7 Lacs

Indore

On-site

Indore, India

Job Description: Full-time, On-site Role
5+ years of experience as an AI/ML Engineer
At least 2+ years in a leadership or team-building role
Experience in building powerful, real-world applications across various domains
Strong communication skills and the ability to explain complex AI concepts to business stakeholders and clients
Experience in leading small to mid-sized teams
A product-first mindset and the ability to drive initiatives from idea to production
Passion for staying at the cutting edge of AI — especially in the era of LLMs and generative AI
Work in a collaborative, high-ownership, fast-moving tech environment
You’ll be setting up the AI engine for a company already trusted for its tech excellence. If you’re seeking technical autonomy, client-facing impact, and team-building ownership, this is your place.

Responsibilities:
Lead the design, development, and deployment of AI/ML and Deep Learning models.
Hire, train, and mentor a growing team of AI/ML engineers and data scientists.
Own the architecture and tech stack decisions for the AI division.
Engage in client discovery calls and pre-sales conversations to convert technical insights into business opportunities.
Lead PoC development, client interviews, and project planning in collaboration with internal and external stakeholders.
Stay updated with the latest in AI/ML (especially LLMs, vector search, generative AI, multi-agent chatbots) and translate advancements into real-world implementations.

Qualifications:
Bachelor’s or Master's degree in CS/IT, AI/ML, or Data Science
Python, NumPy, Pandas, Scikit-learn, PySpark
Deep Learning: TensorFlow, PyTorch, Keras
CNNs, RNNs, Attention Mechanisms, Transformers
SVD (Singular Value Decomposition), K-Means, recency weighting
Text embeddings, semantic search, vector DBs (FAISS, Pinecone, Weaviate)
Model Context Protocol (MCP)
Model Lifecycle: data preprocessing, training, tuning, evaluation (ROC, F1, AUC, confusion matrix)
Botpress / Copilot Studio workflow and integration experience
Deployment: RESTful ML APIs, Docker, AWS/GCP (SageMaker, Vertex AI), model versioning
Workflow Tools: MLflow, Azure ML, Airflow, Git-based CI/CD
Creating REST APIs using the FastAPI framework or equivalent

Experience: 5+ years
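
The "RESTful ML APIs using FastAPI" qualification above can be pictured with a minimal sketch: a single prediction endpoint wrapping a scikit-learn model. The feature names and the stand-in model trained on random data are assumptions for illustration.

```python
# Sketch of a minimal FastAPI prediction endpoint wrapping a scikit-learn model.
# Feature names and the throwaway model are placeholders; a real service would
# load a versioned, validated artifact instead.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="demo-ml-api")

# Stand-in model fit on random data at import time (illustration only).
rng = np.random.default_rng(0)
_model = LogisticRegression().fit(rng.normal(size=(200, 3)), rng.integers(0, 2, 200))

class Features(BaseModel):
    f1: float
    f2: float
    f3: float

@app.post("/predict")
def predict(features: Features) -> dict:
    x = np.array([[features.f1, features.f2, features.f3]])
    proba = float(_model.predict_proba(x)[0, 1])
    return {"probability": proba, "label": int(proba >= 0.5)}

# Run with: uvicorn main:app --reload   (assuming this file is saved as main.py)
```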

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Summary: Gainwell is seeking LLM Ops Engineers and ML Ops Engineers to join our growing AI/ML team. This role is responsible for developing, deploying, and maintaining scalable infrastructure and pipelines for Machine Learning (ML) models and Large Language Models (LLMs). You will play a critical role in ensuring smooth model lifecycle management, performance monitoring, version control, and compliance while collaborating closely with Data Scientists and DevOps teams.

Your role in our mission

Core LLM Ops Responsibilities: Develop and manage scalable deployment strategies specifically tailored for LLMs (GPT, Llama, Claude, etc.). Optimize LLM inference performance, including model parallelization, quantization, pruning, and fine-tuning pipelines. Integrate prompt management, version control, and retrieval-augmented generation (RAG) pipelines. Manage vector databases, embedding stores, and document stores used in conjunction with LLMs. Monitor hallucination rates, token usage, and overall cost optimization for LLM APIs or on-prem deployments. Continuously monitor model performance and ensure an alert system is in place. Ensure compliance with ethical AI practices, privacy regulations, and responsible AI guidelines in LLM workflows.

Core ML Ops Responsibilities: Design, build, and maintain robust CI/CD pipelines for ML model training, validation, deployment, and monitoring. Implement version control, model registry, and reproducibility strategies for ML models. Automate data ingestion, feature engineering, and model retraining workflows. Monitor model performance and drift, and ensure proper alerting systems are in place. Implement security, compliance, and governance protocols for model deployment. Collaborate with Data Scientists to streamline model development and experimentation.

What we're looking for: Bachelor's/Master’s degree in Computer Science, Engineering, or related fields. Strong experience with ML Ops tools (Kubeflow, MLflow, TFX, SageMaker, etc.). Experience with LLM-specific tools and frameworks (LangChain, LangGraph, LlamaIndex, Hugging Face, OpenAI APIs, vector DBs like Pinecone, FAISS, Weaviate, Chroma DB, etc.). Solid experience in deploying models in cloud (AWS, Azure, GCP) and on-prem environments. Proficient in containerization (Docker, Kubernetes) and CI/CD practices. Familiarity with monitoring tools like Prometheus, Grafana, and ML observability platforms. Strong coding skills in Python, Bash, and familiarity with infrastructure-as-code tools (Terraform, Helm, etc.). Knowledge of healthcare AI applications and regulatory compliance (HIPAA, CMS) is a plus. Strong skills in Giskard, DeepEval, etc.

What you should expect in this role: Fully Remote Opportunity – Work from anywhere in India. Minimal Travel Required – Occasional travel opportunities (0-10%). Opportunity to Work on Cutting-Edge AI Solutions in a mission-driven healthcare technology environment.
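
A simple way to picture the drift-monitoring responsibility above is a two-sample test comparing a feature's live distribution against its training baseline, with an alert past a chosen threshold. The data, threshold, and alerting policy below are assumptions made for the example, not Gainwell's procedure.

```python
# Illustrative sketch: feature-drift check with a Kolmogorov-Smirnov test.
# Baseline/live samples, the threshold, and the alert action are all assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)   # training-time feature values
live = rng.normal(loc=0.4, scale=1.1, size=2_000)        # recent production values (shifted)

statistic, p_value = ks_2samp(baseline, live)
DRIFT_P_THRESHOLD = 0.01   # example policy, not a universal standard

if p_value < DRIFT_P_THRESHOLD:
    print(f"ALERT: possible drift (KS={statistic:.3f}, p={p_value:.2e}); trigger retraining review")
else:
    print(f"OK: no significant drift detected (KS={statistic:.3f}, p={p_value:.2e})")
```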

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About Xebia: Xebia is a trusted advisor in the modern era of digital transformation, serving hundreds of leading brands worldwide with end-to-end IT solutions. The company has experts specializing in technology consulting, software engineering, AI, digital products and platforms, data, cloud, intelligent automation, agile transformation, and industry digitization. In addition to providing high-quality digital consulting and state-of-the-art software development, Xebia has a host of standardized solutions that substantially reduce the time-to-market for businesses. Xebia also offers a diverse portfolio of training courses to help support forward-thinking organizations as they look to upskill and educate their workforce to capitalize on the latest digital capabilities. The company has a strong presence across 16 countries with development centres across the US, Latin America, Western Europe, Poland, the Nordics, the Middle East, and Asia Pacific.

Job Title: Generative AI Engineer
Exp: 5 - 9 yrs
Location: Bengaluru, Chennai, Gurgaon & Pune

Job Summary: We are seeking a highly skilled Generative AI Engineer with hands-on experience in developing and deploying cutting-edge AI solutions using AWS, Amazon Bedrock, and agentic AI frameworks. The ideal candidate will have a strong background in machine learning and prompt engineering, with a passion for building intelligent, scalable, and secure GenAI applications.

Key Responsibilities: Design, develop, and deploy Generative AI models and pipelines for real-world use cases. Build and optimize solutions using AWS AI/ML services, including Amazon Bedrock, SageMaker, and related cloud-native tools. Develop and orchestrate agentic AI systems, integrating autonomous agents with structured workflows and dynamic decision-making. Collaborate with cross-functional teams including data scientists, cloud engineers, and product managers to translate business needs into GenAI solutions. Implement prompt engineering, fine-tuning, and retrieval-augmented generation (RAG) techniques to optimize model performance. Ensure robustness, scalability, and compliance in GenAI workloads deployed in production environments.

Required Skills & Qualifications: Strong experience with Generative AI models (e.g., GPT, Claude, Mistral). Hands-on experience with Amazon Bedrock and other AWS AI/ML services. Proficiency in building and managing agentic AI systems using frameworks like LangChain, AutoGen, or similar. Solid understanding of cloud-native architectures and ML Ops on AWS. Proficiency in Python and relevant GenAI/ML libraries (Transformers, PyTorch, LangChain, etc.). Familiarity with security, cost, and governance best practices for GenAI on cloud.

Preferred Qualifications: AWS certifications (e.g., AWS Certified Machine Learning Specialty). Experience with LLMOps tools and vector databases (e.g., Pinecone, FAISS, Weaviate). Background in NLP, knowledge graphs, or conversational AI.

Why Join Us? Work on cutting-edge AI technologies that are transforming industries. Collaborative and innovative environment. Opportunities for continuous learning and growth.
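
The retrieval step of the RAG techniques mentioned above can be sketched with FAISS, one of the vector databases the posting lists. The example below uses toy random embeddings purely for illustration; in a real pipeline the vectors would come from an embedding model, and the retrieved passages would be stitched into the LLM prompt.

```python
# Hedged sketch of RAG retrieval with FAISS. Embedding dimension, documents,
# and the random vectors are illustrative assumptions only.
import faiss
import numpy as np

dim = 384                                   # typical sentence-embedding size (assumption)
rng = np.random.default_rng(0)

documents = ["billing FAQ", "refund policy", "onboarding guide", "API reference"]
doc_vectors = rng.normal(size=(len(documents), dim)).astype("float32")
faiss.normalize_L2(doc_vectors)             # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(dim)
index.add(doc_vectors)

query = rng.normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)        # top-2 nearest documents

for score, doc_id in zip(scores[0], ids[0]):
    print(f"{documents[doc_id]!r} (cosine similarity {score:.3f})")
```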

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Linkedin logo

About ViewTrade:
ViewTrade is the force that powers fintech and cross-border investing for financial services firms throughout the world. We provide the technology, support, and brokerage services that business innovators need to quickly launch or enhance a retail, fintech, wealth, or cross-border investing experience. Now in our third decade, our approach has helped 300+ firms – from technology startups to large banks, brokers, super apps, advisory, and wealth management – create the differentiating investment experiences their customers demand. With clients in over 29 countries and a team that brings decades of experience and understanding of financial services technology and services, we help our business clients deliver the investment access and financial solutions they require.
Our Values: Expertise, Integrity, Solution Driven, Teamwork, Long Term Success, Always Winning, Always Learning.
Job Summary:
Seeking an experienced Cloud Solutions Architect to design and oversee robust, scalable, secure, and cost-effective multi-cloud and hybrid (on-prem) infrastructure solutions. This role requires deep expertise in AI, particularly Generative AI workloads, and involves translating business needs into technical designs, providing technical leadership, and ensuring best practices across diverse environments.
Key Responsibilities:
Design and architect complex solutions across multi-cloud and hybrid (on-prem) environments (preferably AWS).
Translate business/technical requirements into clear architectural designs and documentation.
Develop cloud and hybrid adoption strategies and roadmaps.
Architect infrastructure and data pipelines for AI/ML and Generative AI workloads (compute, storage, MLOps).
Design on-premise to cloud/hybrid migration plans.
Evaluate and recommend cloud services and technologies across multiple providers.
Define and enforce architectural standards (security, cost, reliability).
Provide technical guidance and collaborate with engineering, ops, and security teams.
Architect third-party integrations and collaborate on cybersecurity initiatives.
Required Qualifications:
Around 12+ years of IT experience, with 5+ years in a Cloud/Solutions Architect role.
Proven experience architecting/implementing solutions on at least two major public cloud platforms (e.g., AWS, Azure, GCP). AWS preferred.
Strong hybrid (on-prem) and migration experience.
Demonstrated experience architecting infrastructure for AI/ML/GenAI workloads (compute, data, deployment patterns).
Deep understanding of cloud networking, security, and cost optimization.
Proficiency in IaC (Terraform/CloudFormation/ARM), containers (Docker/Kubernetes), and serverless.
Familiarity with FinOps concepts.
Excellent communication and problem-solving skills.
Preferred Qualifications:
Experience with cloud cost management and optimization strategies (FinOps) in a multi-cloud environment.
Experience with specific GenAI & MLOps platforms and tools (OpenAI, Google AI, Hugging Face, GitHub Copilot, AWS SageMaker, AWS Bedrock, MLflow, Kubeflow, Feast, ZenML).
Good understanding of on-premise data center architecture, infrastructure (compute, storage, networking), and virtualization, with experience designing and implementing hybrid cloud solutions and on-premise to cloud migration strategies.
Experience in the Fintech industry.
What does ViewTrade bring to the table:
Opportunity to do what your current firm may be hesitant to do.
An informal and self-managed work culture.
Freedom to experiment with new ideas and technologies.
A highly motivating work environment where you learn exponentially and grow with the organization.
An opportunity to create an impact at scale.
Location: GIFT CITY, Gandhinagar
Experience: 12+ years
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability status, or any other characteristic protected by the law.

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

General information
Country: India
State: Telangana
City: Hyderabad
Job ID: 43506
Department: Development
Summary
Description & Requirements
As an AI/ML Developer, you’ll play a pivotal role in creating and delivering cutting-edge enterprise applications and automations using Infor’s AI, RPA, and OS platform technology. Your mission will be to identify innovative use cases, develop proof of concepts (PoCs), and deliver enterprise automation solutions that elevate workforce productivity and improve business performance for our customers.
Title: Software Engineer
Experience: 2-5 years
Skills: AI/ML, Python, AWS, SQL
Location: Hyderabad
Key Responsibilities
Use Case Identification: Dive deep into customer requirements and business challenges. Identify innovative use cases that can be addressed through AI/ML solutions.
Data Insights: Perform exploratory data analysis on large and complex datasets. Assess data quality, extract insights, and share findings.
Data Preparation: Gather relevant datasets for training and testing. Clean, preprocess, and augment data to ensure suitability for AI tasks.
Model Development: Train and fine-tune AI/ML models. Evaluate performance using appropriate metrics and benchmarks, optimizing for efficiency.
Integration and Deployment: Collaborate with software engineers and developers to seamlessly integrate AI/ML models into enterprise systems and applications. Handle production deployment challenges.
Continuous Improvement: Evaluate and enhance the performance and capabilities of deployed AI products. Monitor user feedback and iterate on models and algorithms to address limitations and enhance user experience.
Proof of Concepts (PoCs): Develop PoCs to validate the feasibility and effectiveness of proposed solutions. Showcase the art of the possible to our clients.
Collaboration with Development Teams: Work closely with development teams on new use cases.
Best Practices and Requirements: Collaborate with team members to determine best practices and requirements.
Innovation: Contribute to our efforts in enterprise automation and cloud innovation.
Key Requirements
Experience: A minimum of 3 years of hands-on experience implementing AI/ML models in enterprise systems.
AI/ML Concepts: In-depth understanding of supervised and unsupervised learning, reinforcement learning, deep learning, and probabilistic models.
Programming Languages: Proficiency in Python or R, along with querying languages like SQL.
Data Handling: Ability to work with large datasets, perform data preprocessing, and wrangle data effectively.
Cloud Infrastructure: Experience with AWS SageMaker or Azure ML for implementing ML solutions is highly preferred.
Frameworks and Libraries: Familiarity with scikit-learn, Keras, TensorFlow, PyTorch, or NLTK is a plus.
Analytical Skills: Strong critical thinking abilities to identify problems, formulate hypotheses, and design experiments.
Business Process Understanding: Good understanding of business processes and how they can be automated.
Domain Expertise: Familiarity with Demand Forecasting, Anomaly Detection, Pricing, Recommendation, or Analytics solutions.
Global Project Experience: Proven track record of working with global customers on multiple projects.
Customer Interaction: Experience facing customers and understanding their needs.
Communication Skills: Excellent verbal and written communication skills.
Analytical Mindset: Strong analytical and problem-solving skills.
Collaboration: Ability to work independently and collaboratively.
Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, or a related field.
Specialization: Coursework or specialization in AI, ML, Statistics & Probability, Deep Learning, Computer Vision, or NLP/NLU is advantageous.
About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com
Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.
Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
At Infor, we value your privacy; that’s why we created a policy that you can read here.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Linkedin logo

Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives.
The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production. As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization. You will lead development of data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration.
Role Responsibilities
Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency.
As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings.
Provide strategic and technical input on the AI ecosystem, including platform evolution, vendor scans, and new capability development.
Act as a subject matter expert for data engineering on cross-functional teams in bespoke organizational initiatives by providing thought leadership and execution support for data engineering needs.
Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, data ops and automation, tools, and best practices.
Stay updated with the latest advancements in data engineering technologies and tools and evaluate their applicability for improving our data engineering capabilities.
Direct data engineering research to advance design and development capabilities.
Collaborate with stakeholders to understand data requirements and address them with data solutions.
Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions.
Demonstrate a proactive approach to identifying and resolving potential system issues.
Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work.
Basic Qualifications
Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline).
7+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g., Java, C++, etc.) in building data pipelines and processes.
Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
Recognized by peers as an expert in data engineering with deep expertise in data modeling, data governance, and data pipeline management principles.
In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies.
Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.).
Familiarity with machine learning and AI technologies and their integration with data engineering pipelines.
Demonstrated experience interfacing with internal and external teams to develop innovative data solutions.
Strong understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP).
Highly self-motivated to deliver both independently and with strong team collaboration.
Ability to creatively take on new challenges and work outside your comfort zone.
Strong English communication skills (written & verbal).
Preferred Qualifications
Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required).
Experience in software/product engineering.
Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms.
Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes.
Experience working effectively in a distributed remote team environment.
Hands-on experience working in Agile teams, processes, and practices.
Expertise in cloud platforms such as AWS, Azure, or GCP.
Proficiency in using version control systems like Git.
Pharma & Life Science commercial functional knowledge.
Pharma & Life Science commercial data literacy.
Ability to work non-traditional work hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

AI Software Engineer
Job Summary:
We are seeking a highly skilled and innovative AI Solutions Specialist to design, develop, and deploy AI-driven solutions that address complex business problems. The ideal candidate will work closely with cross-functional teams to understand business requirements, evaluate AI technologies, and implement end-to-end intelligent systems using machine learning, deep learning, and other AI techniques.
Key Responsibilities:
Lead the evolution of the Data Engineering, Machine Learning, and AI capabilities through the solution lifecycle.
Collaborate with project teams, data science teams, and other development teams to drive the technical roadmap and guide development and implementation of new data-driven business solutions.
Collaborate with stakeholders to identify opportunities for AI and ML applications.
Design scalable AI solutions that integrate with existing infrastructure and processes.
Develop, train, and optimize machine learning models and AI algorithms.
Evaluate third-party AI tools and APIs for integration where applicable.
Create technical specifications, architecture documents, and proof-of-concept prototypes.
Work with data engineering teams to ensure data quality, accessibility, and readiness.
Monitor performance of AI models in production and iterate for improvement.
Ensure ethical and responsible AI practices, including explainability, bias mitigation, and privacy.
Present solutions and progress to technical and non-technical stakeholders.
Stay updated with AI research and industry trends to propose innovative solutions.
Required Skills:
Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, and scikit-learn.
Proficiency in Python and experience with AI development libraries and tools.
Understanding of cloud AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and the broader data/platform stack (AWS, Databricks, Snowflake, Python, PySpark, Docker, Kubernetes, Terraform, Ansible, Prometheus, Grafana, ELK, Hadoop, Spark, Kafka, Elasticsearch, SQL and NoSQL databases such as Postgres and Cassandra, Salesforce).
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Excellent problem-solving, communication, and stakeholder management skills.
Experience with generative AI (e.g., GPT, diffusion models) or LLMs.
Familiarity with MLOps practices and deployment pipelines (Docker, CI/CD, MLflow).
Background in natural language processing (NLP), computer vision, or reinforcement learning.
Publications, patents, or open-source contributions in AI/ML are a plus.
Deep understanding of Junos OS architecture, features, and operational nuances.
Experience:
Minimum 6–8 years in data and analytics with expertise across AI, ML, data platforms, BI tools, and data engineering.
Experience leading, architecting, and building infrastructure to manage the Data/AI model lifecycle.
Deep understanding of technology trends, architectures, and integrations related to Generative AI.
Hands-on experience with advanced analytics, predictive modelling, NLP, information retrieval, deep learning, etc.
Communication Skills: Excellent verbal and written communication skills, with the ability to explain technical concepts clearly and concisely.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

Overview of Job Role:
We are looking for a skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have expertise in AWS, CI/CD pipelines, and Terraform, with a passion for building and optimizing scalable, reliable, and secure infrastructure. This role involves close collaboration with development, QA, and operations teams to streamline deployment processes and enhance system performance.
Roles & Responsibilities:
Leadership & Strategy
Lead and mentor a team of DevOps engineers, fostering a culture of automation, innovation, and continuous improvement.
Define and implement DevOps strategies aligned with business objectives to enhance scalability, security, and reliability.
Collaborate with cross-functional teams, including software engineering, security, MLOps, and infrastructure teams, to drive DevOps best practices.
Establish KPIs and performance metrics for DevOps operations, ensuring optimal system performance, cost efficiency, and high availability.
Advocate for CPU throttling, auto-scaling, and workload optimization strategies to improve system efficiency and reduce costs.
Drive MLOps adoption, integrating machine learning workflows into CI/CD pipelines and cloud infrastructure.
Ensure compliance with ISO 27001 standards, implementing security controls and risk management measures.
Infrastructure & Automation
Oversee the design, implementation, and management of scalable, secure, and resilient infrastructure on AWS.
Lead the adoption of Infrastructure as Code (IaC) using Terraform, CloudFormation, and configuration management tools like Ansible or Chef.
Spearhead automation efforts for infrastructure provisioning, deployment, and monitoring to reduce manual overhead and improve efficiency.
Ensure high availability and disaster recovery strategies, leveraging multi-region architectures and failover mechanisms.
Manage Kubernetes (or AWS ECS/EKS) clusters, optimizing container orchestration for large-scale applications.
Drive cost optimization initiatives, implementing intelligent cloud resource allocation strategies.
CI/CD & Observability
Architect and oversee CI/CD pipelines, ensuring seamless automation of application builds, testing, and deployments.
Enhance observability and monitoring by implementing tools like CloudWatch, Prometheus, Grafana, ELK Stack, or Datadog.
Develop robust logging, alerting, and anomaly detection mechanisms to ensure proactive issue resolution.
Security & Compliance (ISO 27001 Implementation)
Lead the implementation and enforcement of ISO 27001 security standards, ensuring compliance with information security policies and regulatory requirements.
Develop and maintain an Information Security Management System (ISMS) to align with ISO 27001 guidelines.
Implement secure access controls, encryption, IAM policies, and network security measures to safeguard infrastructure.
Conduct risk assessments, vulnerability management, and security audits to identify and mitigate threats.
Ensure security best practices are embedded into all DevOps workflows, following DevSecOps principles.
Work closely with auditors and compliance teams to maintain SOC 2, GDPR, and other regulatory frameworks.
Required Skills and Qualifications:
5+ years of experience in DevOps, cloud infrastructure, and automation, with at least 3+ years in a managerial or leadership role.
Proven experience managing AWS cloud infrastructure at scale, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation.
Expertise in Terraform and Infrastructure as Code (IaC) principles.
Strong background in CI/CD pipeline automation with tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI.
Hands-on experience with Docker and Kubernetes (or AWS ECS/EKS) for container orchestration.
Experience in CPU throttling, auto-scaling, and performance optimization for cloud-based applications.
Strong knowledge of Linux/Unix systems, shell scripting, and network configurations.
Proven experience with ISO 27001 implementation, ISMS development, and security risk management.
Familiarity with MLOps frameworks like Kubeflow, MLflow, or SageMaker, and with integrating ML pipelines into DevOps workflows.
Deep understanding of observability tools such as ELK Stack, Grafana, Prometheus, or Datadog.
Strong stakeholder management and communication skills, and the ability to collaborate across teams.
Experience in regulatory compliance, including SOC 2, ISO 27001, and GDPR.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Naukri logo

Develop and deploy ML pipelines using MLOps tools, build FastAPI-based APIs, support LLMOps and real-time inferencing, collaborate with DS/DevOps teams, and ensure performance and CI/CD compliance in AI infrastructure projects.
Required Candidate profile
Experienced Python developer with 4–8 years in MLOps, FastAPI, and AI/ML system deployment. Exposure to LLMOps, GenAI models, and containerized environments, with strong collaboration across the ML lifecycle.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role – AIML Data Scientist
Location: Coimbatore
Mode of Interview - In Person
Job Description
Be a hands-on problem solver with a consultative approach, who can apply Machine Learning & Deep Learning algorithms to solve business challenges.
Use the knowledge of a wide variety of AI/ML techniques and algorithms to find what combination of these techniques can best solve the problem.
Improve model accuracy to deliver greater business impact.
Estimate business impact due to deployment of the model.
Work with the domain/customer teams to understand business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge.
Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development – Python / R / SQL / cloud data pipelines.
Design, develop & deploy Deep Learning models using TensorFlow / PyTorch.
Experience in using Deep Learning models with text, speech, image, and video data.
Design & develop NLP models for Text Classification, Custom Entity Recognition, Relationship Extraction, Text Summarization, Topic Modeling, Reasoning over Knowledge Graphs, and Semantic Search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
Design and develop image recognition & video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
Knowledge of state-of-the-art Deep Learning algorithms.
Optimize and tune Deep Learning models for the best possible accuracy.
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau.
Work with application teams in deploying models on cloud as a service or on-prem.
Deployment of models in a Test / Control framework for tracking.
Build CI/CD pipelines for ML model deployment.
Integrate AI & ML models with other applications using REST APIs and other connector technologies.
Constantly upskill and update with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AIML work and its impact.
Technology/Subject Matter Expertise
Sufficient expertise in machine learning, mathematical and statistical sciences.
Use of versioning & collaborative tools like Git / GitHub.
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming.
Develop prototype-level ideas into a solution that can scale to industrial-grade strength.
Ability to quantify & estimate the impact of ML models.
Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground.
Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control.
Ability to think ahead and anticipate the needs for solving the problem.
Ability to communicate key messages effectively, and articulate strong opinions in large forums.
Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle.
Ability to process huge amounts of data using PySpark/Hadoop.
Development & application of Reinforcement Learning.
Knowledge of Optimization/Genetic Algorithms.
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios.
Optimize and tune deep learning models for the best possible accuracy.
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy will be important.
Experience of working with AI & Cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud will be a big plus.
Experience with platforms like DataRobot, CognitiveScale, H2O.ai will be a big plus.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs
Location: Chennai
What you’ll do
Tame data → pull, clean, and shape structured & unstructured data.
Orchestrate pipelines → Airflow / Step Functions / ADF… your call.
Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
Scale → Spark / Databricks for the heavy lifting.
Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow.
Pair up → work with engineers, architects, and business folks to solve real problems, fast.
What you bring
3+ yrs hands-on MLOps (4-5 yrs total software experience).
Proven chops on one hyperscaler (AWS, Azure, or GCP).
Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / scikit-learn.
You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
You prototype with open source first, choose the right tool, then make it scale.
Sharp mind, low ego, bias for action.
Nice-to-haves
SageMaker, Azure ML, or Vertex AI in production.
Love for clean code, clear docs, and crisp PRs.
Why Datadivr?
Domain focus: we live and breathe F&B — your work ships to plants, not just slides.
Small team, big autonomy: no endless layers; you own what you build.
📬 How to apply
Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant.
Know someone perfect? Please share — good people know good people.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Delhi

Remote

Why NeuraFlash:
At NeuraFlash, we are redefining the future of business through the power of AI and groundbreaking technologies like Agentforce. As a trusted leader in AI, Amazon, and Salesforce innovation, we craft intelligent solutions—integrating Salesforce Einstein, Service Cloud Voice, Amazon Connect, Agentforce and more—to revolutionize workflows, elevate customer experiences, and deliver tangible results. From conversational AI to predictive analytics, we empower organizations to stay ahead in an ever-evolving digital landscape with cutting-edge, tailored strategies. We are proud to be creating the future of generative AI and AI agents. Salesforce has launched Agentforce, and NeuraFlash was selected as the only partner for the private beta prior to launch. Post-launch, we've earned the distinction of being Salesforce's #1 partner for Agentforce, reinforcing our role as pioneers in this transformative space.
Be part of the NeuraFlash journey and help shape the next wave of AI-powered transformation. Here, you'll collaborate with trailblazing experts who are passionate about pushing boundaries and leveraging technologies like Agentforce to create impactful customer outcomes. Whether you're developing advanced AI-powered bots, streamlining business operations, or building solutions using the latest generative AI technologies, your work will drive innovation at scale. If you're ready to make your mark in the AI space, NeuraFlash is the place for you.
AS AN AWS MANAGER / SR. MANAGER, YOU WILL HAVE THE OPPORTUNITY TO EXECUTE THE FOLLOWING:
Managerial Roles & Responsibilities
Act as the AWS subject matter expert, providing leadership and guidance to internal teams.
Coach, mentor, and develop junior AWS team members while setting clear goals and expectations.
Conduct regular performance reviews, 1-on-1 meetings, and personal development planning.
Manage project resource staffing, utilization, and capacity planning while actively contributing to talent acquisition.
Drive hiring strategies to support organizational growth.
Build and lead high-performing teams, fostering a collaborative and motivated work culture.
Establish and oversee OKRs, ensuring alignment with business objectives.
Collaborate with business leaders to prioritize initiatives and drive impactful outcomes.
Represent the company and team in talent acquisition efforts, analyzing cultural fit and promoting an engaging and inclusive workplace.
Balance people management responsibilities with individual contributions as an AWS Solution/Technical Architect.
Technical Roles & Responsibilities
Demonstrate deep expertise in AWS infrastructure, security, and compliance, ensuring alignment with best practices and regulatory requirements.
Architect, implement, and optimize AWS solutions, focusing on scalability, cost-efficiency, and resilience.
Oversee cloud governance, automation, and DevOps best practices to enhance operational efficiency.
Lead complex AWS projects, integrating services like Amazon Connect, Amazon Lex, AWS SageMaker, Amazon Q, and Amazon Bedrock.
Drive innovation in Cloud Contact Center as a Service (CCaaS) by leveraging AWS and third-party platforms (Genesys, Twilio, Avaya, NICE CX, TalkDesk, Five9, RingCentral).
Ensure seamless integration between Amazon Connect and Salesforce Service Cloud (SCV BYOA, SCV Bundle).
Identify and address technical challenges, proactively resolving roadblocks for teams.
Advocate for AWS best practices and provide technical leadership in solution design.
Qualifications
Minimum 3 years of experience leading and managing technical teams.
For Sr. Manager roles, experience managing Manager(s) is an added advantage.
AWS Solution Architect Associate Certification.
Strong expertise in AWS cloud architecture, security, compliance, and automation.
Proven track record of leading technical teams while driving innovation and efficiency.
Experience in the Contact Center domain, particularly with Amazon Connect.
Knowledge of AI/ML-driven solutions and AWS services such as SageMaker, Bedrock, and Amazon Q.
Experience integrating Amazon Connect with Salesforce Service Cloud models (SCV BYOA, SCV Bundle) is a plus.
Background in Cloud Contact Center solutions such as Genesys, Twilio, Avaya, NICE CX, TalkDesk, Five9, or RingCentral is preferred.
What's it like to be a part of NeuraFlash?
Remote & In-Person: Whether you work out of our HQ in Massachusetts, one of our regional hubs, or you're one of the over half of our NeuraFlash Family who work remotely, we're focused on keeping everyone connected and unified as one team.
Travel: Get ready to pack your bags and hit the road! For certain roles, travel is an exciting part of the job, with an anticipated travel commitment of up to 25%. So, if you have a passion for adventure and don't mind a little jet-setting, this opportunity could be your ticket to exploring new places while making a positive impact on clients.
Flexibility: Do you have to take the dog to the vet, pick up the kids from school, or the in-laws from the airport? We know that a perfect 9-5 isn't possible. So if you have to jump out to do any of those, no problem! We build a culture of trust and understanding. We value good work, not the hours in which you get it done.
Collaboration: You have a voice here! If you work with a team of smart people like we do, it's a no-brainer to take suggestions and feedback on how to keep NeuraFlash thriving. Our executive team holds town halls & company meetings where they address any suggestions or questions asked, no matter how big or small.
Celebrate Often: We take our work seriously, but we don't take ourselves too seriously. Whether it is an arm wrestling contest, costume party, or ugly holiday sweaters, our teams love to have fun. And while we work hard, we don't forget to slow down and celebrate the big things and the small things together.
Location: NeuraFlash strives to provide you with the flexibility to work in the location that makes the most sense for your lifestyle. For those that prefer an office setting, this role may be based in any of our hub locations within the United States. If you prefer to work from home, we can accommodate remote locations for our employees based in the United States, anywhere within Alberta, British Columbia, or Ontario for our Canada-based employees, anywhere in India for our India-based employees, and anywhere within Colombia for our Colombia-based employees!

Posted 2 weeks ago

Apply

3.0 years

25 Lacs

Bhubaneswar, Odisha, India

Remote

Linkedin logo

Experience: 3.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+11:00) Australia/Melbourne (AEDT)
Opportunity Type: Remote
Placement Type: Full Time Indefinite Contract (40 hrs a week / 160 hrs a month)
(*Note: This is a requirement for one of Uplers' clients - Okkular)
What do you need for this opportunity?
Must-have skills: communication, problem-solving, Agentic AI, AWS services (Lambda, SageMaker, Step Functions), fastai, LangChain, Large Language Models (LLMs), Natural Language Processing (NLP), PyTorch, Go, Python
Okkular is looking for:
About The Job
Company Description: We are a leading provider of fashion e-commerce solutions, leveraging generative AI to empower teams with innovative tools for merchandising and product discovery. Our mission is to enhance every product page with engaging customer-centric narratives, propelling accelerated growth and revenue generation. Join us in shaping the future of online fashion retail through cutting-edge technology and unparalleled creativity within the Greater Melbourne Area.
Role Description: This is a full-time remote working position in India as a Senior AI Engineer. The Senior AI Engineer will be responsible for pattern recognition, neural network development, software development, and natural language processing tasks on a daily basis.
Qualifications:
Proficiency in sklearn, PyTorch, and fastai for implementing algorithms and training/improving models.
Familiarity with Docker and AWS cloud services like Lambda, SageMaker, and Bedrock.
Familiarity with Streamlit.
Knowledge of LangChain, LlamaIndex, Ollama, OpenRouter, and other relevant technologies.
Expertise in pattern recognition and neural networks.
Experience in Agentic AI development.
Strong background in Computer Science and Software Development.
Knowledge of Natural Language Processing (NLP).
Ability to work effectively in a fast-paced environment and collaborate with cross-functional teams.
Strong problem-solving skills and attention to detail.
Master’s or PhD in Computer Science, AI, or a related field is preferred, but not mandatory; strong experience in the field is a sufficient alternative.
Prior experience in fashion e-commerce is advantageous.
Languages: Python, Golang
Engagement Type: Direct-hire
Job Type: Permanent
Location: Remote
Working time: 2:30 PM IST to 11:30 PM IST
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply


Exploring SageMaker Jobs in India

Amazon SageMaker is a rapidly growing skill area in India, with many companies looking to hire professionals who have expertise in the platform. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here are sample interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from a traditional, self-managed machine learning workflow?
  • What is a SageMaker notebook instance?

  • Medium:
  • How do you deploy a model in SageMaker? (a minimal sketch follows this list)
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?
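
To make the deployment and tuning questions concrete, here is a minimal sketch of the train-then-deploy flow with the SageMaker Python SDK. The IAM role ARN, S3 prefix, training script name, and container version below are illustrative assumptions, not values taken from any listing on this page.

```python
# Minimal train-and-deploy sketch using the SageMaker Python SDK (v2-style API).
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical execution role

# Train a scikit-learn model from a user-provided training script.
estimator = SKLearn(
    entry_point="train.py",        # hypothetical training script
    framework_version="1.2-1",     # assumed available scikit-learn container version
    instance_type="ml.m5.large",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/train/"})  # hypothetical S3 prefix

# Deploy the trained model to a managed real-time endpoint and invoke it.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))

# Delete the endpoint when finished to stop incurring charges.
predictor.delete_endpoint()
```

Hyperparameter tuning follows the same pattern: wrap the estimator in sagemaker.tuner.HyperparameterTuner with parameter ranges and an objective metric, then call tuner.fit() to launch the tuning job.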

  • Advanced:
  • How would you handle model drift in a SageMaker deployment? (see the monitoring sketch after this list)
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
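
For the model-drift question, one common approach is to enable data capture on the endpoint and schedule SageMaker Model Monitor jobs against a baseline. The sketch below assumes an endpoint that already has data capture enabled; the S3 URIs, endpoint name, and schedule are placeholders.

```python
# Sketch of drift monitoring with SageMaker Model Monitor (assumes an existing
# endpoint with data capture enabled; S3 URIs and names are placeholders).
from sagemaker import get_execution_role
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = get_execution_role()  # inside SageMaker; pass an explicit role ARN elsewhere

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Profile the training data to produce baseline statistics and constraints.
monitor.suggest_baseline(
    baseline_dataset="s3://example-bucket/baseline/train.csv",   # hypothetical path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://example-bucket/monitoring/baseline",
    wait=True,
)

# Compare captured endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="drift-check-hourly",
    endpoint_input="my-endpoint",                                # hypothetical endpoint name
    output_s3_uri="s3://example-bucket/monitoring/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```

Violations reported by the schedule (for example, feature distributions drifting from the baseline) can then trigger alerts or a retraining pipeline, which also speaks to the cost-efficiency question: monitoring and right-sizing endpoints is usually cheaper than silently serving a stale model.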

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies