661 Sagemaker Jobs - Page 22

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

16.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Engineering Senior Manager

Location: Bangalore - Manyata Tech Park

Business & Team: The Commonwealth Bank is the leading financial institution in Australia and provides integrated financial services. This role sits within the Bankwest Technology division. Bankwest is a subsidiary of The Commonwealth Bank, and the Bankwest Technology division (BWT) is at the heart of our digital product strategy and is responsible for the management and deployment of technology change across the organisation. Our tech teams work at pace, with autonomy and local decision making, to deploy world-class solutions in pursuit of our business strategy. You are a senior engineering leader who is passionate about people and driving great technical outcomes. We are a team of big thinkers who are engineering the future of banking. Together we can create industry-leading digital solutions that impact millions of customers.

Impact & contribution:
- Empathetic and self-aware. You think and care deeply about how you interact with your team, stakeholders and customers.
- A mentor, with a passion to nurture, grow and influence those around you to think differently and always maintain a growth mindset.
- Innovative. You continually seek to improve the status quo for our customers, inspire your team to do the same, and remain resilient through change. Promoting quality and delivering at pace through the maximisation of automation is one of the key focus areas of the role.
- Risk aware. You proactively identify, understand, openly discuss and act on current and future risks.

Roles & responsibilities: The primary purpose of the role is to build high-performing data platform and data engineering teams.
- Deliver data engineering solutions aligned to the core concepts of data design, preparation, transformation, and load.
- Be proficient in data products and data mesh principles, including domain-driven data ownership, data-as-a-product, self-service data platforms, and federated governance.
- Be experienced in implementing data migration and transformation approaches to migrate data from on-premise systems to the AWS cloud.
- Establish a functional and seamless operating model that operates efficiently with Bankwest globally.
- Create empowered teams with decision-making autonomy by providing leadership and clear direction that empowers squad members to deliver on priorities, outcomes and tasks.
- Ensure the creation of a high-performing and engaged direct team through effective leadership.
- Ensure compliance with the bank's change and delivery frameworks and methodologies.
- Establish a culture of continuous improvement and introspection.
- Proactively identify and manage risks and issues.
- Manage an effective change pipeline by ensuring supply, demand and capability alignment.
- Identify, raise and mitigate operational risks; work with stakeholders, partners and others to implement appropriate risk controls and measures.
- Work with bank and partner stakeholders to ensure appropriately skilled resources are in place.
- Drive uplift in individual squad maturity and engineering practices.
- Collaborate with upstream and downstream system representatives to ensure an integrated end-to-end solution is delivered to achieve the desired outcomes.
- Proactively seek opportunities for continuous improvement of data platforms to leverage existing capabilities.

Essential skills:
- 16+ years of experience in a relevant field.
- Continuously improve data products with the best engineering solutions.
- AWS cloud experience is a prerequisite.
- Strategise, design and implement (hands-on) highly reliable and scalable data pipelines and data platforms with comprehensive test coverage on the AWS cloud, using AWS cloud-native services such as AWS SageMaker and Redshift.
- Build and implement data pipelines in distributed data platforms, including warehouses, databases, data lakes and cloud lakehouses, to enable data predictions and models, plus reporting and visualisation analysis, via data integration tools and frameworks.
- RDBMS experience on any prominent database is required, with strong SQL expertise.

Required skills: Strategic thinking; external perspective; product delivery; working with distributed teams; continuous delivery and Agile practices; people leadership; building high-performing teams; development, analysis or testing.

Education Qualification: Bachelor's or Master's degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest and x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 20/06/2025
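The posting above centres on data preparation, transformation and load. A minimal sketch of one such batch transform step, in plain Python, might look like the following; all names here (`raw_rows`, `transform`) are hypothetical illustrations, not from the posting or any CommBank system.

```python
# Illustrative sketch only: a tiny batch extract-transform-load step
# (drop incomplete rows, normalise units) before loading to a warehouse.

def transform(records):
    """Clean raw records: drop rows missing the key field, normalise units."""
    cleaned = []
    for rec in records:
        if rec.get("account") is None:
            continue  # drop incomplete rows rather than loading bad data
        cleaned.append({
            "account": rec["account"],
            "amount_dollars": rec["amount_cents"] / 100,  # cents -> dollars
        })
    return cleaned

raw_rows = [
    {"account": "A1", "amount_cents": 12500},
    {"account": None, "amount_cents": 300},   # incomplete row, dropped
    {"account": "A2", "amount_cents": 8000},
]
loaded = transform(raw_rows)                  # rows ready to load
```

A production pipeline would run a step like this inside an orchestrated framework rather than as a bare function, but the shape (validate, normalise, load) is the same.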

Posted 3 weeks ago

Apply

0.5 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description: Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity for you to work in our team and partner with the Business to provide a comprehensive view. As Data Scientist Analyst, Asset and Wealth Management Risk in our AWM Risk team, you will be a senior member of our team, embarking on a journey of innovation to introduce and scale up data-driven risk analytics solutions through data science and machine learning techniques, transforming our operations and business processes, strengthening the core value proposition of our system, and expanding flexible analytical capabilities for the growth of Newton's platform.

JPMorgan Asset & Wealth Management (AWM) Risk is seeking a dynamic risk professional with quantitative analysis skills to join our AWM Risk Analytics team. This team, part of AWM Risk Management, is a diverse group of innovative quantitative and market-risk-oriented professionals. Our responsibility is to develop and maintain risk measurement methodologies and perform analytics calculations. We also own and continuously develop the AWM Risk System (Newton) used by AWM Risk Management and Front Office stakeholders.

Job Responsibilities:
- Work with peers and stakeholders to identify use cases and opportunities for data science to create value.
- Use your knowledge of computer science, statistics, mathematics and data science techniques to provide further insights into security and portfolio risk analytics.
- Lead continuous improvements in the AI/ML and statistical techniques used in our data and analytics validation process.
- Collaborate on, design, and deliver solutions that are flexible and scalable, using the firm's approved new technologies and tools, such as AI and LLMs.
- Use the citizen-developer journey platform to find efficiencies in our processes.
- Contribute to the analysis of new and large data sets and assist with their onboarding, following our best-practice data model and architecture using big data platforms.
- Contribute to the research and enhancement of the risk methodology for AWM Risk Analytics. The methodology covers sensitivity, stress, VaR, factor modeling, and Lending Value pricing for investment (market), counterparty (credit), and liquidity risk.

Required Qualifications, Capabilities, and Skills:
- Minimum 6 months' experience as a Data Scientist or in an adjacent quantitative role.
- A quantitative, technically proficient individual who is detail-oriented, able to multi-task, and able to work independently.
- Excellent communication skills.
- A strong understanding of statistics, applied AI/ML techniques, and a practical problem-solving mindset.
- Knowledge of modular programming in SQL and Python, plus ML tooling such as AWS SageMaker, TensorFlow or similar.

Preferred Qualifications, Capabilities, and Skills:
- Practical experience in financial markets in a quantitative analysis/research role within Risk Management, a Front Office role, or equivalent is a plus.
- Knowledge of asset pricing, VaR backtesting techniques, and model performance testing is a plus.
- A degree in a quantitative or technology field (Economics, Maths/Statistics, Engineering, Computer Science or equivalent) is preferred.

About Us: JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years, and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength, and the diverse talents they bring to our global workforce are directly linked to our success.

We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team: Our professionals in our Corporate Functions cover a diverse range of areas, from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.
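The methodology the posting mentions covers VaR among other measures. As a hedged illustration of the idea (not JPMorgan's actual implementation), historical-simulation Value at Risk takes the loss threshold exceeded on a given fraction of past days; the daily P&L figures below are made up.

```python
# Generic historical-simulation VaR sketch; illustration only.

def historical_var(pnl, confidence=0.95):
    """VaR as the loss exceeded on only (1 - confidence) of past days."""
    losses = sorted(-p for p in pnl)        # convert P&L to losses, ascending
    idx = int(confidence * len(losses))     # e.g. the 95th-percentile loss
    idx = min(idx, len(losses) - 1)
    return losses[idx]

daily_pnl = [-1.2, 0.4, -0.3, 2.1, -2.5, 0.9, -0.8, 1.5, -1.9, 0.2]
var_95 = historical_var(daily_pnl, 0.95)    # worst loss in this tiny sample
```

Real risk systems would use far longer return histories, interpolated quantiles, and full revaluation of positions; this only shows the percentile-of-losses core.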

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job title: Real-World Evidence Data Scientist

About The Job - Our Team: Sanofi Business Operations is an internal Sanofi resource organization based in India, set up to centralize processes and activities supporting the Specialty Care, Vaccines, General Medicines, CHC, CMO, R&D, and Data & Digital functions. Sanofi Business Operations strives to be a strategic and functional partner for tactical deliveries to the Medical, HEVA, and Commercial organizations in Sanofi globally.

Main Responsibilities: Provide a high level of expertise in employing cutting-edge analytical and computational approaches to drive evidence-based pharmaceutical product development; provide scientific and technical leadership in machine learning and AI; work closely with other disciplines across Sanofi, including Business Units, Digital, R&D, Biostatistics, Information Technology Systems and other Data Science partners, to deliver cutting-edge analysis of key business questions.

Examples of Advanced Analytics activities:
1. Machine/deep learning to elucidate disease trajectories and patient subtypes, and to define underdiagnosed conditions and unmet health needs.
2. Creating a framework for generating reusable models and insights across big data (e.g. EHRs, claims) and rich small data sets (e.g. clinical trials, imaging).
3. Generating insights by merging diverse data streams, e.g. health, surveillance, trend, sensor, and imaging data.
4. Adopting emerging technology into an analytical framework: distributed analytics, graph databases.

People: (1) Act as a subject matter expert in machine learning, statistical and/or modelling work on team projects; (2) work with internal and external study leads to execute Advanced Analytics projects and studies.

Performance: (1) Implement and execute computational and statistical methodologies in Advanced Analytics for RWE; (2) provide expertise and execute advanced analytics for solving problems across R&D, Medical Affairs, HEVA and Market Access strategies and plans.

Process: (1) Apply a broad array of capabilities spanning machine learning, statistics, mathematics, modelling, simulation, text mining/NLP and data mining to extract insights, and be able to communicate and champion these efforts across the company; (2) plan and deploy methodological standards, standardized processes, demos, and POCs for the company's highest-priority business needs; (3) contribute to the design, development, and implementation of Sanofi's data science architecture and ecosystem to guide decision-making and build foundational capabilities.

About You - Experience: Around 10 years' experience. High proficiency in two or more technical or analytical languages (R, Python, etc.); experience with advanced ML techniques (neural networks/deep learning, reinforcement learning, SVM, PCA, etc.); ability to interact with a variety of large-scale data structures, e.g. HDFS, SQL, NoSQL; experience working across multiple environments (e.g. AWS, GCP, Linux) to optimize compute and big-data handling requirements; experience with any of the following: biomedical data types, population health data, real-world data, or novel data streams relevant to the pharmaceutical industry; experience with big data analytics platforms or high-level ML libraries such as H2O, SageMaker, Databricks, Keras, PyTorch, TensorFlow, Theano, DSSTNE or similar; ability to prototype analyses and algorithms in high-level languages embracing reproducible and collaborative technology platforms (e.g. GitHub, containers, Jupyter notebooks); exposure to NLP technologies and analyses; knowledge of some data visualization technologies (ggplot2, Shiny, Plotly, D3, Tableau or Spotfire); experience with probabilistic and/or functional programming languages such as Stan, Edward, or Scala; experience with advanced ML techniques (RNN, CNN, LSTM, GRU, genetic algorithms, reinforcement learning, etc.).

Real-World Data (RWD): Demonstrated proficiency in working with diverse real-world data sources, including but not limited to MarketScan, CPRD, TriNetX and STATinMED.

Education: PhD in a quantitative field such as Statistics, Biostatistics, or Applied Mathematics with 6 years of industry or academic experience, or a relevant Master's degree with 10 years of related industry or academic experience.

Soft skills: Strong oral and written communication skills; ability to work and collaborate in a team environment.

Languages: Excellent knowledge of English (spoken and written).

Pursue progress, discover extraordinary. Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one desire to make miracles happen. So, let's be those people. At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
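The "patient subtyping" work this posting describes is often framed as a clustering problem. As a toy illustration only (fixed initial centroids, made-up 2-D feature vectors, not Sanofi code), a minimal k-means looks like this:

```python
# Toy k-means in plain Python: alternate assignment and centroid-update steps.

def kmeans(points, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else cen
            for c, cen in zip(clusters, centroids)
        ]
    return centroids, clusters

pts = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
cents, groups = kmeans(pts, centroids=[(0.0, 0.0), (5.0, 5.0)])
```

In practice these features would come from EHR or claims data after heavy preprocessing, and a library implementation (with proper initialization and convergence checks) would be used instead.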

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

India

Remote


Job Title: Senior Data Scientist – Gen AI & ML Expert
Experience: 5–7 Years
Location: Remote
Employment Type: Full Time
Job Type: Permanent

Job Description: We are seeking a highly skilled and experienced Senior Data Scientist with deep expertise in Machine Learning, Deep Learning, and Generative AI. This is a hands-on role, ideal for professionals passionate about building scalable AI solutions and deploying them in production environments.

Key Responsibilities:
- Design, develop, and deploy advanced ML and DL models to solve real-world business problems.
- Architect and implement enterprise-level Generative AI applications with a focus on performance and scalability.
- Collaborate with cross-functional teams, including data engineers, product managers, and software developers.
- Perform data preprocessing, feature engineering, and model evaluation.
- Leverage AWS services, particularly SageMaker, for training and deployment of ML models.
- Maintain code quality and documentation following best practices in MLOps.

Required Skills & Experience:
- 5–7 years of hands-on experience with Machine Learning and Deep Learning techniques.
- 3–4 years of direct experience developing and deploying Generative AI applications at enterprise scale.
- Proficiency in Python, with strong knowledge of libraries such as TensorFlow, PyTorch, scikit-learn, Hugging Face, etc.
- Familiarity with AWS cloud services; SageMaker experience is highly preferred.
- Solid understanding of data structures, algorithms, and model optimization techniques.
- Strong analytical, problem-solving, and communication skills.
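Two of the tasks named above, data preprocessing and model evaluation, can be sketched minimally in plain Python; the data and function names below are toy illustrations, not part of any real pipeline.

```python
# Minimal preprocessing (min-max scaling) and evaluation (accuracy) sketch.

def min_max_scale(values):
    """Rescale values to the [0, 1] range, a common preprocessing step."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

scaled = min_max_scale([10, 20, 30, 40])
acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])   # 3 of 4 predictions correct
```

Production work would use library equivalents (e.g. a fitted scaler applied identically to training and serving data) but the arithmetic is the same.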

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote


Job Title: AI Engineer
Job Type: Full-time, Contractor
Location: Remote

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment.

Key Responsibilities:
- Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI Agent frameworks.
- Implement, deploy, and optimize AI solutions on Amazon SageMaker.
- Collaborate cross-functionally to integrate AI models into existing platforms and workflows.
- Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption.
- Document processes, experiments, and model performance with clear and concise written communication.
- Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability.
- Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations.

Required Skills and Qualifications:
- Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques.
- Strong proficiency in frameworks such as LangChain and LangGraph, and in building and troubleshooting AI Agents.
- Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker.
- Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences.
- Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves.
- Background in software engineering, computer science, or a related technical discipline.
- Strong problem-solving skills, accompanied by a collaborative and proactive mindset.

Preferred Qualifications:
- Experience working with remote or distributed teams across multiple time zones.
- Familiarity with prompt engineering and orchestration of complex AI agent pipelines.
- A portfolio of successfully deployed GenAI solutions in production environments.
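The RAG technique this posting names has a simple core: embed the query, rank stored documents by similarity, and hand the top hits to the generator as context. A hedged sketch of just the retrieval step, with tiny made-up embedding vectors (a real system would use a learned embedding model and a vector database):

```python
# Retrieval step of RAG: cosine-similarity top-k over toy embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, corpus, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

corpus = [
    {"text": "resetting your password", "vec": [0.9, 0.1, 0.0]},
    {"text": "billing and invoices",    "vec": [0.0, 0.2, 0.9]},
]
hits = retrieve([1.0, 0.0, 0.0], corpus, k=1)
```

Frameworks like LangChain wrap this pattern behind retriever abstractions; the ranking logic underneath is the same similarity search, just at scale.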

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As a Senior Python Engineer, you will develop data-driven applications on AWS for the client and will be responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services.
2. Architect and implement cloud-native solutions using AWS services.
3. Mentor and guide the Python development team, promoting best practices and code quality.
4. Collaborate with data scientists and analysts to implement data processing pipelines.
5. Participate in architecture discussions and contribute to technical decision-making.
6. Ensure the scalability, reliability, and performance of Python applications on AWS.
7. Stay current with Python ecosystem developments, AWS services, and industry best practices.

Professional & Technical Skills:
1. At least 7 years of experience in Python programming, with web framework expertise (Django, Flask, or FastAPI).
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well versed in Agile methodologies.
6. AWS certification is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, and able to justify decisions with solid rationale.

Additional Information:
1. The candidate should have a minimum of 7 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. A 15-year full-time education is required (bachelor's degree in Computer Science, Software Engineering, or a related field). Bachelor of Engineering in Electronics or any related stream.

Posted 3 weeks ago

Apply


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As a Senior Python Engineer, you will develop data-driven applications on AWS for the client and will be responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services.
2. Architect and implement cloud-native solutions using AWS services.
3. Mentor and guide the Python development team, promoting best practices and code quality.
4. Collaborate with data scientists and analysts to implement data processing pipelines.
5. Participate in architecture discussions and contribute to technical decision-making.
6. Ensure the scalability, reliability, and performance of Python applications on AWS.
7. Stay current with Python ecosystem developments, AWS services, and industry best practices.

Professional & Technical Skills:
1. At least 7 years of experience in Python programming, with web framework expertise (Django, Flask, or FastAPI).
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well versed in Agile methodologies.
6. AWS certification is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, and able to justify decisions with solid rationale.

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. A 15-year full-time education is required (bachelor's degree in Computer Science, Software Engineering, or a related field). Bachelor of Engineering in Electronics or any related stream.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As a Sr. Backend Engineer, you will develop data-driven applications on AWS for the client and will be responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services.
2. Architect and implement cloud-native solutions using AWS services.
3. Mentor and guide the Python development team, promoting best practices and code quality.
4. Collaborate with data scientists and analysts to implement data processing pipelines.
5. Participate in architecture discussions and contribute to technical decision-making.
6. Ensure the scalability, reliability, and performance of Python applications on AWS.
7. Stay current with Python ecosystem developments, AWS services, and industry best practices.

Professional & Technical Skills:
1. Experience in Python programming, with web framework expertise (Django, Flask, or FastAPI).
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well versed in Agile methodologies.
6. AWS certification is a plus.

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. A 15-year full-time education is required (bachelor's degree in Computer Science, Software Engineering, or a related field). Bachelor of Engineering in Electronics or any related stream.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Senior Technical Architect – Machine Learning Solutions We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI) , and scalable ML system design . This role will focus on leading the end-to-end architecture of advanced ML-driven platforms, delivering impactful, production-grade AI solutions across the enterprise. Key Responsibilities Lead the architecture and design of enterprise-grade ML platforms , including data pipelines, model training pipelines, model inference services, and monitoring frameworks. Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production. Design and implement retrieval-augmented generation (RAG) systems, vector databases , semantic search , and LLM orchestration frameworks (e.g., LangChain, Autogen). Define and enforce best practices in model development, versioning, CI/CD pipelines , model drift detection, retraining, and rollback mechanisms. Build robust pipelines for data ingestion, preprocessing, feature engineering , and model training at scale , using batch and real-time streaming architectures. Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured data use cases. Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services . Implement observability for ML models including custom metrics, performance monitoring, and explainability (XAI) tooling. Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design. Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines . 
Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML) , and scalable inference strategies (batch, real-time, streaming). Drive AI governance and responsible AI practices including auditability, compliance, bias mitigation, and data protection. Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference . Ability to coach and lead technical teams , fostering growth, knowledge sharing, and technical excellence in AI/ML domains. Experience managing the technical roadmap for AI-powered products , documentations ensuring timely delivery, performance optimization, and stakeholder alignment. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 8+ years of experience in software architecture , with 5+ years focused specifically on machine learning systems and 2 years in leading team. Proven expertise in designing and deploying ML systems at scale , across cloud and hybrid environments. Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn). Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere). Demonstrated proficiency in MLOps tools and platforms : MLflow, Kubeflow, SageMaker, Vertex AI, DataBricks, Airflow, etc. In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP – including certification(s) in one or more platforms. Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment. Ability to design LLM-based systems , including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering. Solid understanding of security, compliance , and AI risk management in ML deployments. 
Preferred Skills
- Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking.
- Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF.
- Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption.
- Familiarity with zero-shot and few-shot learning, and retrieval-enhanced inference pipelines.
- Contributions to open-source ML tools or libraries.
- Experience deploying AI copilots, agents, or assistants using orchestration frameworks.

What We Offer
Joining QX Global Group means becoming part of a creative team where you can personally grow and contribute to our collective goals. We offer competitive salaries, comprehensive benefits, and a supportive environment that values work-life balance.

Work Model
Location: Ahmedabad
Model: WFO
Shift Timings: 12:30 PM to 10:00 PM IST / 1:30 PM to 11:00 PM IST
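The retrieval half of the RAG systems this role architects reduces to nearest-neighbour search over embedding vectors. A minimal dependency-free sketch, with toy vectors standing in for real embedding-model output (document names and dimensions are illustrative; a production system would use a vector database such as FAISS or Pinecone):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, index, top_k=1):
    # Rank stored (doc_id, vector) pairs by similarity to the query vector.
    scored = [(doc_id, cosine_similarity(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy index: in practice these vectors come from an embedding model.
index = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-times", [0.1, 0.8, 0.3]),
]
print(retrieve([0.85, 0.15, 0.05], index))  # "refund-policy" ranks first
```

The retrieved document text would then be injected into the LLM prompt, which is the "augmented generation" half of the pattern.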

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You'll Be Doing…
- Lead development of advanced machine learning and statistical models
- Design scalable data pipelines using PySpark
- Perform data transformation and exploratory analysis using Pandas, NumPy, and SQL
- Build, train, and fine-tune machine learning and deep learning models using TensorFlow and PyTorch
- Mentor junior engineers and lead code reviews, best practices, and documentation
- Design and implement big data, streaming AI/ML training and prediction pipelines
- Translate complex business problems into data-driven solutions
- Promote best practices in data science and model governance
- Stay ahead of evolving technologies and guide strategic data initiatives

What We're Looking For…
You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Experience in Python, PySpark, and SQL.
- Strong proficiency in Pandas, NumPy, Excel, Plotly, Matplotlib, Seaborn, ETL, AWS, and SageMaker.
- Experience in supervised learning models (regression, classification) and unsupervised learning models (anomaly detection, clustering).
- Extensive experience with AWS analytics services, including Redshift, Glue, Athena, Lambda, and Kinesis.
- Knowledge of deep learning: autoencoders, CNNs, RNNs, LSTMs, and hybrid models.
- Experience in model evaluation, cross-validation, and hyperparameter tuning.
- Familiarity with data visualization tools and techniques.

Even better if you have one or more of the following:
- Experience with machine learning and statistical analysis.
- Experience in hypothesis testing.
- Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders.

If our company and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

#TPDRNONCDIO

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
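The model evaluation and cross-validation experience asked for above rests on a simple mechanic: partition the data into k folds and hold one fold out per round. A dependency-free sketch of the split logic (in practice scikit-learn's KFold does this, including spreading the remainder across the first folds):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        # The first `remainder` folds get one extra sample each.
        size = fold_size + (1 if fold < remainder else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# 10 samples, 3 folds -> test folds of sizes 4, 3, 3 covering every index once.
for train, test in k_fold_splits(10, 3):
    print(len(train), len(test))
```

Each round trains on `train` and scores on `test`; averaging the k scores gives a lower-variance estimate than a single train/test split.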

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Do you want to make a global impact on patient health? Do you thrive in a fast-paced environment that integrates scientific, clinical, and commercial domains through engineering, data science, and AI? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team of engineering, data science, and AI professionals is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team is a key driver of Pfizer’s digital transformation, leading process and engineering innovations to advance AI and data science applications from prototypes and MVPs to full production.

As a Manager, AI and Data Science Full Stack Engineer, you will join the Data Science Industrialization team. Your responsibilities will include architecting and implementing AI solutions at scale for Pfizer. You will iteratively develop and continuously improve data science workflows, AI-based software solutions, and AI components.

Role Responsibilities
- Develop end-to-end data engineering, data science, and analytics products and AI modules.
- Develop server-side logic using back-end technologies such as Python.
- Develop data ETL pipelines using Python and SQL.
- Create responsive and visually appealing web interfaces using HTML, CSS, and Bootstrap.
- Build dynamic and interactive web applications with JavaScript frameworks (React, Vue, or AngularJS).
- Build data visualizations and data applications to enable data exploration and insights generation (e.g., Tableau, Power BI, Dash, Shiny, Streamlit).
- Implement and maintain infrastructure and tools for software development and deployment using IaC tools.
- Automate processes for continuous integration, delivery, and deployment (CI/CD pipelines) to ensure smooth software delivery.
- Implement logging and monitoring tools to gain insights into system behavior.
- Collaborate with data scientists, engineers, and colleagues from across Pfizer to integrate AI and data science models into production solutions.
- Demonstrate a proactive approach to identifying and resolving potential system issues.
- Contribute to the best practices of the team and help colleagues grow.
- Communicate complex technical concepts and insights to both technical and non-technical stakeholders.
- Stay up to date with emerging technologies and trends in your field.

Basic Qualifications
- Bachelor's or Master's degree in Computer Science or a related field (or equivalent experience).
- 5+ years of experience in software engineering, data science, or related technical fields.
- Proven experience as a Full Stack Engineer or similar role, with a strong portfolio of successful projects.
- Solid experience in programming languages such as Python or R, and experience with relevant libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Good understanding of back-end technologies, databases (SQL and NoSQL), and RESTful APIs.
- Knowledge of BI back-end concepts such as star and snowflake schemas.
- Experience in building low-code dashboard solutions with tools like Tableau, Power BI, Dash, and Streamlit.
- Highly self-motivated, capable of delivering both independently and through strong team collaboration.
- Ability to creatively tackle new challenges and step outside your comfort zone.
- Strong English communication skills (written and verbal).

Preferred Qualifications
- Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline.
- Experience in CI/CD integration (e.g., GitHub, GitHub Actions) and containers (e.g., Docker).
- Experience developing dynamic and interactive web applications; familiar with React, AngularJS, Vue.
- Experience in creating responsive user interfaces; familiar with technologies such as HTML, Tailwind CSS, Bootstrap, Material, Vuetify.
- Experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, or Chef.
- Proficiency in Git for version control of infrastructure code and application code.
- Familiarity with monitoring and observability tools such as Prometheus, Grafana, or the ELK stack.
- Knowledge of serverless computing and experience with serverless platforms like AWS Lambda.
- Good knowledge of data manipulation and preprocessing techniques, including data cleaning, data wrangling, and feature engineering.
- Good understanding of statistical modeling, machine learning algorithms, and data mining techniques.
- Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms.
- Familiarity with cloud-based analytics ecosystems (e.g., AWS, Snowflake).
- Hands-on experience working in Agile teams, processes, and practices.
- Ability to work non-traditional work hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
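The "data ETL pipelines using Python and SQL" responsibility above can be sketched end to end with the standard library alone. The table and column names here are invented for illustration; real pipelines would target a warehouse such as Snowflake or Redshift rather than SQLite:

```python
import sqlite3

def run_etl(conn):
    """Toy ETL: extract raw readings, transform (drop NULLs, convert units,
    aggregate), and load the result into a reporting table."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS mart (site TEXT, avg_temp_f REAL)")
    rows = cur.execute(
        """SELECT site, AVG(temp_c * 9.0 / 5.0 + 32.0)
           FROM raw_readings WHERE temp_c IS NOT NULL GROUP BY site"""
    ).fetchall()
    cur.executemany("INSERT INTO mart VALUES (?, ?)", rows)
    conn.commit()
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_readings (site TEXT, temp_c REAL)")
conn.executemany(
    "INSERT INTO raw_readings VALUES (?, ?)",
    [("lab-a", 20.0), ("lab-a", 22.0), ("lab-a", None), ("lab-b", 25.0)],
)
print(run_etl(conn))  # lab-a averages ~69.8 F, lab-b 77.0 F
```

Pushing the NULL filtering and aggregation into SQL, rather than Python, keeps the transform close to the data, which matters once tables no longer fit in memory.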

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Executive Director – Artificial Intelligence (AI) & Generative AI (GenAI) Leader

The EY-Px team is a multi-disciplinary technology team delivering client projects and solutions across key sectors and functions across the deal life cycle, helping organizations re-imagine and scale up their existing portfolios through the adoption of digital and AI/GenAI capabilities on top of strong data and cloud solution skills. These assignments cover a wide range of countries and industry sectors.

The opportunity
As the Executive Director of AI & GenAI at [SaT GDS], you will spearhead the integration of cutting-edge AI solutions to solve complex client challenges, driving measurable impact across revenue growth, cost optimization, and customer experience enhancement. This leadership role requires a visionary with deep technical expertise in AI/GenAI and a proven track record in consulting, enabling you to collaborate with regional partners to secure high-value engagements and deliver scalable, cross-sector solutions.

Your Key Responsibilities

Client Engagement & Business Development
- Partner with regional practice teams to identify AI-driven opportunities, craft tailored proposals, and win client engagements.
- Lead client workshops to diagnose pain points, design AI strategies, and articulate ROI-driven use cases (e.g., GenAI for hyper-personalization, predictive analytics for supply chain optimization).
- Build trusted advisor relationships with C-suite stakeholders, aligning AI initiatives with business outcomes.
AI Solution Development
- Architect end-to-end AI solutions: ideation, data strategy, model development (ML/GenAI), MLOps, and scaling.
- Drive cross-sector innovation (e.g., GenAI-powered customer service automation for retail, predictive maintenance in manufacturing).
- Ensure ethical AI practices, governance, and compliance across deployments.

Thought Leadership & Market Presence
- Publish insights on AI trends (e.g., multimodality, RAG architectures).
- Shape the AI go-to-market strategy, enhancing its reputation as a leader in transformative AI consulting.

Skills And Attributes For Success

Technical Expertise:
- Mastery of the AI/GenAI lifecycle: NLP, deep learning (Transformers, GANs), cloud platforms (AWS SageMaker, Azure ML), and tools (LangChain, Hugging Face).
- Proficiency in Python, TensorFlow/PyTorch, and generative models (GPT, Claude, Stable Diffusion).

Consulting Acumen:
- 15+ years in top-tier consulting, with 5+ years leading AI engagements.
- Expertise in stakeholder management, value storytelling, and commercial negotiation.

Leadership:
- Track record of building high-performing teams and strong AI portfolios.
- Exceptional communication skills, bridging technical and executive audiences.

To qualify for the role, you must have
- Experience guiding teams on projects focusing on AI/data science and communicating results to clients.
- Familiarity with implementing solutions in the Azure cloud framework.
- Excellent presentation skills.
- 18+ years of relevant work experience in developing and implementing AI and machine learning models; experience with deployment in Azure is preferred.
- Experience in the application of statistical techniques such as linear and non-linear regression, classification, optimization, forecasting, and text analytics.
- Familiarity with deep learning and machine learning algorithms and the use of popular AI/ML frameworks.
- Minimum 6 years of experience working with NLG, LLM, and DL techniques.
- Relevant understanding of deep learning and neural network techniques.
- Expertise in implementing applications using open-source and proprietary LLM models.
- Proficiency with LangChain-type orchestrators or similar Generative AI workflow management tools.
- Minimum of 6-9 years of programming in Python.
- Experience with the software development life cycle (SDLC) and principles of product development.
- Willingness to mentor team members.
- Solid thoughtfulness, technical and problem-solving skills.
- Excellent written and verbal communication skills.

Preferred Experience
- PhD/MS/MTech/BTech in Computer Science, Data Science, or a related field.
- Published research/papers on AI/GenAI applications.

Ideally, you’ll also have
- Ability to think strategically/end-to-end with a result-oriented mindset.
- Ability to build rapport within the firm and win the trust of clients.
- Willingness to travel extensively and to work on client sites / practice office locations.

Why Join Us
- Lead AI innovation at scale for Fortune 500 clients, backed by a global brand and multidisciplinary experts.
- Thrive in a culture of entrepreneurship, with access to proprietary datasets and emerging tech partnerships.
- Accelerate your career through executive visibility and equity in shaping the future of AI consulting.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines.
In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Introduction: A Career at HARMAN Lifestyle

We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. As a member of HARMAN Lifestyle, you connect consumers with the power of superior sound.
- Contribute your talents to high-end, esteemed brands like JBL, Mark Levinson and Revel
- Unite your passion for audio innovation with high-tech product development
- Create pitch-perfect, cutting-edge technology that elevates the listening experience

What You Will Do
- Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement.
- Develop and implement models using both classical machine learning techniques and modern deep learning approaches.
- Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection.
- Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements.
- Optimize models for performance and latency, including the implementation of caching strategies where appropriate.
- Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions.

What You Need To Be Successful
- Experience using various statistical techniques to derive important insights and trends.
- Proven experience in machine learning model development and analysis using classical and neural-network-based approaches.
- Strong understanding of LLM architecture, usage, and fine-tuning techniques.
- Solid understanding of statistics, data preprocessing, and feature engineering.
- Proficiency in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.).
- Strong debugging and optimization skills for both training and inference pipelines.
- Familiarity with data formats and processing tools (Pandas, Spark, Dask).
- Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.
Bonus Points if You Have
- Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar).
- Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics).
- Familiarity with cloud platforms (SageMaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection.
- Exposure to distributed training and model parallelism techniques.
- Prior experience in A/B testing ML models in production.

What Makes You Eligible
- Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related field.
- 5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting.

What We Offer
- Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location
- Access to employee discounts on world-class HARMAN and Samsung products (JBL, Harman Kardon, AKG, etc.)
- Extensive training opportunities through our own HARMAN University
- Competitive wellness benefits
- Tuition reimbursement
- “Be Brilliant” employee recognition and rewards program
- An inclusive and diverse work environment that fosters and encourages professional and personal development

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we’ve been amplifying the sense of sound.
Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!

HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. HARMAN offers a great work environment, challenging career opportunities, professional training, and competitive compensation. (www.harman.com)
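The model monitoring and drift detection practices this role calls for often reduce to comparing a feature's live distribution against its training-time baseline. One common score is the Population Stability Index (PSI); a stdlib-only sketch, where the bin proportions and the 0.2 alert threshold are illustrative assumptions to tune per use case:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions (proportions)."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]    # training-time bin proportions
live_ok = [0.24, 0.26, 0.25, 0.25]     # close to baseline -> tiny PSI
live_drift = [0.05, 0.10, 0.25, 0.60]  # shifted distribution -> large PSI

# Rule of thumb (an assumption, not a standard): PSI > 0.2 signals drift.
print(psi(baseline, live_ok) < 0.2, psi(baseline, live_drift) > 0.2)  # True True
```

In production this check would run on a schedule over recent inference inputs, feeding an alert or retraining trigger rather than a print statement.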

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production.

As a Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will develop data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration.
Role Responsibilities
- Develop data solutions to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency
- Conduct exploratory data analysis and quality checks
- Deliver scalable data pipelines that ingest and integrate data from various information sources
- Contribute to best practices, standards, and documentation to ensure consistency and scalability
- Conduct data engineering research to advance design and development capabilities
- Guide junior developers on concepts such as data modeling, database architecture, data pipeline management, DataOps and automation, tools, and best practices
- Demonstrate a proactive approach to identifying and resolving potential system issues
- Create and maintain robust technical documentation for data solutions to enable knowledge retention and sharing
- Collaborate with data scientists, engineers, and colleagues from across Pfizer to integrate AI and data science models into production solutions
- Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions

Basic Qualifications
- Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline).
- 5+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g., Java, C++, etc.) in building data pipelines and processes.
- Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
- Knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies.
- Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.).
- Understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP-DM).
- Highly self-motivated to deliver both independently and with strong team collaboration.
- Ability to creatively take on new challenges and work outside your comfort zone.
- Strong English communication skills (written and verbal).

Preferred Qualifications
- Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required).
- Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms.
- Familiarity with machine learning and AI technologies and their integration with data engineering pipelines.
- Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes.
- Experience working effectively in a distributed remote team environment.
- Hands-on experience working in Agile teams, processes, and practices.
- Proficiency in using version control systems like Git.
- Pharma & Life Science commercial functional knowledge.
- Pharma & Life Science commercial data literacy.
- Ability to work non-traditional work hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
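The "exploratory data analysis and quality checks" responsibility above typically starts with mechanical gate checks before data is loaded downstream: null rates on required fields and duplicate detection. A minimal sketch (the field names and sample records are invented for illustration):

```python
def quality_report(rows, required_fields):
    """Basic pre-load quality checks: null rate per required field, duplicate count."""
    report = {"row_count": len(rows), "null_rates": {}, "duplicates": 0}
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) in (None, ""))
        report["null_rates"][field] = missing / len(rows) if rows else 0.0
    seen = set()
    for r in rows:
        key = tuple(sorted(r.items()))  # full-row key for exact-duplicate detection
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
    return report

rows = [
    {"id": 1, "dose_mg": 50},
    {"id": 2, "dose_mg": None},
    {"id": 1, "dose_mg": 50},  # exact duplicate of the first row
]
print(quality_report(rows, ["id", "dose_mg"]))
```

A pipeline would compare the report against thresholds and fail the load (or quarantine rows) when a check breaches them.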

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: DevOps Engineer
Location: Gurugram (On-site)
Experience Required: 2–6 years
Work Schedule: Monday to Friday, 10:30 AM – 8:00 PM (1st and 3rd Saturdays off)

About Darwix AI
Darwix AI is a next-generation Generative AI platform built for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI infrastructure processes multimodal data such as voice calls, emails, chat logs, and CCTV streams to deliver real-time contextual nudges, performance analytics, and AI-assisted coaching.

Our product suite includes:
- Transform+: Real-time conversational intelligence for contact centers and field sales
- Sherpa.ai: Multilingual GenAI assistant offering live coaching, call summaries, and objection handling
- Store Intel: A computer vision solution converting retail CCTV feeds into actionable insights

Darwix AI is trusted by leading organizations including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty. We are backed by top institutional investors and are expanding rapidly across India, the Middle East, and Southeast Asia.
Key Responsibilities
- Design, implement, and manage scalable cloud infrastructure using AWS services such as EC2, S3, IAM, Lambda, SageMaker, and EKS
- Build and maintain secure, automated CI/CD pipelines using GitHub Actions, Docker, and Terraform
- Manage machine learning model deployment workflows and lifecycle using tools such as MLflow or DVC
- Deploy and monitor Kubernetes-based workloads in Amazon EKS (both managed and self-managed node groups)
- Implement best practices for configuration management, containerization, secrets handling, and infrastructure security
- Ensure system availability, performance monitoring, and failover automation for critical ML services
- Collaborate with data scientists and software engineers to operationalize model training, inference, and version control
- Contribute to Agile ceremonies and ensure DevOps alignment with sprint cycles and delivery milestones

Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 2–6 years of experience in DevOps, MLOps, or related roles
- Proficiency in AWS services including EC2, S3, IAM, Lambda, SageMaker, and EKS
- Strong understanding of Kubernetes architecture and workload orchestration in EKS environments
- Hands-on experience with CI/CD pipelines and GitHub Actions, including secure credential management using GitHub Secrets
- Strong scripting and automation skills (Python, shell scripting)
- Familiarity with model versioning tools such as MLflow or DVC, and artifact storage strategies using AWS S3
- Solid understanding of Agile software development practices and QA/testing workflows
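The failover automation mentioned above usually begins with a retry-with-exponential-backoff primitive wrapped around flaky operations (health checks, deploy steps, API calls). An illustrative stdlib sketch, where the attempt count and delays are assumptions to tune per service:

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated flaky health check: fails twice, then succeeds.
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

print(retry(flaky_check))  # "healthy", on the third attempt
```

Real automation would typically add jitter to the delay and cap total wait time; libraries and orchestrators (Kubernetes probes, for instance) provide the same behavior declaratively.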

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Job Title: Data Scientist
Job Location: Jaipur
Experience: 3 to 6 years

Job Description:
We are seeking a highly skilled and innovative Data Scientist to join our dynamic and forward-thinking team. This role is ideal for someone who is passionate about advancing the fields of Classical Machine Learning, Conversational AI, and Deep Learning Systems, and thrives on translating complex mathematical challenges into actionable machine learning models. The successful candidate will focus on developing, designing, and maintaining cutting-edge AI-based systems, ensuring seamless and engaging user experiences. Additionally, the role involves active participation in a wide variety of Natural Language Processing (NLP) tasks, including refining and optimizing prompts to enhance the performance of Large Language Models (LLMs).

Key Responsibilities:
• Generative AI Solutions: Develop innovative Generative AI solutions using machine learning and AI technologies, including building and fine-tuning models such as GANs, VAEs, and Transformers.
• Classical ML Models: Design and develop machine learning models (regression, decision trees, SVMs, random forests, gradient boosting, clustering, dimensionality reduction) to address complex business challenges.
• Deep Learning Systems: Train, fine-tune, and deploy deep learning models such as CNNs, RNNs, LSTMs, GANs, and Transformers to solve AI problems and optimize performance.
• NLP and LLM Optimization: Participate in Natural Language Processing activities, refining and optimizing prompts to improve outcomes for Large Language Models (LLMs), such as GPT, BERT, and T5.
• Data Management & Feature Engineering: Work with large datasets, perform data preprocessing, augmentation, and feature engineering to prepare data for machine learning and deep learning models.
• Model Evaluation & Monitoring: Fine-tune models through hyperparameter optimization (grid search, random search, Bayesian optimization) to improve performance metrics (accuracy, precision, recall, F1-score). Monitor model performance to address drift, overfitting, and bias.
• Code Review & Design Optimization: Participate in code and design reviews, ensuring quality and scalability in system architecture and development. Work closely with other engineers to review algorithms, validate models, and improve overall system efficiency.
• Collaboration & Research: Collaborate with cross-functional teams including data scientists, engineers, and product managers to integrate machine learning solutions into production. Stay up to date with the latest AI/ML trends and research, applying cutting-edge techniques to projects.

Qualifications:
• Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, Data Science, or a related field.
• Experience in Machine Learning: Extensive experience in both classical machine learning techniques (e.g., regression, SVM, decision trees) and deep learning systems (e.g., neural networks, transformers). Experience with frameworks such as TensorFlow, PyTorch, or Keras.
• Natural Language Processing Expertise: Proven experience in NLP, especially with Large Language Models (LLMs) like GPT, BERT, or T5. Experience in prompt engineering, fine-tuning, and optimizing model outcomes is a strong plus.
• Programming Skills: Proficiency in Python and relevant libraries such as NumPy, Pandas, Scikit-learn, and natural language processing libraries (e.g., Hugging Face Transformers, NLTK, SpaCy).
• Mathematical & Statistical Knowledge: Strong understanding of statistical modeling, probability theory, and mathematical optimization techniques used in machine learning.
• Model Deployment & Automation: Experience with deploying machine learning models into production environments using platforms such as AWS SageMaker or Azure ML, GCP AI, or similar. Familiarity with MLOps practices is an advantage. • Code Review & System Design: Experience in code review, design optimization, and ensuring quality in large-scale AI/ML systems. Understanding of distributed computing and parallel processing is a plus. Soft Skills & Behavioural Qualifications: • Must be a good team player and self-motivated to achieve positive results • Must have excellent communication skills in English. • Exhibits strong presentation skills with attention to detail. • It’s essential to have a strong aptitude for learning new techniques. • Takes ownership for responsibilities. • Demonstrates a high degree of reliability, integrity, and trustworthiness • Ability to manage time, displays appropriate sense of urgency and meet/exceed all deadlines • Ability to accurately process high volumes of work within established deadlines. Interested Candidate can share your CV or Reference at sulabh.tailang@celebaltech.com Show more Show less
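The posting above calls for hyperparameter optimization via grid search, scored by metrics such as F1. A minimal sketch with scikit-learn on synthetic data (the dataset, model choice, and parameter grid are illustrative, not from the posting):

```python
# Hyperparameter tuning via grid search, scored by F1, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary-classification data stands in for a real business dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Search a small grid of random-forest hyperparameters with 3-fold CV.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="f1",
    cv=3,
)
grid.fit(X_train, y_train)

# Report the winning configuration and its held-out F1 score.
test_f1 = f1_score(y_test, grid.best_estimator_.predict(X_test))
print(grid.best_params_, round(test_f1, 3))
```

Random search and Bayesian optimization follow the same pattern, swapping `GridSearchCV` for `RandomizedSearchCV` or a library such as Optuna.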

Posted 3 weeks ago

Apply

8.0 - 13.0 years

14 - 24 Lacs

Pune, Ahmedabad

Hybrid


Senior Technical Architect – Machine Learning Solutions

We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI), and scalable ML system design. This role will focus on leading the end-to-end architecture of advanced ML-driven platforms, delivering impactful, production-grade AI solutions across the enterprise.

Key Responsibilities
• Lead the architecture and design of enterprise-grade ML platforms, including data pipelines, model training pipelines, model inference services, and monitoring frameworks.
• Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production.
• Design and implement retrieval-augmented generation (RAG) systems, vector databases, semantic search, and LLM orchestration frameworks (e.g., LangChain, Autogen).
• Define and enforce best practices in model development, versioning, CI/CD pipelines, model drift detection, retraining, and rollback mechanisms.
• Build robust pipelines for data ingestion, preprocessing, feature engineering, and model training at scale, using batch and real-time streaming architectures.
• Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured-data use cases.
• Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services.
• Implement observability for ML models, including custom metrics, performance monitoring, and explainability (XAI) tooling.
• Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design.
• Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines.
• Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML), and scalable inference strategies (batch, real-time, streaming).
• Drive AI governance and responsible AI practices, including auditability, compliance, bias mitigation, and data protection.
• Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference.
• Coach and lead technical teams, fostering growth, knowledge sharing, and technical excellence in AI/ML domains.
• Manage the technical roadmap for AI-powered products and their documentation, ensuring timely delivery, performance optimization, and stakeholder alignment.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
• 8+ years of experience in software architecture, with 5+ years focused specifically on machine learning systems and 2+ years leading teams.
• Proven expertise in designing and deploying ML systems at scale, across cloud and hybrid environments.
• Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn).
• Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere).
• Demonstrated proficiency in MLOps tools and platforms: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Airflow, etc.
• In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP, including certification(s) in one or more platforms.
• Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment.
• Ability to design LLM-based systems, including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering.
• Solid understanding of security, compliance, and AI risk management in ML deployments.

Preferred Skills
• Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking.
• Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF.
• Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption.
• Familiarity with zero-shot and few-shot learning, and retrieval-enhanced inference pipelines.
• Contributions to open-source ML tools or libraries.
• Experience deploying AI copilots, agents, or assistants using orchestration frameworks.
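The retrieval step at the heart of the RAG systems described above can be sketched with plain cosine similarity over embeddings. A production design would use a vector database such as FAISS or Pinecone and a real embedding model; the toy three-dimensional vectors here are illustrative only:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                          # cosine similarity per document
    return np.argsort(scores)[::-1][:k]     # highest-scoring first

docs = ["reset your password", "update billing info", "contact support"]
doc_vecs = np.array([[1.0, 0.1, 0.0],
                     [0.0, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
query_vec = np.array([0.9, 0.2, 0.1])       # stands in for an embedded question

hits = [docs[i] for i in top_k(query_vec, doc_vecs, k=2)]
print(hits)  # the password-reset doc ranks first
```

The retrieved passages would then be passed to an LLM as grounding context; vector databases replace the brute-force `argsort` with approximate nearest-neighbor indexes so the same idea scales to millions of documents.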

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

Remote


SystemBender Hiring: Delivery Solution Architect – Data and Artificial Intelligence (AWS)

📍 Location: Remote
🧠 Experience: 5+ years

We're seeking an experienced AWS Delivery Solution Architect to design and deliver scalable, secure, and cost-effective data & AI solutions. You'll lead solution architecture across data pipelines, analytics platforms, and AI/ML workloads, using services like S3, Redshift, Glue, SageMaker, Lambda, and more.

🔧 Key Responsibilities
• Architect AWS-based data lakes, warehouses, and ML solutions
• Build ETL/ELT workflows (Glue, Step Functions, Lambda)
• Lead AI/ML model deployment with SageMaker & AWS AI Services
• Provide technical leadership and mentor teams
• Ensure performance, security, and compliance (IAM, KMS, VPC)

🎯 Skills & Qualifications
• 5+ years as a Solution Architect on AWS data & AI projects
• Hands-on with AWS services: S3, Redshift, Glue, EMR, SageMaker
• Strong in Python, SQL, and serverless & containerized architectures
• AWS Certified Architect / Data Analytics / ML Specialty certification preferred
• Excellent communication & leadership skills

Join us to drive cloud-first, data-driven innovation! Interested candidates can send their CV to Recruiter@systembender.com or DM me for more details!

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

RESPONSIBILITIES
• Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
• Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
• Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
• Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas).
• Monitor and optimize performance, availability, and cost of AI workloads, using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
• Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
• Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
• Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must-Haves:
• 4+ years of DevOps or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
• Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
• Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
• Proficiency in scripting languages like Python and Bash; Go or similar is a nice plus.
• Experience with monitoring, logging, and alerting systems for AI/ML workloads.
• Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
• Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
• Familiarity with prompt engineering, model fine-tuning, and inference serving.
• Experience with secure AI deployment and compliance frameworks.
• Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
• Works with a high level of initiative, accuracy, and attention to detail.
• Prioritizes multiple assignments effectively and meets established deadlines.
• Interacts successfully, efficiently, and professionally with staff and customers.
• Excellent organization skills and critical-thinking ability, from moderately to highly complex problems.
• Flexibility in meeting the business needs of the customer and the company.
• Works creatively and independently with latitude and minimal supervision, using experience and judgment to accomplish assigned goals.
• Experience navigating organizational structure.
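The drift detection named in the bonus attributes can be illustrated with a toy check: compare a feature's live distribution against its training baseline and flag drift when the shift exceeds a threshold. The z-score heuristic and threshold here are illustrative assumptions; production systems use richer statistics and dedicated monitoring tooling:

```python
import numpy as np

def mean_shift_drift(baseline, live, threshold=3.0):
    """Flag drift when the live mean moves > threshold standard errors."""
    se = baseline.std(ddof=1) / np.sqrt(len(live))
    z = abs(live.mean() - baseline.mean()) / se
    return bool(z > threshold)

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # distribution seen at training time
stable = rng.normal(0.0, 1.0, 500)      # live data, same distribution
shifted = rng.normal(0.8, 1.0, 500)     # live data after an upstream change

print(mean_shift_drift(baseline, stable), mean_shift_drift(baseline, shifted))
```

A detected drift event would then trigger the retraining or rollback mechanisms the posting describes, typically via the same CI/CD pipelines used for initial deployment.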

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Primary Skills: SageMaker, Python, LLMs

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role will be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs and systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description

We're seeking a hands-on AI/ML Engineer with deep expertise in large language models, retrieval-augmented generation (RAG), and cloud-native ML development on AWS. You'll be a key driver in building scalable, intelligent learning systems powered by cutting-edge AI and robust AWS infrastructure. If you're passionate about combining NLP, deep learning, and real-world application at scale, this is the role for you. 4+ years of specialized experience in AI/ML is required.

Core Skills & Technologies

LLM Ecosystem & APIs
• OpenAI, Anthropic, Cohere
• Hugging Face Transformers
• LangChain, LlamaIndex (RAG orchestration)

Vector Databases & Indexing
• FAISS, Pinecone, Weaviate

AWS-Native & ML Tooling
• Amazon SageMaker (training, deployment, pipelines)
• AWS Lambda (event-driven workflows)
• Amazon Bedrock (foundation model access)
• Amazon S3 (data lakes, model storage)
• AWS Step Functions (workflow orchestration)
• AWS API Gateway & IAM (secure ML endpoints)
• CloudWatch, Athena, DynamoDB (monitoring, analytics, structured storage)

Languages & ML Frameworks
• Python (primary), PyTorch, TensorFlow
• NLP, RAG systems, embeddings, prompt engineering

What You'll Do
• Model Development & Tuning
  o Design architecture for complex AI systems and make strategic technical decisions
  o Evaluate and select appropriate frameworks, techniques, and approaches
  o Fine-tune and deploy LLMs and custom models using AWS SageMaker
  o Build RAG pipelines with LlamaIndex/LangChain and vector search engines
• Scalable AI Infrastructure
  o Architect distributed model training and inference pipelines on AWS
  o Design secure, efficient ML APIs with Lambda, API Gateway, and IAM
• Product Integration
  o Lead development of novel solutions to challenging problems
  o Embed intelligent systems (tutoring agents, recommendation engines) into learning platforms using Bedrock, SageMaker, and AWS-hosted endpoints
• Rapid Experimentation
  o Prototype multimodal and few-shot learning workflows using AWS services
  o Automate experimentation and A/B testing with Step Functions and SageMaker Pipelines
• Data & Impact Analysis
  o Leverage S3, Athena, and CloudWatch to define metrics and continuously optimize AI performance
• Cross-Team Collaboration
  o Work closely with educators, designers, and engineers to deliver AI features that enhance student learning
  o Mentor junior engineers and provide technical leadership

Who You Are
• Deeply Technical: Strong foundation in machine learning, deep learning, and NLP/LLMs
• AWS-Fluent: Extensive experience with AWS ML services (especially SageMaker, Lambda, and Bedrock)
• Product-Minded: You care about user experience and turning ML into real-world value
• Startup-Savvy: Comfortable with ambiguity, fast iterations, and wearing many hats
• Mission-Aligned: Passionate about education, human learning, and AI for good

Bonus Points
• Hands-on experience fine-tuning LLMs or building agentic systems on AWS
• Open-source contributions in AI/ML or NLP communities
• Familiarity with AWS security best practices (IAM, VPC, private endpoints)
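The prompt-engineering side of the RAG pipelines mentioned above boils down to stitching retrieved passages into a grounded prompt for the LLM. A minimal sketch; the template and field names are illustrative rather than any specific framework's API:

```python
def build_rag_prompt(question, passages):
    """Assemble retrieved passages and a question into a grounded LLM prompt."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below. "
        "Cite passage numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "When was the course last updated?",
    ["The syllabus was revised in March 2024.",
     "Enrollment opens each September."],
)
print(prompt)
```

Frameworks like LangChain and LlamaIndex wrap this assembly step in prompt templates, but the underlying structure (instructions, numbered context, question) is the same.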

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote


Job Title: AI Engineer
Job Type: Full-time, Contractor
Location: Remote

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment.

Key Responsibilities
• Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI agent frameworks.
• Implement, deploy, and optimize AI solutions on Amazon SageMaker.
• Collaborate cross-functionally to integrate AI models into existing platforms and workflows.
• Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption.
• Document processes, experiments, and model performance with clear and concise written communication.
• Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability.
• Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations.

Required Skills and Qualifications
• Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques.
• Strong proficiency in frameworks such as LangChain and LangGraph, and in building and debugging AI agents.
• Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker.
• Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences.
• Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves.
• Background in software engineering, computer science, or a related technical discipline.
• Strong problem-solving skills accompanied by a collaborative and proactive mindset.

Preferred Qualifications
• Experience working with remote or distributed teams across multiple time zones.
• Familiarity with prompt engineering and orchestration of complex AI agent pipelines.
• A portfolio of successfully deployed GenAI solutions in production environments.
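The agent frameworks named above (LangChain, LangGraph) formalize a tool-calling loop: the model picks a tool, the runtime dispatches it, and the result feeds the answer. A stripped-down sketch of that pattern; the tool names and the mocked "model" are illustrative stand-ins, not any framework's API:

```python
# Available tools the agent runtime can dispatch to.
TOOLS = {
    # eval is restricted to arithmetic here; real agents use safe tool impls.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def mock_model(query):
    """Stand-in for an LLM deciding which tool to call for a query."""
    if any(ch.isdigit() for ch in query):
        return {"tool": "calculator", "input": query}
    return {"tool": "echo", "input": query}

def run_agent(query):
    """One step of the agent loop: decide, dispatch, return the result."""
    decision = mock_model(query)
    result = TOOLS[decision["tool"]](decision["input"])
    return f"{decision['tool']} -> {result}"

print(run_agent("2 + 3"))   # calculator -> 5
```

Real frameworks add what this sketch omits: structured tool schemas passed to the LLM, multi-step loops where tool output is fed back to the model, and state management across turns.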

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role Description

Job Title: Cloud AI/ML Engineer – Generative AI (AWS)

About The Role: We are seeking a skilled and forward-thinking Cloud AI/ML Engineer to lead the design, development, and support of scalable, secure, and high-performance generative AI applications on AWS. You'll operate at the crossroads of cloud engineering and artificial intelligence, enabling rapid and reliable delivery of cutting-edge AI solutions using services like Amazon Bedrock and SageMaker. This is an opportunity to join a collaborative team driving innovation in AI infrastructure, with a strong focus on automation, security, observability, and performance optimization.

Roles And Responsibilities

AI/ML Integration
• Utilize Amazon Bedrock for leveraging foundation models and Amazon SageMaker for training and deploying custom models.
• Design and maintain scalable generative AI applications using AWS-native AI/ML tools and services.

Deployment & Operations
• Build and manage CI/CD pipelines to automate infrastructure provisioning and model lifecycle workflows.
• Monitor infrastructure and model performance using Amazon CloudWatch and other observability tools.
• Ensure production-grade availability, fault tolerance, and performance of deployed AI systems.

Security & Compliance
• Enforce security best practices using IAM, data encryption, and access control policies.
• Maintain compliance with relevant organizational, legal, and industry-specific data protection standards.

Collaboration & Support
• Partner with data scientists, ML engineers, and product teams to translate requirements into resilient cloud-native solutions.
• Diagnose and resolve issues related to model behavior, infrastructure health, and AWS service usage.

Optimization & Documentation
• Continuously assess and optimize model performance, infrastructure cost, and resource utilization.
• Document deployment workflows, architectural decisions, and operational runbooks for team-wide reference.

Mentorship & Guidance
• Mentor peers and junior engineers by sharing knowledge of AWS services and generative AI best practices.

Must-Have Skills & Experience
• Expertise in AWS services, particularly SageMaker, Bedrock, EC2, IAM, and related cloud-native tools.
• Strong coding skills in Python, with experience developing AI applications.
• Hands-on experience with Docker for containerization and familiarity with Kubernetes for orchestration.
• Proven experience building and maintaining CI/CD pipelines for AI/ML workloads.
• Knowledge of data security, access control, and monitoring within cloud environments.
• Experience managing cloud-based data flows and infrastructure for ML workflows.

Good-to-Have (Preferred) Skills
• AWS certifications, such as AWS Certified Machine Learning – Specialty or AWS Certified DevOps Engineer.
• Understanding of responsible AI practices, particularly in generative model deployment.
• Experience in cost optimization, auto-scaling, and resource management for production AI workloads.
• Familiarity with tools like Terraform, CloudFormation, or Pulumi for infrastructure as code (IaC).
• Exposure to multi-cloud or hybrid cloud strategies involving AI/ML services.

Skills: AWS, Python, Docker, Kubernetes
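Invoking a foundation model through Amazon Bedrock, as described above, comes down to building a JSON request body and calling the runtime API. This sketch only constructs the payload; the model ID and the Anthropic-on-Bedrock message schema shown are assumptions for illustration, and no network call is made:

```python
import json

# Assumed example model ID; real deployments choose from Bedrock's catalog.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_invoke_body(prompt, max_tokens=256):
    """Build the JSON body for a Bedrock InvokeModel call (Anthropic style)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_invoke_body("Summarize our deployment runbook in one sentence.")
payload = json.loads(body)
print(MODEL_ID, payload["max_tokens"])

# With boto3 this body would be passed to something like:
#   bedrock_runtime.invoke_model(modelId=MODEL_ID, body=body)
# with IAM policies scoping which principals may invoke which models.
```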

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


• Design, implement, and manage cloud infrastructure on AWS using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
• Maintain and enhance CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, Jenkins, or ArgoCD.
• Ensure platform reliability, scalability, and high availability across development, staging, and production environments.
• Automate operational tasks, environment provisioning, and deployments using scripting languages such as Python, Bash, or PowerShell.
• Enable and maintain Amazon SageMaker environments for scalable ML model training, hosting, and pipelines.
• Integrate AWS Bedrock to provide foundation model access for generative AI applications, ensuring security and cost control.
• Manage and publish curated infrastructure templates through AWS Service Catalog to enable consistent and compliant provisioning.
• Collaborate with security and compliance teams to implement best practices around IAM, encryption, logging, monitoring, and cost optimization.
• Implement and manage observability tools like Amazon CloudWatch, Prometheus/Grafana, or ELK for monitoring and alerting.
• Support container orchestration environments using EKS (Kubernetes), ECS, or Fargate.
• Contribute to incident response, post-mortems, and continuous improvement of the platform's operational excellence.

Required Skills & Qualifications
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
• 5+ years of hands-on experience with AWS cloud services.
• Strong experience with Terraform, AWS CDK, or CloudFormation.
• Proficiency in Linux system administration and networking fundamentals.
• Solid understanding of IAM policies, VPC design, security groups, and encryption.
• Experience with Docker and container orchestration using Kubernetes (EKS preferred).
• Hands-on experience with CI/CD tools and version control (Git).
• Experience with monitoring, logging, and alerting systems.
• Strong troubleshooting skills and the ability to work independently or in a team.

Preferred Qualifications (Nice To Have)
• AWS certification (e.g., AWS Certified DevOps Engineer, Solutions Architect Associate/Professional).
• Experience with serverless technologies such as AWS Lambda, Step Functions, and EventBridge.
• Experience supporting machine learning or big data workloads on AWS.

Posted 3 weeks ago

Apply