4.0 - 8.0 years
0 Lacs
Karnataka
On-site
The team at Walmart Global Tech builds reusable technologies to assist in acquiring customers, onboarding and empowering merchants, and ensuring a seamless experience for all stakeholders. They focus on optimizing tariffs and assortment while maintaining Walmart's philosophy of Everyday Low Cost. Additionally, the team creates personalized omnichannel experiences for customers across various platforms. The Marketplace team serves as the gateway for Third-Party sellers, enabling them to manage their onboarding, catalog, orders, and returns. They are responsible for designing, developing, and operating large-scale distributed systems using cutting-edge technologies.

As a Staff Data Scientist at Walmart Global Tech, you will be responsible for developing scalable end-to-end data science solutions for data products. You will collaborate with data engineers and analysts to build ML- and statistics-driven data quality workflows. Your role will involve solving business problems by scaling advanced Machine Learning algorithms on large datasets. You will own the MLOps lifecycle, from data monitoring to model lifecycle management. Additionally, you will demonstrate thought leadership by consulting with product and business stakeholders to deploy machine learning solutions.

The ideal candidate should have knowledge of machine learning and statistics, experience with web service standards, and proficiency in architecting solutions with Continuous Integration and Continuous Delivery. Strong coding skills in Python, experience with Big Data technologies like Hadoop and Spark, and the ability to work in a big data ecosystem are preferred. Candidates should have experience in developing and deploying machine learning solutions and collaborating with data scientists. Effective communication skills and the ability to present complex ideas clearly are also desired. Educational qualifications in Computer Science, Statistics, Engineering, or related fields are preferred. Candidates with prior experience in Delivery Promise Optimization or Supply Chain domains and hands-on experience in Spark or similar frameworks will be given preference.

At Walmart Global Tech, you will have the opportunity to work in a dynamic environment where your contributions can impact millions of people. The team values innovation, collaboration, and continuous learning. Join us in reimagining the future of retail and making a difference on a global scale. Walmart Global Tech offers a hybrid work model that combines in-office and virtual presence. The company provides competitive compensation, incentive awards, and a range of benefits including maternity leave, health benefits, and more. Walmart fosters a culture of belonging where every associate is valued and respected. The company is committed to creating opportunities for all associates, customers, and suppliers. As an Equal Opportunity Employer, Walmart believes in understanding, respecting, and valuing the uniqueness of individuals while promoting inclusivity.
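For context on the kind of "ML- and statistics-driven data quality workflow" this posting describes, a minimal, hypothetical sketch in Python (pandas) might look like the following; the column names and threshold are illustrative assumptions, not Walmart's actual pipeline:

```python
import pandas as pd


def flag_outliers(df: pd.DataFrame, column: str, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag rows whose value is an outlier under a simple z-score rule.

    A toy stand-in for one check inside a statistics-driven data-quality
    workflow; real pipelines would add null checks, schema validation,
    and drift monitoring.
    """
    values = df[column].astype(float)
    mean, std = values.mean(), values.std(ddof=0)
    if std == 0:
        # Constant column: z-score is undefined, so nothing is flagged.
        df["is_outlier"] = False
        return df
    df["z_score"] = (values - mean) / std
    df["is_outlier"] = df["z_score"].abs() > z_threshold
    return df


# Illustrative usage with made-up seller order counts.
orders = pd.DataFrame({"seller_id": [1, 2, 3, 4], "daily_orders": [120, 118, 121, 950]})
print(flag_outliers(orders, "daily_orders")[["seller_id", "is_outlier"]])
```

In practice a check like this would sit alongside schema validation and drift monitoring inside the MLOps lifecycle the role owns.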
Posted 1 day ago
4.0 - 8.0 years
4 - 8 Lacs
Chennai, Tamil Nadu, India
On-site
In this role, you will:
- Establish, implement, and maintain risk standards and programs to drive compliance with federal, state, agency, legal and regulatory, and Corporate Policy requirements
- Oversee the Front Line's execution of compliance-related decisions and challenge them appropriately
- Provide oversight and monitoring of risk-based compliance programs
- Develop and oversee standards
- Provide subject matter expertise with comprehensive knowledge of the business and functional area
- Provide compliance risk expertise and consulting for projects and initiatives with moderate risk for a business line or functional area
- Monitor reporting, escalation, and timely remediation of issues, deficiencies, or regulatory matters regarding compliance risk management
- Provide direction to the business on developing corrective action plans and effectively managing regulatory change
- Identify and recommend opportunities for process improvement and risk control development
- Report findings and make recommendations to management and appropriate committees
- Interpret policies, procedures, and compliance requirements
- Collaborate and consult with peers, colleagues, and managers to resolve issues and achieve goals
- Work with complex business units, rules, and regulations on moderate-risk compliance matters
- Receive direction from leaders and exercise independent judgment while developing the knowledge to understand the function, policies, procedures, and compliance requirements

Required Qualifications:
- 4+ years of Compliance experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Model Validation or Governance background
- Working knowledge of Data/Privacy laws and Fair Credit Reporting Act (FCRA) requirements

Job Expectations:
- Provide governance over in-scope consumer-facing models, including reviewing each model to determine whether there are FCRA or data concerns
- Solicit feedback from Legal, Business Aligned Compliance Officers, and Horizontal Compliance functions during model reviews
- Prepare and archive model review documentation
- Review initial model classification, risk rank, risk rank overrides, and risk rank changes
- Review and challenge model validation results
- Participate in model monitoring programs and any required quarterly/annual monitoring reviews
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
Maharashtra
On-site
We are seeking a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer to join a team that is pivotal to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. Your role will focus on data engineering, the ML model lifecycle, and cloud-native technologies.

You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. Additionally, you will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. Furthermore, you will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translating complex model insights into actionable business recommendations and presenting findings to stakeholders will also be part of your responsibilities.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
- Experience: 5-10 years with the mentioned skill set and relevant hands-on experience.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an experienced AI Architect, you will be responsible for leading the development of AI and Machine Learning infrastructure and specialized language models for HR applications. Your role will involve establishing and leading MLOps practices and driving the creation of scalable, production-ready AI/ML systems. You will collaborate with business teams to discuss the feasibility of AI/ML use cases and translate the vision of business leaders into realistic technical implementations. Your expertise will be crucial in defining the AI architecture, selecting appropriate technologies, designing and implementing robust ML infrastructure and deployment pipelines, and establishing comprehensive MLOps practices for model training, versioning, and deployment. In this role, you will lead the development of HR-specialized language models (SLMs), implement model monitoring, observability, and performance optimization frameworks, and develop fine-tuning strategies for large language models. You will create and maintain data quality assessment and validation processes, design model versioning systems and A/B testing frameworks, and define technical standards and best practices for AI development while optimizing infrastructure for cost, performance, and scalability. To be successful in this position, you should have 7+ years of experience in ML/AI engineering or related technical roles, with at least 3 years of hands-on experience in MLOps and production ML systems. Your expertise should include fine-tuning and adapting foundation models, knowledge of model serving infrastructure and orchestration, proficiency with MLOps tools such as MLflow, Kubeflow, Weights & Biases, and experience in implementing model versioning and A/B testing frameworks. Proficiency in Python and ML frameworks like PyTorch, TensorFlow, Hugging Face, as well as experience with cloud-based ML platforms such as AWS, Azure, Google Cloud, are essential. A proven track record of deploying ML models at scale, developing AI applications for enterprise software domains, and knowledge of distributed training techniques and infrastructure will be beneficial. Familiarity with retrieval-augmented generation (RAG) systems, vector databases like Pinecone, Weaviate, Milvus, and understanding of responsible AI practices and bias mitigation are also desired. A Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field is required for this position.,
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As an experienced machine learning professional with expertise in deep learning and statistical modeling for earth observation data applications, your key responsibilities will include developing models for segmentation, object classification, crop monitoring, and disaster management. It is essential for you to have proficiency in Python, ML frameworks such as TensorFlow and PyTorch, and strong research skills. An advanced degree is required for this role. SatSure is a deep tech, decision intelligence company that focuses on agriculture, infrastructure, and climate action in the developing world. The company aims to make insights from earth observation data accessible to all. Joining SatSure means being at the forefront of building a deep tech company from India that aims to solve global problems. In this role, you will be responsible for implementing deep-learning models and frameworks for various applications in agriculture and infrastructure using earth observation data. You will design and implement performance evaluation frameworks, explore new methods for improving performance, and build model monitoring and active learning pipelines. Additionally, you will work on solving research problems related to land use and land cover (LULC), crop classification, change monitoring, and disaster management using earth observations. To qualify for this role, you should have a master's degree or PhD in Engineering, Statistics, Mathematics, or Computer Science with 5-8 years of experience in machine learning-based data product research and development. Proficiency in tools and frameworks related to computer science, probability, statistics, learning theory, and deep learning fundamentals is required. Experience with statistical modeling techniques and deep learning models like CNN, RNN, Transformers, and GANs is necessary. You should also be proficient in Python and have hands-on experience in using ML frameworks such as TensorFlow, PyTorch, PyTorch Lightning, and MxNet. Additional qualifications include the ability to design, develop, and optimize machine learning/deep learning models, knowledge of applying ML to earth observation data, and an appetite for research and implementing innovative solutions for complex problems. In addition to a challenging and rewarding work environment, SatSure offers benefits such as medical health cover for you and your family, access to mental health experts, dedicated allowances for learning and skill development, and a comprehensive leave policy. The interview process includes an introductory call, assessment, presentation, multiple interview rounds, and a culture/HR round. Join SatSure to be a part of a dynamic team that is making a positive impact on the world through innovative technology and deep learning solutions.,
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure.

Responsibilities include:
- Designing, building, and maintaining scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
- Developing and optimizing data workflows using SQL and Python or Scala for large-scale data processing and transformation.
- Implementing performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling.
- Collaborating with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
- Ensuring data quality and integrity by implementing validation, error-handling, and monitoring mechanisms.
- Working with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
- Contributing to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes.

Primary skills required: Data Engineering & Cloud proficiency in the Azure data platform (Data Factory, Databricks); strong skills in SQL and either Python or Scala for data manipulation; experience with ETL/ELT pipelines and data transformations; familiarity with Big Data technologies (Spark, Delta Lake, Parquet); expertise in data pipeline optimization and performance tuning; experience in feature engineering and model deployment; strong troubleshooting and problem-solving skills; and experience with data quality checks and validation. Nice-to-have skills include exposure to NLP, time-series forecasting, and anomaly detection; familiarity with data governance frameworks and compliance practices; AI/ML basics such as ML and MLOps integration; experience supporting ML pipelines with efficient data workflows; and knowledge of MLOps practices (CI/CD, model monitoring, versioning).

At Tesco, we are committed to providing the best for our colleagues. Total Rewards offered at Tesco are determined by four principles - simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their family, mental health support, financial coaching, and physical wellbeing facilities on campus.

Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. The Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, dedicated to various roles including Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and others.
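As a rough illustration of the pipeline work this role describes (Spark on Databricks, Parquet sources, Delta Lake targets), here is a minimal PySpark sketch; the paths and column names are placeholders, and it assumes an environment where Delta Lake is already configured:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Placeholder paths and columns, not Tesco's actual datasets.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.parquet("/mnt/raw/orders")           # structured source data

cleaned = (
    raw
    .filter(F.col("order_id").isNotNull())            # basic data-quality gate
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .dropDuplicates(["order_id"])
)

(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders")
)
```

A real pipeline would typically be orchestrated by Azure Data Factory or a Databricks job and wrapped with validation and monitoring steps rather than run as a standalone script.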
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
You will be joining EXL, a leading operations management and analytics company dedicated to helping businesses thrive in competitive and disruptive environments. Through our innovative methodologies encompassing advanced analytics, data management, digital solutions, BPO, consulting, industry best practices, and cutting-edge technology platforms, we delve deep to support companies in enhancing global operations, fostering data-driven insights, improving customer satisfaction, and effectively managing risk and compliance. EXL caters to various industries such as insurance, healthcare, banking and financial services, utilities, travel, transportation, and logistics, with a team of over 30,000 professionals located across the United States, Europe, Asia (mainly India and the Philippines), Latin America, Australia, and South Africa.

In EXL Analytics, we provide actionable solutions to business challenges through statistical data mining, advanced analytics techniques, and a consultative approach. By utilizing our proprietary methodology and top-tier technology, EXL Analytics adopts an industry-specific strategy to revolutionize clients' decision-making processes and embed analytics deeply into their business operations. Our team of nearly 2,000 data scientists and analysts worldwide supports client organizations in diverse areas such as risk minimization, advanced marketing strategies, pricing and CRM techniques, internal cost analysis, and optimizing resources within the organization. EXL Analytics caters to industries like insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation, and logistics.

You will be a proficient credit risk model professional with expertise in model monitoring, validation, implementation, and maintenance of regulatory models. Your key responsibilities will include assisting in various aspects of model risk management in alignment with regulations; conducting essential tests such as model performance evaluations, sensitivity analyses, and back-testing; collaborating with the model governance team on model development and monitoring; liaising with cross-functional teams including business stakeholders, model validation, and governance teams; and delivering high-quality client services, encompassing model documentation, within stipulated timeframes.

To excel in this role, you should have a minimum of 2+ years of experience in executing end-to-end monitoring, validation, production, and implementation of credit risk models, particularly CCAR/CECL/IFRS9 regulatory models. You should possess a strong comprehension of credit risk regulatory models, along with expertise in marketing and general analytics problems. Your ability to manage assigned projects efficiently, ensuring accuracy and timely deliverables, as well as to train, coach, and develop team members, will be crucial for success.

For qualifications, you should have prior analytics experience (2+ years), preferably in the BFSI sector, with a good understanding of General Analytics and Fraud Analytics. Past experience in roles involving problem-solving and strategic initiatives is advantageous. Proficiency in SAS/SAS macros, Python, or SQL is essential, while hands-on experience in R or any other analytical software would be a plus. Strong problem-solving skills will further enhance your suitability for this role.
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
Karnataka
On-site
As an MLOps Engineer at our Bangalore location, you will play a pivotal role in designing, developing, and maintaining robust MLOps pipelines for generative AI models on AWS. With a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, you should have at least 2 years of proven experience in building and managing MLOps pipelines, preferably in a cloud environment like AWS. Your responsibilities will include implementing CI/CD pipelines to automate model training, testing, and deployment workflows. You should have a strong grasp of containerization technologies such as Docker, container orchestration platforms, and AWS services like SageMaker, Bedrock, EC2, S3, Lambda, and CloudWatch. Practical knowledge of CI/CD principles and tools, along with experience working with large language models, will be essential for success in this role. Additionally, your role will involve driving technical discussions, explaining options to both technical and non-technical audiences, and ensuring software product cost monitoring and optimization. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow, familiarity with generative AI models, and experience with infrastructure-as-code tools like Terraform or CloudFormation will be advantageous. Moreover, knowledge of model monitoring and explainability techniques, various data storage and processing technologies, and experience with other cloud platforms like GCP will further enhance your capabilities. Any contributions to open-source projects related to MLOps or machine learning will be a valuable asset. At CGI, we believe in ownership, teamwork, respect, and belonging. Your work as an MLOps Engineer will focus on turning meaningful insights into action, with opportunities to develop innovative solutions, build relationships with teammates and clients, and access global capabilities to scale your ideas. Join us in shaping your career at one of the largest IT and business consulting services firms globally.,
Posted 1 week ago
4.0 - 8.0 years
0 - 0 Lacs
Kochi, Coimbatore
Work from Office
Job Responsibilities: Data Collection and Preparation: Collaborate with cross-functional teams, including domain experts, to collect and preprocess relevant data from multiple sources (e.g., healthcare records, financial transactions). Conduct thorough data analysis, cleaning, and feature engineering to transform raw data into usable formats for model development. Ensure data privacy and security compliance when handling sensitive data. Machine Learning Model Development: Design, build, and implement machine learning models tailored to solve specific business challenges Select appropriate datasets and data representation methods for various projects (e.g., time-series data, structured/unstructured data). Experiment with and optimize algorithms (e.g., supervised learning, unsupervised learning, anomaly detection) to improve model accuracy and efficiency. Perform statistical analysis, test different hypotheses, and fine-tune models using test results. Model Optimization and Evaluation: Evaluate model performance using various metrics (accuracy, precision, recall, F1 score, etc.) and apply techniques such as cross-validation, hyperparameter tuning Continuously improve models based on feedback, new data, and evolving business requirements. Ensure models can scale effectively in production environments. Model Deployment and Monitoring: Deploy machine learning models into production using cloud services (AWS, GCP, Azure) or containerized environments (Docker, Kubernetes). Oversee and monitor the performance of models in real-time, ensuring they provide accurate and actionable insights. Implement model retraining and continuous learning processes to maintain performance over time. Research and Innovation: Stay current with the latest developments in machine learning, AI, and relevant technologies by actively researching and experimenting with new algorithms, models, and techniques. Propose and integrate novel approaches into existing workflows to improve model accuracy, efficiency, and scalability. Required Qualifications: Technical Skills: Programming Languages : Proficiency in Python, Java, or R, with strong expertise in Python for machine learning tasks. Machine Learning Frameworks : Familiarity with TensorFlow, PyTorch, Keras, and scikit-learn for developing and optimizing models. Mathematics & Statistics : Strong foundation in linear algebra, calculus, probability, and statistics to understand and apply machine learning algorithms effectively. Big Data Technologies : Experience working with tools like Hadoop, Apache Spark for handling large datasets. Cloud Computing : Hands-on experience with cloud platforms such as AWS, GCP, or Azure for deploying and maintaining machine learning models. Version Control & Deployment : Proficient with version control tools like Git and experience in CI/CD pipelines for model deployment. Domain-Specific Knowledge (Desired): Healthcare/Medical Data : Understanding of medical datasets (e.g., patient records, diagnostic imaging) and how to build models for healthcare applications (e.g., predicting neurological disorders). Financial Data/Fraud Detection : Experience with financial transaction data and building systems for detecting fraudulent activity and financial anomalies (e.g., Anti-Money Laundering models). Soft Skills: Problem-Solving : Strong analytical and problem-solving skills, with the ability to approach challenges in creative and innovative ways. 
Communication : Excellent written and verbal communication skills, with the ability to explain complex machine learning concepts to non-technical stakeholders. Team Collaboration : Ability to work collaboratively in an interdisciplinary team, sharing knowledge and working towards common goals.
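To illustrate the model-optimization responsibilities above (cross-validation and hyperparameter tuning), here is a small, self-contained scikit-learn sketch on synthetic data; it is an illustration of the technique under assumed parameters, not a prescribed workflow:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic, imbalanced data stands in for the healthcare or financial
# datasets mentioned in the posting.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Cross-validated hyperparameter search over a small, illustrative grid.
search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
    scoring="f1",  # the precision/recall trade-off matters for imbalanced classes
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```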
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a part of Barclays Analyst Impairment team, you will play a crucial role in embedding control functionality by leading the development of the team's output. Your responsibilities will include helping colleagues demonstrate analytical and technical skills, along with knowledge of retail credit risk management fundamentals, especially in impairment management. It is essential to showcase sound judgment while collaborating with the wider team and management. To excel in this role, you will need to: - Provide commentary in various forums and own IFRS9 risk models throughout their lifecycle, from data governance to monitoring. - Develop Post Model Adjustments (PMA) to address inaccuracies and underperformance in models. - Review monitoring reports to identify reasons for model underperformance and liaise with modelling teams. - Design and implement tactical and strategic remediation, as well as support the production of commentary packs for multiple forums and group impairment committee. Additionally, some highly valued skills for this role include: - Review and challenge IFRS9 impairment models, both SPOT and Forecasting. - Produce annual and monthly forecasts for IFRS9. - Maintain management information related to impairment metrics such as stock coverage. - Possess a working knowledge of key regulatory requirements for IFRS9 and apply them to existing processes and reporting. - Present results and communicate effectively with management and stakeholders. - Foster a culture of decision-making through the provision of robust and accurate analyses. Your performance in this role will be evaluated based on key critical skills relevant for success, including risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This position is based in Noida. Purpose of the role: The primary purpose of this role is to evaluate and assess the potential impairment of financial assets, ensuring accurate reflection of the bank's economic value of its assets in financial statements. Accountabilities: - Identify potential impairment triggers and assess the potential for impairment of financial assets by analyzing relevant financial and non-financial information. - Apply quantitative and qualitative impairment tests to determine asset impairment. - Calculate impairment provision, reflect impairment loss, and prepare accurate impairment disclosures for financial statements. - Manage the performance of impaired assets and reassess their impairment status regularly. Analyst Expectations: - Perform activities timely and to a high standard, consistently driving continuous improvement. - Possess in-depth technical knowledge and experience in the assigned area of expertise. - Lead and supervise a team, guide professional development, allocate work requirements, and coordinate team resources. - Demonstrate leadership behaviours for creating an environment for colleagues to thrive and deliver excellently. All colleagues are expected to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as demonstrate the Barclays Mindset of Empower, Challenge, and Drive in their daily behaviors and actions.,
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
As a Data Scientist at our company, you will play a crucial role in supporting the development and deployment of machine learning models and analytics solutions that enhance decision-making processes throughout the mortgage lifecycle, spanning from acquisition to servicing. Your responsibilities will involve building predictive models, customer segmentation tools, and automation workflows to drive operational efficiency and improve customer outcomes. Collaborating closely with senior data scientists and cross-functional teams, you will be tasked with translating business requirements into well-defined modeling tasks, with opportunities to leverage natural language processing (NLP), statistical modeling, and experimentation frameworks within a regulated financial setting. You will report to a senior leader in Data Science. Your key responsibilities will include: - Developing and maintaining machine learning models and statistical tools for various use cases such as risk scoring, churn prediction, segmentation, and document classification. - Working collaboratively with Product, Engineering, and Analytics teams to identify data-driven opportunities and support automation initiatives. - Translating business inquiries into modeling tasks, contributing to experimental design, and defining success metrics. - Assisting in the creation and upkeep of data pipelines and model deployment workflows in collaboration with data engineering. - Applying techniques such as supervised learning, clustering, and basic NLP to structured and semi-structured mortgage data. - Supporting model monitoring, performance tracking, and documentation to ensure compliance and audit readiness. - Contributing to internal best practices, engaging in peer reviews, and participating in knowledge-sharing sessions. - Staying updated with advancements in machine learning and analytics pertinent to the mortgage and financial services sector. Qualifications: - Minimum education required: Masters or PhD in engineering, math, statistics, economics, or a related field. - Minimum years of experience required: 2 (or 1 post-PhD), preferably in mortgage, fintech, or financial services. - Required certifications: None Specific skills or abilities needed: - Experience working with structured and semi-structured data; exposure to NLP or document classification is advantageous. - Understanding of the model development lifecycle, encompassing training, validation, and deployment. - Familiarity with data privacy and compliance considerations (e.g., ECOA, CCPA, GDPR) is desirable. - Strong communication skills and the ability to present findings to both technical and non-technical audiences. - Proficiency in Python (e.g., scikit-learn, pandas), SQL, and familiarity with ML frameworks like TensorFlow or PyTorch.,
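As a hedged illustration of the document-classification work mentioned above, the sketch below uses TF-IDF features and logistic regression on toy documents; the texts and labels are invented for demonstration and are not real mortgage data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training documents; a real system would use labelled mortgage documents,
# a proper train/validation split, model monitoring, and compliance review.
docs = [
    "borrower income verification and pay stub",
    "property appraisal report with valuation",
    "hazard insurance policy declaration page",
    "appraisal summary and comparable sales",
]
labels = ["income", "appraisal", "insurance", "appraisal"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)

# Likely predicts 'appraisal' given the overlapping vocabulary.
print(clf.predict(["uniform residential appraisal report"]))
```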
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an Analyst in Impairment at Barclays, you will play a crucial role in embedding control functionality by leading the development of outputs for the team. Your responsibilities will include supporting colleagues in demonstrating analytical and technical skills, as well as knowledge of retail credit risk management fundamentals, especially in impairment management. Collaboration with the wider team and management will require sound judgment from you.

To excel in this role, you should be able to provide commentary for various forums, own IFRS9 risk models throughout their lifecycle, develop Post Model Adjustments, review model monitoring reports, design and implement remediation strategies, and support the production of commentary packs and decks for multiple forums and committees. Other highly valued skills for this role include reviewing and challenging IFRS9 impairment models, producing annual and monthly forecasts, maintaining management information on impairment metrics, understanding key regulatory requirements for IFRS9, presenting results to stakeholders, and fostering a culture of decision-making through robust analyses.

You may undergo an assessment based on critical skills such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, and job-specific technical skills. This role is based in Noida.

The purpose of this role is to evaluate and assess the potential impairment of financial assets to ensure accurate reflection of the bank's economic value of assets in its financial statements. Your accountabilities will include identifying potential impairment triggers, analyzing relevant information, applying impairment tests, assessing impairment loss, calculating impairment provisions, managing impaired assets' performance, and reassessing their impairment status regularly.

As an Analyst, you are expected to perform activities in a timely manner and to a high standard, demonstrate in-depth technical knowledge, lead and supervise a team, guide professional development, and exhibit clear leadership behaviours. You will impact related teams, partner with other functions, manage risk, and strengthen controls. All colleagues at Barclays are expected to uphold the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with demonstrating the Barclays Mindset of Empower, Challenge, and Drive to guide their behavior and actions.
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You are an experienced OCI AI Architect who will be responsible for leading the design and deployment of Gen AI, Agentic AI, and traditional AI/ML solutions on Oracle Cloud. Your role will involve a deep understanding of Oracle Cloud Architecture, Gen AI, Agentic and AI/ML frameworks, data engineering, and OCI-native services. The ideal candidate will possess a combination of deep technical expertise in AI/ML and Gen AI over OCI along with domain knowledge in Finance and Accounting. Your key responsibilities will include designing, architecting, and deploying AI/ML and Gen AI solutions on OCI using native AI services, building agentic AI solutions using frameworks such as LangGraph, CrewAI, and AutoGen, leading the development of machine learning AI/ML pipelines, and providing technical guidance on MLOps, model versioning, deployment automation, and AI governance. You will collaborate with functional SMEs, application teams, and business stakeholders to identify AI opportunities, advocate for OCI-native capabilities, and support customer presentations and solution demos. To excel in this role, you should have 10-15 years of experience in Oracle Cloud and AI, with at least 5 years of proven experience in designing, architecting, and deploying AI/ML & Gen AI solutions over OCI AI stack. Strong Python development experience, knowledge of LLMs such as Cohere and GPT, proficiency in AI/ML/Gen AI frameworks like TensorFlow, PyTorch, Hugging Face, and hands-on experience with OCI services are required. Additionally, skills in AI governance, Agentic AI frameworks, AI architecture principles, and leadership abilities are crucial for success. Qualifications for this position include Oracle Cloud certifications such as OCI Architect Professional, OCI Generative AI Professional, OCI Data Science Professional, as well as a degree in Computer Science or MCA. Any degree or diploma in AI would be preferred. Experience with front-end programming languages, Finance domain solutions, Oracle Cloud deployment, and knowledge of Analytics and Data Science would be advantageous. If you are a highly skilled and experienced OCI AI Architect with a passion for designing cutting-edge AI solutions on Oracle Cloud, we invite you to apply and join our team for this exciting opportunity.,
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Karnataka
On-site
The Data Science Lead position at our company is a key role that requires a skilled and innovative individual to join our dynamic team. We are looking for a Data Scientist with hands-on experience in Copilot Studio, M365, Power Platform, AI Foundry, and integration services. In this role, you will collaborate closely with cross-functional teams to design, build, and deploy intelligent solutions that drive business value and facilitate digital transformation. Key Responsibilities: - Develop and deploy AI models and automation solutions using Copilot Studio and AI Foundry. - Utilize M365 tools and services to integrate data-driven solutions within the organization's digital ecosystem. - Design and implement workflows and applications using the Power Platform (Power Apps, Power Automate, Power BI). - Establish, maintain, and optimize data pipelines and integration services to ensure seamless data flow across platforms. - Engage with stakeholders to comprehend business requirements and translate them into actionable data science projects. - Communicate complex analytical insights effectively to both technical and non-technical audiences. - Continuously explore and assess new tools and technologies to enhance existing processes and solutions. Required Skills and Qualifications: - 5-12 years of overall experience. - Demonstrated expertise in Copilot Studio, M365, Power Platform, and AI Foundry. - Strong understanding of data integration services and APIs. - Proficient in data modeling, data visualization, and statistical analysis. - Experience with cloud platforms like Azure or AWS is advantageous. - Exceptional problem-solving and critical-thinking abilities. - Effective communication and collaboration skills. - Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Preferred Skills: - Familiarity with additional AI and machine learning frameworks. - Knowledge of agile methodologies and collaborative tools. - Certifications in Power Platform, Azure AI, or M365 are a plus. If you are a proactive and results-driven individual with a passion for data science and a desire to contribute to cutting-edge projects, we encourage you to apply for this challenging and rewarding opportunity as a Data Science Lead with our team.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a highly experienced Senior Python & AI Engineer who will lead the development of innovative AI/ML solutions. Your strong background in Python programming, deep learning, machine learning, and proven leadership skills will drive the team towards delivering high-quality AI systems. You will architect solutions, define technical strategies, mentor team members, and ensure timely project deliveries. In the technical realm, you will be responsible for designing and implementing scalable AI/ML systems and backend services using Python. Your role includes overseeing the development of machine learning pipelines, APIs, and model deployment workflows. Reviewing code, establishing best practices, and maintaining technical quality across the team will be key aspects of your responsibilities. As a team leader, you will guide data scientists, ML engineers, and Python developers, providing mentorship, coaching, and conducting performance evaluations. You will facilitate agile practices such as sprint planning, daily stand-ups, and retrospectives. Collaboration with cross-functional teams, including product, QA, DevOps, and UI/UX, will be essential for timely feature deliveries. In AI/ML development, you will develop and optimize models for NLP, computer vision, or structured data analysis based on project requirements. Implementing model monitoring, drift detection, and retraining strategies will contribute to the success of AI initiatives. You will work closely with product managers to translate business needs into technical solutions and ensure the end-to-end delivery of features meeting performance and reliability standards. Your technical skills should include over 5 years of Python experience, 2+ years in AI/ML projects, a deep understanding of ML/DL concepts, and proficiency in frameworks like PyTorch, TensorFlow, and scikit-learn. Experience with deployment tools, cloud platforms, and CI/CD pipelines is required. Leadership qualities such as planning, estimation, and effective communication are also crucial, with at least 3 years of experience leading engineering or AI teams. Preferred qualifications include a Masters or PhD in relevant fields, exposure to MLOps practices, and familiarity with advanced AI technologies. Prior experience in fast-paced startup environments is a plus. In return, you will have the opportunity to spearhead cutting-edge AI initiatives, face diverse challenges, enjoy autonomy, and receive competitive compensation with performance-based incentives.,
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate who possesses hands-on experience with GPT-4, transformer models, and deep learning frameworks, along with a profound comprehension of model fine-tuning, deployment, and inference. Your key responsibilities will include designing, developing, and maintaining Python applications that are specifically tailored towards AI/ML and generative AI. You will also be involved in building and refining transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation purposes will be a crucial aspect of your role. Moreover, you will be tasked with implementing model inference pipelines and scalable APIs utilizing FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential in creating end-to-end AI solutions. Staying updated with the latest research and advancements in the realms of generative AI and ML is imperative for this position. From a technical standpoint, you should demonstrate a strong proficiency in Python and its relevant libraries like NumPy, Pandas, and Scikit-learn. With at least 7+ years of experience in AI/ML development, hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models, is required. Experience with frameworks like Hugging Face Transformers, OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable. Additionally, expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Having a knack for problem-solving and algorithmic thinking is crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Any contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered as added advantages. In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required. A Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit for this role.,
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Design, develop, and implement AI and Generative AI solutions to address business problems and achieve objectives. Gather, clean, and prepare large datasets to ensure readiness for AI model training. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based and on-prem products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Stay up to date with the latest AI/ML advancements, exploring new technologies, algorithms, and methodologies to enhance product capabilities. Effectively communicate technical concepts, research findings, and AI solution strategies to both technical and non-technical stakeholders. Understand the IBM tool and model landscape and work closely with cross-functional teams to leverage these tools, driving innovation and alignment. Lead and mentor team members to improve performance. Collaborate with operations, architects, and product teams to resolve issues and define product designs. Exercise best practices in agile development and software engineering; code, unit test, debug, and perform integration tests of software components. Participate in software design reviews, code reviews, and project planning. Write and review documentation and technical blog posts. Contribute to the department's attainment of organizational objectives and high customer satisfaction.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 6 years of hands-on experience developing AI-based applications using Python.
- 2+ years in performance testing and reliability testing.
- 2+ years of experience using deep learning frameworks (TensorFlow, PyTorch, or Keras).
- Solid understanding of ML/AI concepts: EDA, preprocessing, algorithm selection, machine learning frameworks, model efficiency metrics, model monitoring.
- Familiarity with Natural Language Processing (NLP) techniques.
- Deep understanding of Large Language Model (LLM) architectures, their capabilities, and their limitations.
- Proven expertise in integrating and working with LLMs to build robust AI solutions.
- Skilled in crafting effective prompts to guide LLMs to produce desired outputs.
- Hands-on experience with LLM frameworks such as LangChain, LangGraph, CrewAI, etc.
- Experience in LLM application development based on the Retrieval-Augmented Generation (RAG) concept, familiarity with vector databases, and fine-tuning large language models (LLMs) to enhance performance and accuracy.
- Proficient in microservices development using Python (Django/Flask or similar technologies).
- Experience with Agile development methodologies.
- Familiarity with platforms like Kubernetes and experience building on top of native platforms.
- Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud).
- Experience designing, building, and maintaining data processing systems working in containerized environments (Docker, OpenShift, Kubernetes).
- Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders.

Preferred technical and professional experience:
- Experience with MLOps frameworks (BentoML, Kubeflow, or similar technologies) and exposure to LLMOps.
- Experience with cost optimisation initiatives.
- Experience with end-to-end chatbot development, including design, deployment, and ongoing optimization, leveraging NLP and integrating with backend systems and APIs.
- Understanding of security and ethical best practices for data and model development.
- Contributions to open source projects.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
At OP, we are a people-first, high-touch organization committed to delivering cutting-edge AI solutions with integrity and passion. We are looking for a Senior AI Developer with expertise in AI model development, Python, AWS, and scalable tool-building. In this role, you will be responsible for designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems. As a Senior AI Developer at OP, your key responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing and integrating RAG systems for enterprise knowledge retrieval, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code for AI applications, and ensuring scalability, security, and performance of AI solutions in production. The required qualifications for this role include 5+ years of experience in AI/ML development with expertise in Python and AWS, a strong background in machine learning and deep learning, experience in LLMs, NLP, and RAG systems, hands-on experience in building and deploying AI models in production, proficiency in cloud-based AI solutions, experience in developing AI-powered tools and frameworks, knowledge of mainframe integration and enterprise AI applications, and strong coding skills with a focus on software development best practices. Preferred qualifications for this role include familiarity with MLOps, CI/CD pipelines, and model monitoring, a background in developing AI-based enterprise tools and automation, and experience with vector databases and AI-powered search technologies. At OP, we offer health insurance, accident insurance, and competitive salaries based on various factors including location, education, qualifications, experience, technical skills, and business needs. In addition to the core responsibilities, you will also be expected to participate in OP monthly team meetings, contribute to technical discussions, peer reviews, and collaborate via the OP-Wiki/Knowledge Base, as well as provide status reports to OP Account Management as requested. OP is a technology consulting and solutions company that offers advisory and managed services, innovative platforms, and staffing solutions across various fields, including AI, cyber security, and enterprise architecture. Our team consists of dynamic, creative thinkers who are passionate about quality work, and as a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, training, and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.,
Posted 2 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities (not limited to):
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline.
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS).
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway.
- Write Python scripts for automation, deployment, and monitoring.
- Engage in the design, development, and maintenance of data pipelines for various AI use cases.
- Contribute actively to key deliverables as part of an agile development team.
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
- Ensure model governance, versioning, and traceability across environments.
- Collaborate with others to source, analyse, test, and deploy data processes.
- Experience in GenAI projects.

Qualifications we seek in you!

Minimum Qualifications:
- Experience with MLOps practices.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Experience developing, testing, and deploying data pipelines.
- Strong Python programming skills.
- Hands-on experience deploying 2-3 AI/GenAI models in AWS.
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
- Clear and effective communication skills to interact with team members, stakeholders, and end users.

Preferred Qualifications/Skills:
- Experience with Docker-based deployments.
- Exposure to model monitoring tools (Evidently, CloudWatch).
- Familiarity with RAG stacks or fine-tuning LLMs.
- Understanding of GitOps practices.
- Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
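To illustrate the model-monitoring responsibility described in the posting above (drift detection and alerting), here is a minimal Python sketch using a two-sample Kolmogorov-Smirnov test on synthetic data; the threshold and the alerting path are assumptions, not Genpact's actual tooling:

```python
import numpy as np
from scipy.stats import ks_2samp


def feature_drift_alert(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution appears to have drifted.

    A two-sample Kolmogorov-Smirnov test is one simple, model-agnostic drift
    signal; production setups typically combine several such checks and push
    alerts to a monitoring tool (e.g., CloudWatch) rather than just printing.
    """
    result = ks_2samp(reference, live)
    return result.pvalue < alpha


# Synthetic example: training-time feature values vs. a shifted live window.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.4, scale=1.0, size=1000)

if feature_drift_alert(reference, live):
    print("Drift detected: schedule retraining / investigation.")
```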
Posted 3 weeks ago
9.0 - 12.0 years
60 - 80 Lacs
Pune
Hybrid
Staff Engineer
Pattern is a leading e-commerce company that helps brands grow their business on platforms like Amazon. We are committed to innovation and excellence, leveraging data-driven insights to drive partner success. Our team is composed of passionate individuals who are dedicated to making a difference in the e-commerce landscape.
The Staff Engineer leads and oversees the engineering function in developing, releasing, and maintaining AI workflows and agentic systems according to business needs. You will play a crucial role in setting and promoting engineering standards and practices that are used throughout the company. As a Staff Engineer, together with the Data Science and Engineering teams, you will lead a team that creates and maintains impactful solutions for our brands across the world. From traditional machine learning to large language models, you will work and lead throughout the model lifecycle.
Responsibilities:
• Pipeline Management: Architect, implement, and maintain scalable ML pipelines, with seamless integration from data ingestion to production deployment.
• Model Monitoring: Lead the operationalization of machine learning models, ensuring hundreds of models are continuously monitored, retrained, and optimized in real-time environments.
• Deployment: Deploy machine learning solutions in the cloud, securely and cost-effectively.
• Reporting: Effectively communicate actionable insights across teams using both automatic (e.g., alerts) and non-automatic methods.
• Leadership: MLOps is a team sport, and we require a leader who can elevate everyone in the MLOps organization. While technical skills and vision are required, your leadership skills will take AI and machine learning from theoretical to operational, delivering tangible value to both customers and internal teams.
The type of game-changing candidate we are looking for:
• Seasoned: Demonstrated experience successfully leading teams both formally and informally.
• Transparent: Willingness to identify and admit errors and seek out opportunities to continually improve, both in their own work and across the team.
• Communication: MLOps is a central node in a complex system. Clear, actionable, and concise communication, both written and verbal, is a must.
• Coaching and Team Advancement: An MLOps leader is continually developing team members and fostering a constant flow of communication and improvement across team members.
• Master's/PhD degree or a strong demonstration of technical expertise in Computer Science, Machine Learning, Data Science, or a related field.
• Multiple years of direct, extensive experience with AWS.
• Multiple years of experience with MLOps monitoring and testing tools.
• Ability to prioritize projects effectively once clear vision and goals are identified.
• Excited to empower DS with tools, practices, and training that simplify MLOps enough for Data Science to increasingly practice MLOps on their own and own products in production.
Our Core Values
• Data Fanatics: Our edge is always found in the data.
• Partner Obsessed: We are obsessed with partner success.
• Team of Doers: We have a bias for action.
• Gamechangers: We encourage innovation.
Join Us at Pattern
At Pattern, we believe in fostering a culture of innovation and growth. We are looking for talented individuals who are passionate about making an impact in the e-commerce industry. If you are ready to take on new challenges and be part of a dynamic team, we invite you to apply and join us on our journey to redefine e-commerce success.
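The monitoring responsibility in the listing above implies automated drift checks across many models. One generic way to gate retraining (illustrative only, not Pattern's implementation) is a per-feature two-sample Kolmogorov-Smirnov test, sketched below with an assumed p-value threshold and made-up feature names.

```python
"""Minimal sketch: a per-feature drift check that could gate automated
retraining. Threshold and feature names are illustrative placeholders."""
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01  # assumed alerting threshold


def drifted_features(reference: dict, current: dict) -> list[str]:
    """Return features whose live distribution differs from the training data."""
    flagged = []
    for name, ref_values in reference.items():
        _stat, p_value = ks_2samp(ref_values, current[name])
        if p_value < P_VALUE_THRESHOLD:
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = {"price": rng.normal(20, 5, 5000), "units": rng.poisson(3, 5000)}
    current = {"price": rng.normal(26, 5, 5000), "units": rng.poisson(3, 5000)}
    # 'price' has shifted and should be flagged; 'units' has not.
    print(drifted_features(reference, current))
```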
Posted 3 weeks ago
5.0 - 10.0 years
22 - 30 Lacs
Pune
Hybrid
We are looking for a Machine Learning Engineer with expertise in MLOps (Machine Learning Operations) or LLMOps (Large Language Model Operations) to design, deploy, and maintain scalable AI/ML systems. You will work on automating ML workflows, optimizing model deployment, and managing large-scale AI applications, including LLMs (Large Language Models), ensuring they run efficiently in production.
Key Responsibilities: Design and implement end-to-end MLOps pipelines for training, validation, deployment, monitoring, and retraining of ML models. Optimize and fine-tune large language models (LLMs) for various applications, ensuring performance and efficiency. Develop CI/CD pipelines for ML models to automate deployment and monitoring in production. Monitor model performance, detect drift, and implement automated retraining mechanisms. Work with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for scalable deployments. Implement best practices in data engineering, feature stores, and model versioning. Collaborate with data scientists, engineers, and product teams to integrate ML models into production applications. Ensure compliance with security, privacy, and ethical AI standards in ML deployments. Optimize inference performance and cost of LLMs using quantization, pruning, and distillation techniques. Deploy LLM-based APIs and services, integrating them with real-time and batch processing pipelines.
Key Requirements:
Technical Skills: Strong programming skills in Python, with experience in ML frameworks (TensorFlow, PyTorch, Hugging Face, JAX). Experience with MLOps tools (MLflow, Kubeflow, Vertex AI, SageMaker, Airflow). Deep understanding of LLM architectures, prompt engineering, and fine-tuning. Hands-on experience with containerization (Docker, Kubernetes) and orchestration tools. Proficiency in cloud services (AWS/GCP/Azure) for ML model training and deployment. Experience with monitoring ML models (Prometheus, Grafana, Evidently AI). Knowledge of feature stores (Feast, Tecton) and data pipelines (Kafka, Apache Beam). Strong background in distributed computing (Spark, Ray, Dask).
Soft Skills: Strong problem-solving and debugging skills. Ability to work in cross-functional teams and communicate complex ML concepts to stakeholders. Passion for staying updated with the latest ML and LLM research & technologies.
Preferred Qualifications: Experience with LLM fine-tuning, Reinforcement Learning with Human Feedback (RLHF), or LoRA/PEFT techniques. Knowledge of vector databases (FAISS, Pinecone, Weaviate) for retrieval-augmented generation (RAG). Familiarity with LangChain, LlamaIndex, and other LLMOps-specific frameworks. Experience deploying LLMs in production (ChatGPT, LLaMA, Falcon, Mistral, Claude, etc.).
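As one possible pattern for the versioning and CI/CD duties in the listing above - illustrative, not prescribed by the role - the sketch below logs a training run with MLflow and registers the resulting model so a deployment pipeline can promote a specific version. The tracking URI, experiment name, and registered model name are placeholders.

```python
"""Minimal sketch: track a training run and register the model with MLflow.
Tracking URI, experiment, and model names are assumed placeholders."""
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Registering models requires a tracking server backed by a database registry.
mlflow.set_tracking_uri("http://localhost:5000")  # assumed tracking server
mlflow.set_experiment("demand-forecast")          # hypothetical experiment

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    # Registering the artifact lets a CI/CD job promote a specific version
    # instead of an ad-hoc file path.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand-forecast-classifier",
    )
```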
Posted 1 month ago
2.0 - 7.0 years
7 - 17 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Model Monitoring/Model Validation
EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning methodologies that integrate advanced analytics, data management, digital, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 30,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia and South Africa.
EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.
EXL Service is a global analytics and digital solutions company serving industries including insurance, healthcare, banking and financial services, media, retail, and others.
Role Details: We are seeking a strong credit risk model professional with experience in model monitoring, validation, implementation and maintenance of regulatory models.
Responsibilities:
Help with various aspects of model validation.
Perform all required tests (e.g. model performance, sensitivity, back-testing, etc.).
Interact with the model governance team on model build and model monitoring.
Work closely with cross-functional teams including business stakeholders, model validation and governance teams.
Deliver high-quality client services, including model documentation, within expected timeframes.
Requirements:
Minimum 2+ years of experience executing end-to-end monitoring, validation, production, and implementation of risk models, with an understanding of marketing/general analytics problems.
Manage assigned projects in a timely manner, ensuring accuracy and that deliverables are met.
Training, coaching and development of team members.
Qualifications:
Previous experience (2+ years) in analytics, preferably in BFSI.
Good knowledge of General Analytics and Fraud Analytics.
Past experience in problem-solving roles and strategic initiatives.
Good problem-solving skills.
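A routine monitoring test of the kind referenced in this listing is a population stability index (PSI) check on score distributions. The sketch below is a generic implementation; the 0.1 and 0.25 review thresholds follow common industry convention and are not any specific client's standard.

```python
"""Minimal sketch: population stability index (PSI) between the development
sample and current scores. Bucket count and thresholds are conventional."""
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Compare the current score distribution against the development sample."""
    # Bucket edges come from the development (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, buckets + 1))[1:-1]
    exp_pct = np.bincount(np.digitize(expected, edges), minlength=buckets) / len(expected)
    act_pct = np.bincount(np.digitize(actual, edges), minlength=buckets) / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # avoid log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    development = rng.beta(2, 5, 50_000)   # scores at model build
    current = rng.beta(2.4, 5, 50_000)     # scores this monitoring period
    value = psi(development, current)
    status = "stable" if value < 0.1 else "review" if value < 0.25 else "action"
    print(f"PSI = {value:.3f} -> {status}")
```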
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
About the Team and Our Scope We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset - bias to action, fast iterations, and ruthless focus on value delivery. We're not only shaping the future of AI in business - we're shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions.
Role Overview As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions. Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You'll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction.
Key Responsibilities Build Next-Gen GenAI Pipelines: Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing. Prototype, Validate, Deploy: Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns. Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data (a minimal ingestion and retrieval sketch follows this listing). Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability. Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures. Drive Impact with Product Thinking: Collaborate with product managers and UX designers to shape user-centric solutions and understand business context. Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient - especially in regulated environments.
What Makes You a Fit Must-Have Technical Expertise Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration. Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents). Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility.
Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight. Understanding of vector search optimization and retrieval tuning. Exposure to multi-modal models.
Nice-To-Have Qualifications Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare). Familiarity with the Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments. Experience with agentic AI architectures, tools like AutoGen, or prompt chaining frameworks. Familiarity with data privacy and auditability principles in enterprise AI.
Bonus: You Think Like a Product Manager. While this role is technical at its core, we highly value candidates who are curious about how AI features become products. If you're excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery - we'll give you space to grow into it. This is a role where engineering and product are not silos. If you're keen to move in that direction, we'll mentor and support your evolution.
Why Join Us You'll be part of a team that's pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You'll prototype fast, deliver often, and see your work shape real-world outcomes - whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership, this role is a launchpad.
Swiss Re is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Reference Code: 134317
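To make the ingestion-and-retrieval flow mentioned in the listing above concrete (chunk, embed, index, retrieve), here is a deliberately simplified, self-contained sketch. The embed() function is a hashing placeholder standing in for a real embedding model (for example an Azure OpenAI deployment), and nothing here reflects Swiss Re's actual stack.

```python
"""Minimal sketch: chunk documents, embed them, and answer a query by cosine
similarity over an in-memory index. embed() is a placeholder, not a real model."""
import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Placeholder embedding: bag-of-words hashed into a fixed-size vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size word chunks with overlap - the simplest ingestion strategy."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]


def retrieve(query: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Cosine-similarity search over the in-memory 'vector store'."""
    scores = index @ embed(query)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


if __name__ == "__main__":
    corpus = [
        "Claims above the agreed threshold require a second underwriting review.",
        "Quarterly model monitoring reports are shared with the governance board.",
        "Ingestion jobs run OCR on scanned PDFs before chunking and embedding.",
    ]
    chunks = [c for doc in corpus for c in chunk(doc)]
    index = np.vstack([embed(c) for c in chunks])  # rows are chunk embeddings
    for hit in retrieve("when is a second review required?", chunks, index, k=2):
        print(hit)
```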
Posted 1 month ago
1.0 - 2.0 years
3 - 4 Lacs
Pune
Work from Office
- Hands-on experience with Jupyter Notebooks, Google Colab, Git & GitHub.
- Solid understanding of Data Visualization Tools and Dashboard Creation.
- Prior teaching/training experience (online/offline) is a plus.
- Excellent communication and presentation skills.
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Design, develop, and implement AI and Generative AI solutions to address business problems and achieve objectives. Gather, clean, and prepare large datasets to ensure readiness for AI model training. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based & on-prem products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Stay up to date with the latest AI/ML advancements, exploring new technologies, algorithms, and methodologies to enhance product capabilities. Effectively communicate technical concepts, research findings, and AI solution strategies to both technical and non-technical stakeholders. Understand the IBM tool and model landscape and work closely with cross-functional teams to leverage these tools, driving innovation and alignment. Lead and mentor team members to improve performance. Collaborate with operations, architects, and product teams to resolve issues and define product designs. Exercise best practices in agile development and software engineering. Code, unit test, debug, and perform integration tests of software components. Participate in software design reviews, code reviews, and project planning. Write and review documentation and technical blog posts. Contribute to department attainment of organizational objectives and high customer satisfaction.
Required education Bachelor's Degree
Preferred education Bachelor's Degree
Required technical and professional expertise
Minimum 6 years of hands-on experience developing AI-based applications using Python. 2+ years in Performance testing and Reliability testing. 2+ years of experience using deep learning frameworks (TensorFlow, PyTorch, or Keras). Solid understanding of ML/AI concepts: EDA, preprocessing, algorithm selection, machine learning frameworks, model efficiency metrics, model monitoring. Familiarity with Natural Language Processing (NLP) techniques. Deep understanding of Large Language Model (LLM) architectures, their capabilities and limitations. Proven expertise in integrating and working with LLMs to build robust AI solutions. Skilled in crafting effective prompts to guide LLMs to provide desired outputs. Hands-on experience with LLM frameworks such as LangChain, LangGraph, CrewAI, etc. Experience in LLM application development based on the Retrieval-Augmented Generation (RAG) concept, familiarity with vector databases, and fine-tuning large language models (LLMs) to enhance performance and accuracy. Proficient in microservices development using Python (Django/Flask or similar technologies; a minimal Flask sketch follows this listing). Experience in Agile development methodologies. Familiarity with platforms like Kubernetes and experience building on top of the native platforms. Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud).
Experience designing, building, and maintaining data processing systems, and working in containerized environments (Docker, OpenShift, k8s). Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders.
Preferred technical and professional experience
Experience with MLOps frameworks (BentoML, Kubeflow, or similar technologies) and exposure to LLMOps. Experience in cost optimisation initiatives. Experience with end-to-end chatbot development, including design, deployment, and ongoing optimization, leveraging NLP and integrating with backend systems and APIs. Understanding of security and ethical best practices for data and model development. Contributions to open source projects.
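Since the expertise above includes Flask-based microservices for serving AI capabilities, here is a generic, non-IBM-specific sketch of such a wrapper service; generate_answer() is a hypothetical stand-in for an LLM or RAG call, and the routes and port are illustrative only.

```python
"""Minimal sketch: a Flask microservice exposing a placeholder generation call,
with a health probe for containerized platforms. All names are illustrative."""
from flask import Flask, jsonify, request

app = Flask(__name__)


def generate_answer(question: str) -> str:
    """Stand-in for a call to an LLM or RAG pipeline."""
    return f"Echo (replace with model output): {question}"


@app.route("/v1/answer", methods=["POST"])
def answer():
    payload = request.get_json(silent=True) or {}
    question = payload.get("question", "").strip()
    if not question:
        return jsonify({"error": "field 'question' is required"}), 400
    return jsonify({"answer": generate_answer(question)})


@app.route("/healthz", methods=["GET"])
def health():
    # Liveness probe for Kubernetes / OpenShift deployments.
    return jsonify({"status": "ok"})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```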
Posted 1 month ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.