5.0 years
3 - 10 Lacs
India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media | Seniority level: Mid-Senior level | Min Experience: 5 years | Job Type: Full-time

About The Role
We're seeking a versatile Full Stack Engineer to architect, develop, and deploy modern web applications that bridge user experiences with intelligent backend systems. This role sits at the intersection of product engineering and machine learning, requiring strong command of front-end interfaces, API development, and cloud-native deployment. You'll work cross-functionally to deliver secure, scalable, and responsive platforms that power real-time AI workflows and intuitive user interfaces.

What You'll Own
💻 Front-End Engineering: Build responsive, modular, and accessible UIs using ReactJS, JavaScript, and TypeScript. Implement reusable components, client-side routing, and state management using modern React practices (hooks, context, Redux). Optimize user interactions, performance, and cross-browser compatibility.
🧠 Back-End & API Architecture: Develop and maintain scalable APIs using Django and Django REST Framework. Integrate Python-based deep learning or inference pipelines into backend workflows. Structure back-end services with robust security, authentication (JWT/OAuth), and validation.
☁️ Cloud & DevOps: Deploy full-stack applications using AWS services (e.g., EC2, S3, Lambda, RDS). Automate infrastructure and CI/CD pipelines using Docker, GitHub Actions, or similar tools. Implement logging, monitoring, and health checks for production-grade systems.
🔍 Code Quality & Collaboration: Collaborate with cross-functional teams (Data Science, DevOps, Product) to build cohesive platforms. Write clean, well-documented, and testable code. Participate in code reviews, performance profiling, and security audits.

What You Bring
✅ Must-Have Skills: 3–5 years of full-stack development experience in a product-driven environment. Expert proficiency with ReactJS, component libraries, and UI state management. Solid backend skills in Python, Django, and DRF for RESTful services. Experience deploying applications on AWS and working with services like EC2, Lambda, S3, and RDS. Familiarity with Docker, CI/CD pipelines, and version control via Git. Strong grasp of relational databases like MySQL or PostgreSQL.
🌟 Nice-to-Have: Prior exposure to integrating AI/ML models or inference engines within web apps. Knowledge of scalable architecture patterns and asynchronous processing. Familiarity with serverless frameworks, container orchestration, or GraphQL APIs.

Key Technologies: ReactJS, JavaScript/TypeScript, Python, Django/DRF, AWS, Docker, CI/CD Pipelines, MySQL/PostgreSQL, Git, REST APIs
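The back-end items above centre on Django REST Framework services secured with JWT. As an illustrative, non-authoritative sketch (it assumes the djangorestframework and djangorestframework-simplejwt packages are installed and configured; the view name, URL, and payload are hypothetical), a protected endpoint might look like this:

```python
# Minimal sketch: a JWT-protected DRF endpoint (assumes djangorestframework and
# djangorestframework-simplejwt are installed and wired up in settings.py).
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from rest_framework_simplejwt.authentication import JWTAuthentication


class InferenceStatusView(APIView):
    """Hypothetical endpoint exposing the state of a backend ML job."""
    authentication_classes = [JWTAuthentication]
    permission_classes = [IsAuthenticated]

    def get(self, request, job_id):
        # A real service would query a model store or task queue; this is a stub.
        payload = {"job_id": job_id, "status": "completed", "owner": request.user.username}
        return Response(payload)
```

Token issuance would typically be handled by simplejwt's TokenObtainPairView registered in urls.py; the view here only illustrates how a JWT-authenticated request reaches request.user.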
Posted 3 weeks ago
3.0 years
3 - 10 Lacs
India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media | Seniority level: Mid-Senior level | Min Experience: 3 years | Job Type: Full-time

About The Role
We are looking for a highly skilled Computer Vision Engineer with deep expertise in Generative AI to join our cutting-edge visual computing team. In this role, you'll design and deploy AI-driven systems that enable photorealistic image generation, object placement, and scene modification, transforming how users visualize furniture in real-world environments. This is an opportunity to build next-gen visual intelligence tools using the latest in diffusion models, rendering pipelines, and large-scale AI systems.

Key Responsibilities
🎨 Generative Vision Modeling: Develop and refine diffusion-based generative models for tasks including object insertion, transformation, and high-resolution rendering. Solve challenges in furniture placement, replacement, and scene consistency, delivering ultra-realistic outputs across diverse environments.
🔬 AI Research & Implementation: Stay at the forefront of visual AI, implementing novel architectures in Stable Diffusion, DDPM, NeRF, GANs, or transformer-based generation. Translate state-of-the-art research into production-grade models optimized for speed and accuracy.
🧠 Model Development & Deployment: Architect, train, and deploy scalable models using PyTorch and GPU-accelerated workflows. Optimize pipelines for real-time inference, memory efficiency, and photorealistic rendering.
🔁 Collaboration & Integration: Work cross-functionally with design, product, and engineering teams to develop AI features that enhance user experience. Collaborate with DevOps to package and serve models via APIs and microservices in live environments.

Technical Requirements
✅ Core Expertise: 3–8 years of experience in Computer Vision, Deep Learning, or Generative AI. Proven experience working on image generation, object detection, and replacement with generative techniques. Mastery of PyTorch, with deep understanding of diffusion models and training paradigms.
🔧 Toolset & Frameworks: Hands-on experience with Stable Diffusion, GANs, NeRF, or similar architectures. Strong understanding of 3D rendering pipelines, including lighting, shading, and texture mapping. Proficiency in CUDA for accelerated model training and deployment.
📊 Additional Skills: Experience with large datasets, preprocessing, and augmentation techniques. Background in image synthesis, segmentation, or scene understanding is a plus.

Soft Skills & Mindset
Strong problem-solving abilities with a research-meets-product mindset. Ability to communicate complex concepts clearly and effectively to technical and non-technical teams. Team-first attitude with a drive to experiment, innovate, and iterate fast.

Tech Keywords: Diffusion Models, Generative AI, PyTorch, Stable Diffusion, NeRF, GANs, Computer Vision, Photorealistic Rendering, CUDA, Object Placement, 3D Vision, Image Synthesis
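The listing's core task, diffusion-based object insertion for furniture placement, can be sketched with an off-the-shelf inpainting pipeline. This is a minimal illustration using Hugging Face diffusers (an assumed toolkit; the checkpoint name and file paths are placeholders, not the client's actual stack):

```python
# Minimal sketch: object insertion via diffusion inpainting with Hugging Face diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed public checkpoint, illustrative only
    torch_dtype=torch.float16,
).to("cuda")

room = Image.open("living_room.png").convert("RGB").resize((512, 512))
mask = Image.open("sofa_region_mask.png").convert("RGB").resize((512, 512))  # white = region to fill

result = pipe(
    prompt="a modern grey fabric sofa, photorealistic, consistent lighting",
    image=room,
    mask_image=mask,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
result.save("room_with_sofa.png")
```

A production system would add mask generation, lighting and scale harmonization, and scene-consistency checks around this call; the sketch only shows the generative step.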
Posted 3 weeks ago
4.0 years
12 - 20 Lacs
Pune, Maharashtra, India
On-site
About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders and carefully chosen peers like you! People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We're looking for a Data Scientist in Pune to drive insights for pharma clients using advanced ML, Gen AI, and LLMs on complex healthcare data. You'll optimize pharma commercial strategies (forecasting, marketing, SFE) and improve patient outcomes (journey mapping, adherence, RWE).

Key Responsibilities
Data Exploration & Problem Framing: Proactively engage with client/business stakeholders (e.g., Sales, Marketing, Market Access, Commercial Operations, Medical Affairs, Patient Advocacy teams) to deeply understand their challenges and strategic objectives. Explore, clean, and prepare large, complex, and sometimes messy datasets from various sources, including but not limited to sales data, prescription data, claims data, Electronic Health Records (EHRs), patient support program data, CRM data, and real-world evidence (RWE) datasets. Translate ambiguous business problems into well-defined data science questions and develop appropriate analytical frameworks.
Advanced Analytics & Model Development: Design, develop, validate, and deploy robust statistical models and machine learning algorithms (e.g., predictive models, classification, clustering, time series analysis, causal inference, natural language processing). Develop models for sales forecasting, marketing mix optimization, customer segmentation (HCPs, payers, pharmacies), sales force effectiveness (SFE) analysis, incentive compensation modelling, and market access analytics (e.g., payer landscape, formulary impact). Analyze promotional effectiveness and patient persistency/adherence. Build models for patient journey mapping, patient segmentation for personalized interventions, treatment adherence prediction, disease progression modelling, and identifying drivers of patient outcomes from RWE. Contribute to understanding patient behavior, unmet needs, and the impact of interventions on patient health.
Generative AI & LLM Solutions: Extract insights from unstructured text data (e.g., clinical notes, scientific literature, sales call transcripts, patient forum discussions). Summarize complex medical or commercial documents. Automate content generation for internal use (e.g., draft reports, competitive intelligence summaries). Enhance data augmentation or synthetic data generation for model training. Develop intelligent search or Q&A systems for commercial or medical inquiries. Apply techniques like prompt engineering, fine-tuning of LLMs, and retrieval-augmented generation (RAG).
Insight Generation & Storytelling: Transform complex analytical findings into clear, concise, and compelling narratives and actionable recommendations for both technical and non-technical audiences. Create impactful data visualizations, dashboards, and presentations using tools like Tableau, Power BI, or Python/R/Alteryx visualization libraries.
Collaboration & Project Lifecycle Management: Collaborate effectively with cross-functional teams including product managers, data engineers, software developers, and other data scientists. Support the entire data science lifecycle, from conceptualization and data acquisition to model development, deployment (MLOps), and ongoing monitoring in production environments.

Qualifications
Master's or Ph.D. in Data Science, Statistics, Computer Science, Applied Mathematics, Economics, Bioinformatics, Epidemiology, or a related quantitative field.
4+ years of progressive experience as a Data Scientist, with demonstrated success in applying advanced analytics to solve business problems, preferably within the healthcare, pharmaceutical, or life sciences industry, using pharma datasets extensively (e.g., sales data from Iqvia, Symphony, Komodo, etc., and CRM data from Veeva, OCE, etc.).
Must-have: Solid understanding of pharmaceutical commercial operations (e.g., sales force effectiveness, marketing, market access, CRM).
Must-have: Experience working with real-world patient data (e.g., claims, EHR, pharmacy data, patient registries) and understanding of patient journeys.
Strong programming skills in Python (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and/or R for data manipulation, statistical analysis, and machine learning.
Expertise in SQL for data extraction, manipulation, and analysis from relational databases.
Experience with machine learning frameworks and libraries.
Proficiency in data visualization tools (e.g., Tableau, Power BI) and/or visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is a significant advantage.
Specific experience with Natural Language Processing (NLP) techniques, Generative AI models (e.g., Transformers, diffusion models), Large Language Models (LLMs), and prompt engineering is highly desirable.
Experience with fine-tuning LLMs, working with models from Hugging Face, or utilizing major LLM APIs (e.g., OpenAI, Anthropic, Google).
Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Docker, Kubernetes).
Knowledge of pharmaceutical or biotech industry regulations and compliance requirements such as HIPAA, CCPA, SOC, etc.
Excellent communication, presentation, and interpersonal skills, with the ability to effectively interact with both technical and non-technical stakeholders at all levels.
Attention to detail, with a bias for quality and client centricity.
Ability to work independently and as part of a cross-functional team.
Strong leadership, mentoring, and coaching skills.

Benefits
Competitive salary and benefits package. Opportunity to work on cutting-edge analytics projects transforming the life sciences industry. Collaborative and supportive work environment. Opportunities for professional development and growth.

Skills: data manipulation, analytics, llm, generative ai, commercial pharma, mlops, sql, python, natural language processing, data visualization, models, r, machine learning, statistical analysis, genai, data, patient outcomes
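One of the modelling tasks named above is treatment adherence prediction. A minimal, hedged sketch with scikit-learn on synthetic claims-style features (the feature names and data are invented for illustration, not a real pharma dataset) could look like:

```python
# Minimal sketch: a treatment-adherence classifier on synthetic claims-style features.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "prior_fills": rng.poisson(4, n),
    "copay": rng.gamma(2.0, 15.0, n),
    "comorbidities": rng.integers(0, 6, n),
})
# Synthetic label: adherence rises with prior fills and falls with copay.
logit = 0.4 * df["prior_fills"] - 0.03 * df["copay"] + 0.1 * df["comorbidities"] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

A real pipeline would add feature engineering from claims/EHR sources, probability calibration, and fairness and compliance checks; the sketch only shows the core modelling step.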
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Systems Engineering

General Summary:
Qualcomm's Audio Systems team is seeking a talented and highly motivated engineer specialized in the implementation of Voice AI and Audio solutions. You will work with a team to prototype, optimize, and productize state-of-the-art ML models, ensuring efficient deployment on Snapdragon platforms.

Responsibilities:
Develop, optimize, and deploy Voice AI and audio ML models for audio applications, with a focus on inference efficiency across NPUs, GPUs, and CPUs. Perform model evaluation, quantization, and compression to enable fast, robust inference on embedded hardware. Analyze and compare model architectures (such as Diffusion Models, U-Nets, Transformers, BERT, BART, etc.) for use in audio applications. Collaborate with cross-functional R&D, systems, and integration teams for system use case verification and commercialization support. Contribute to the design and software implementation of audio ML models in embedded C/C++ and Python. Evaluate system performance, debug, and optimize for performance and robustness. Track industry trends, benchmark and analyze the performance of various model architectures, and bring up-to-date architectural or technical innovations to the team.

Requirements:
Strong programming skills in C/C++ and Python. Experience with audio processing and embedded solutions. Hands-on experience working with audio frameworks and audio solutions on any platform. Familiarity with ML frameworks (PyTorch, TensorFlow, ONNX, etc.). Knowledge of model quantization and compression techniques, and experience optimizing inference and deployment on embedded hardware. Strong understanding of ML model architectures such as CNNs, RNNs, Transformers, and U-Nets, and of statistical modeling techniques. Understanding of DSP or microcontroller architectures and frameworks. Experience developing and debugging software on embedded platforms; familiarity with software design patterns, multi-threaded programming (e.g., POSIX, PTHREADS), and fixed-point coding. Excellent verbal and written communication skills; ability to work independently and as a team player in geographically dispersed, multidisciplinary teams. Proven ability to work in a dynamic, multi-tasked environment: a quick learner, self-motivated, and results-driven.

Minimum Qualifications:
Bachelor's, Master's, or PhD in Computer Science, Electronics and Communication, Electrical Engineering, or a related field (or equivalent work experience).

Preferred Qualifications:
Experience working with Qualcomm AI HW accelerators (NPUs) and Qualcomm SDKs. Knowledge of Qualcomm Audio framework, platforms, and tools.

Minimum Qualifications:
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities.
We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3078103
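The posting emphasizes quantization and compression for efficient embedded inference. As a generic illustration only (the model is a stand-in, and actual Snapdragon deployment would go through vendor SDKs rather than this call), post-training dynamic quantization in PyTorch looks like:

```python
# Minimal sketch: post-training dynamic quantization of a small audio classifier in PyTorch.
import torch
import torch.nn as nn

class TinyAudioNet(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyAudioNet().eval()
# Quantize the Linear layers' weights to int8 for faster, smaller CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)  # one frame of (fake) log-mel features
with torch.no_grad():
    print(model(x).argmax(-1), quantized(x).argmax(-1))
```

In practice, static or quantization-aware approaches, operator fusion, and hardware-specific export steps usually follow; the sketch shows only the simplest quantization path.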
Posted 3 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Job Title: Catastrophe Risk Modeller (Natural Perils – EQ, Cyclone, Rainfall)
Location: Remote / Bengaluru / Mumbai
Type: Full-time | Hybrid (open to part-time for very senior consultants)

Role Overview:
We are seeking experienced Catastrophe Risk Modellers to play a foundational role in developing BhoomiSure's in-house stochastic and parametric risk models for Earthquake, Cyclone, and Rainfall events. You will collaborate with actuaries, underwriters, technologists, and reinsurers to develop scientifically robust models that support our parametric and portfolio-level insurance products.

Key Responsibilities:
Develop stochastic event sets for tropical cyclones, earthquakes, and excess rainfall using historical, satellite, and reanalysis datasets. Build and calibrate hazard intensity footprints and vulnerability models tailored to Indian and regional geographies. Provide loss estimation frameworks for event-based and probabilistic scenarios across various asset classes (property, infrastructure, agriculture, etc.). Validate, backtest, and benchmark models using historical catastrophe events and publicly available loss databases. Collaborate with actuarial and product teams to support pricing, structuring, and reinsurance placements. Create a catastrophe loss database for underwriting, regulatory, and capital modeling purposes (e.g., PMLs, AALs, Return Period Losses). Support the development of parametric indices by linking physical event parameters to modeled or observed loss estimates. Prepare technical documentation and contribute to regulatory filings and reinsurance submissions. Act as a Subject Matter Expert (SME) in discussions with reinsurers and technical partners.

Required Skills & Tools:
Strong background in catastrophe risk modeling, climatology, or geophysical hazard analysis. Hands-on experience with CAT modeling tools such as RMS, AIR, CoreLogic, or Oasis LMF. Proficiency in Python or R for data analysis and model development. Experience with GIS tools (e.g., QGIS, ArcGIS) and working with raster/geospatial datasets. Familiarity with reanalysis datasets (e.g., ERA5, MERRA-2) and global hazard databases (e.g., USGS, NOAA IBTrACS, GPM). Deep understanding of event simulation, hazard intensity metrics, vulnerability modeling, and financial loss estimation. Experience estimating Probable Maximum Loss (PML) and Average Annual Loss (AAL).

Preferred / Good-to-Have Skills:
Exposure to open catastrophe modeling platforms (Oasis, CAPRA, etc.). Experience with parametric insurance triggers or index-based products. Familiarity with machine learning, Bayesian inference, or ensemble forecasting for hazard modeling. SQL/database design experience for modeling data pipelines. Knowledge of regulatory frameworks like IFRS 17 or Solvency II.

Qualifications:
Education: Master's or Ph.D. in any of the following fields: Earth Sciences / Atmospheric Sciences, Applied Mathematics / Physics, Catastrophe Modelling / Geophysics, Actuarial Science (with CAT risk focus), or Environmental Engineering / Remote Sensing / Data Science (with geo-hazard specialization).
Professional Credentials (Preferred): Certified Catastrophe Risk Analyst (CCRA); Associate or Fellow of IFoA, CAS, or IAI with relevant experience; GARP SCR Certification; published research or open-source contributions in hazard or catastrophe modeling.
Experience: 3–5 years in catastrophe modeling or hazard research; 7+ years for senior positions at re/insurers, modeling firms, consulting organizations, or national disaster centers. Experience with Asia/India-specific perils is a strong plus.

Why Join Us?
Build models from the ground up with full innovation freedom. Help shape parametric solutions for high-impact, climate-vulnerable regions. Collaborate with leading reinsurers and satellite data partners. Competitive compensation, ESOPs, and research-driven culture.

If you're passionate about using science and technology to solve real-world climate risks, we'd love to hear from you.
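Two of the metrics named above, Average Annual Loss (AAL) and return-period losses (a PML proxy), fall directly out of a simulated year-loss table. A small sketch with synthetic losses (the frequency and severity parameters are invented purely for illustration):

```python
# Minimal sketch: AAL and return-period losses from a synthetic year-loss table.
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000

# Synthetic year-loss table: Poisson event counts, lognormal severities.
annual_losses = np.array([
    rng.lognormal(mean=2.0, sigma=1.2, size=rng.poisson(0.8)).sum()
    for _ in range(n_years)
])

aal = annual_losses.mean()                 # Average Annual Loss
for rp in (100, 250, 500):                 # aggregate exceedance (AEP) losses
    loss_rp = np.quantile(annual_losses, 1 - 1 / rp)
    print(f"{rp}-year loss: {loss_rp:,.1f}")
print(f"AAL: {aal:,.1f}")
```

Occurrence-based metrics (OEP) would use the maximum single-event loss per year rather than the annual aggregate; a production framework would also propagate hazard, vulnerability, and financial-structure uncertainty.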
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Join us as an "AVP Research" and help our clients make the best investment decisions. Our Research vision is best described as "differentiation and integration": we produce proprietary products, differentiated market analysis and actionable investment ideas for sophisticated investors around the globe, integrated across research disciplines, asset classes and geographies.

To be successful as an "AVP Research", the primary responsibility of the research data scientist is to use alternative data and data science methods to inform financial research, collaboratively with finance domain experts. The secondary responsibility is to continue developing our methods and infrastructure for producing this research, and thereby increase our productivity over time. This is mainly done by encapsulating repeatable analysis in software which can be shared with the team. Another secondary responsibility is to work on longer-term projects to improve our capabilities, e.g. by developing neural language models on text, innovating new bias adjustment methods, and similar. You may be assessed on the key critical skills relevant for success in the role. Data scientists need strong interpersonal skills: they will work closely with global team members and analysts and will need to act professionally and communicate technical concepts clearly to both technical and non-technical audiences in both written and oral communications.

Basic/Essential Qualifications
Collaborate on short (typically a few weeks) research projects for publication on Barclays' research platform. Onboard new data sets and write software to make them usable. Inform analysis designs, especially with regard to causal and statistical inference. Understand and apply your understanding of selection bias in alternative data sets. Apply ML methods tactically, improving research deliverables without slowing down the research process. Ideate and execute novel methods for longer-term projects (typically a few months) with high novelty and potential impact on financial research.

Desirable Skillsets/Good To Have
Strong data analysis and ML skills. A basic understanding of data pipelining and automation, with experience using PySpark on large data sets (over 1B data points) and SQL for data extraction. Strong understanding of the application of statistics to research design. Strong communication skills, especially if evidenced by past writing (e.g. blog posts, articles, etc.). Strong skills with causal and statistical inference, including observational causal designs. Past experience with large-scale text analysis or geolocation data analysis. Experience in quantitative finance.

This role will be based out of Nirlon Knowledge Park, Mumbai. This role is deemed a Certified role under the PRA & UK Financial Conduct Authority - Individual Accountabilities Regulations and may require the role holder to hold mandatory regulatory qualifications or the minimum qualifications to meet internal company benchmarks.

Purpose of the role
To produce and deliver Research with differentiated market insights and actionable ideas to Barclays Clients.

Accountabilities
Analysis of market, sector, corporate and/or economic data to help develop investment theses for your coverage universe to produce best-in-class Research. Research may range from individual company or sector notes through to long-dated thematic reports. Presentation of Research views to Barclays Clients; this can be through direct, face-to-face and virtual interactions, Research-hosted events and written communications. Engagement with Markets, Client Strategy and other stakeholders to raise awareness of your Research both to Clients and internally. Prioritise interaction with the most relevant and valuable Clients for your Research. Provision of insights and Research views to internal Clients to help them navigate financial markets and risks. Collaboration with the Supervisory Analyst, Compliance and other stakeholders to ensure Research is produced and delivered to Clients and internal stakeholders in a compliant manner.

Assistant Vice President Expectations
To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information ('complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience). Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
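The desirable skills mention PySpark over billion-row alternative datasets. A hedged sketch of the kind of aggregation involved (the S3 paths and column names are hypothetical, not a Barclays dataset):

```python
# Minimal sketch: aggregating a large card-transaction panel into a weekly spend signal.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("alt-data-research").getOrCreate()

txns = spark.read.parquet("s3://bucket/card_transactions/")  # assumed ~1B+ rows

weekly_spend = (
    txns.filter(F.col("merchant_sector") == "retail")
        .groupBy(F.date_trunc("week", F.col("txn_date")).alias("week"), "ticker")
        .agg(
            F.sum("amount").alias("spend"),
            F.countDistinct("card_id").alias("active_cards"),
        )
        .orderBy("week")
)
weekly_spend.write.mode("overwrite").parquet("s3://bucket/research/weekly_spend/")
```

Normalizing spend by active_cards is one crude first step toward handling panel drift, one form of the selection bias the role calls out; serious bias adjustment would go well beyond this.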
Posted 3 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary / Job Description
Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips' extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds!

Responsibilities
In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.

Requirements
Master's/Bachelor's degree in computer science or equivalent. 2-4 years of relevant work experience in software development. Strong understanding of Generative AI models (LLMs, LVMs, LMMs) and their building blocks (self-attention, cross-attention, KV caching, etc.). Understanding of floating-point and fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (CPU/GPU/NPU). Strong C/C++ programming, design patterns, and OS concepts. Good scripting skills in Python. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications
Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development, and familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and languages like OpenCL and CUDA is a plus.

Minimum Qualifications
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process.
Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3077588
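The requirements call out KV caching as a building block of efficient LLM inference. A minimal sketch of greedy decoding with an explicit key/value cache in Hugging Face transformers (GPT-2 is a stand-in; an on-device runtime applies the same idea with its own kernels and memory layout):

```python
# Minimal sketch: token-by-token decoding reusing cached attention keys/values,
# so each step only processes the newly generated token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

input_ids = tok("Edge inference is", return_tensors="pt").input_ids
past = None
generated = input_ids

with torch.no_grad():
    for _ in range(20):
        out = model(input_ids=input_ids, past_key_values=past, use_cache=True)
        past = out.past_key_values               # cached K/V for all previous tokens
        next_id = out.logits[:, -1, :].argmax(-1, keepdim=True)
        generated = torch.cat([generated, next_id], dim=-1)
        input_ids = next_id                      # only the new token is fed next step

print(tok.decode(generated[0]))
```

Without the cache, every step would recompute attention over the full prefix; with it, per-token cost stays roughly constant, which is why KV cache size and placement dominate memory planning on edge hardware.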
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
• Design and implement predictive models and machine learning algorithms to solve healthcare-specific challenges
• Analyze large, complex healthcare datasets including electronic health records (EHR) and claims data
• Develop statistical models for patient risk stratification, treatment optimization, population health management, and revenue cycle optimization
• Build models for clinical decision support, patient outcome prediction, care quality improvement, and revenue cycle optimization
• Create and maintain automated data pipelines for real-time analytics and reporting
• Work with healthcare data standards (HL7 FHIR, ICD-10, CPT, SNOMED CT) and ensure regulatory compliance
• Develop and deploy models in cloud environments while creating visualizations for stakeholders
• Present findings and recommendations to cross-functional teams including clinicians, product managers, and executives

Qualifications required:
• Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field
• At least 2 years of hands-on experience in data science, analytics, or machine learning roles
• Demonstrated experience working with large datasets and statistical modeling
• Proficiency in Python or R for data analysis and machine learning
• Experience with SQL and database management systems
• Knowledge of machine learning frameworks such as scikit-learn, TensorFlow, PyTorch
• Familiarity with data visualization tools such as Tableau, Power BI, matplotlib, ggplot2
• Experience with version control systems (Git) and collaborative development practices
• Strong foundation in statistics, hypothesis testing, and experimental design
• Experience with supervised and unsupervised learning techniques
• Knowledge of data preprocessing, feature engineering, and model validation
• Understanding of A/B testing and causal inference methods

What You'll Need to Be Successful (Required Skills):
• Large Language Model (LLM) Experience: At least 2 years of hands-on experience working with pre-trained language models (GPT, BERT, T5), including fine-tuning, prompt engineering, and model evaluation techniques
• Generative AI Frameworks: Proficiency with generative AI libraries and frameworks such as Hugging Face Transformers, LangChain, OpenAI API, or similar platforms for building and deploying AI applications
• Prompt Engineering and Optimization: Experience designing, testing, and optimizing prompts for various use cases including text generation, summarization, classification, and conversational AI applications
• Vector Databases and Embeddings: Knowledge of vector similarity search, embedding models, and vector databases (Pinecone, Weaviate, Chroma) for building retrieval-augmented generation (RAG) systems
• AI Model Evaluation: Experience with evaluation methodologies for generative models including BLEU scores, ROUGE metrics, human evaluation frameworks, and bias detection techniques
• Multi-modal AI Systems: Familiarity with multi-modal generative models combining text, images, and other data types, including experience with vision-language models and cross-modal applications
• AI Safety and Alignment: Understanding of responsible AI practices including content filtering, bias mitigation, hallucination detection, and techniques for ensuring AI outputs align with business requirements and ethical guidelines
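Several of the required skills (embedding models, vector similarity search, RAG) can be illustrated with a tiny in-memory retriever. A sketch using sentence-transformers (the model name and document snippets are illustrative; a vector database such as those named above would replace the in-memory index at scale):

```python
# Minimal sketch: embedding-based retrieval for a RAG-style prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Prior authorization is required for brand-name statins.",
    "Members may appeal a denied claim within 60 days.",
    "Telehealth visits are covered at the same rate as in-person visits.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")       # assumed small embedding model
doc_emb = model.encode(docs, normalize_embeddings=True)

query = "How long do I have to appeal a rejected claim?"
q_emb = model.encode([query], normalize_embeddings=True)

scores = doc_emb @ q_emb.T                             # cosine similarity (vectors normalized)
context = docs[int(np.argmax(scores))]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The assembled prompt would then be sent to an LLM; evaluation of the generated answers (ROUGE, human review, hallucination checks) is the separate step the listing also asks about.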
Posted 3 weeks ago
3.0 years
0 Lacs
Panaji, Goa, India
On-site
About the Project
We are seeking a highly skilled and pragmatic AI/ML Engineer to join the team building "a Stealth Prop-tech startup," a groundbreaking digital real estate platform in Dubai. This is a complex initiative to build a comprehensive ecosystem integrating long-term sales, short-term stays, and advanced technologies including AI/ML, data analytics, Web3/blockchain, and conversational AI. You will be responsible for operationalizing the machine learning models that power our most innovative features, ensuring they are scalable, reliable, and performant. This is a crucial engineering role in a high-impact project, offering the chance to build the production infrastructure for cutting-edge AI in the PropTech space.

Job Summary
As an AI/ML Engineer, you will bridge the gap between data science and software engineering. You will be responsible for taking the models developed by our data scientists and deploying them into our production environment. Your work will involve building robust data pipelines, creating scalable training and inference systems, and developing the MLOps infrastructure to monitor and maintain our models. You will collaborate closely with data scientists, backend developers, and product managers to ensure our AI-driven features are delivered efficiently and reliably to our users.

Key Responsibilities
Design, build, and maintain scalable infrastructure for training and deploying machine learning models at scale. Operationalize ML models, including the "TruValue UAE" AVM and the property recommendation engine, by creating robust, low-latency APIs for production use. Develop and manage data pipelines (ETL) to feed our machine learning models with clean, reliable data for both training and real-time inference. Implement and manage the MLOps lifecycle, including CI/CD for models, versioning, monitoring for model drift, and automated retraining. Optimize the performance of machine learning models for speed and cost-efficiency in a cloud environment. Collaborate with backend engineers to seamlessly integrate ML services with the core platform architecture. Work with data scientists to understand model requirements and provide engineering expertise to improve model efficacy and feasibility. Build the technical backend for the AI-powered chatbot, integrating it with NLP services and the core platform data.

Required Skills and Experience
3-5+ years of experience in a Software Engineering, Machine Learning Engineering, or related role. A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Strong software engineering fundamentals with expert proficiency in Python. Proven experience deploying machine learning models into a production environment on a major cloud platform (AWS, Google Cloud, or Azure). Hands-on experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Experience building and managing data pipelines using tools like Apache Airflow, Kubeflow Pipelines, or cloud-native solutions. Ability to collaborate with cross-functional teams to integrate AI solutions into products. Experience with cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration (Kubernetes).

Preferred Qualifications
Experience in the PropTech (Property Technology) or FinTech sectors is highly desirable. Direct experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, AWS SageMaker, Google AI Platform). Familiarity with big data technologies (e.g., Spark, BigQuery, Redshift). Experience building real-time machine learning inference systems. Strong understanding of microservices architecture. Experience working in a collaborative environment with data scientists.
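The responsibilities include exposing models such as the "TruValue UAE" AVM behind low-latency APIs. A minimal FastAPI sketch (the model file, feature schema, and route are hypothetical placeholders, not the platform's actual interface):

```python
# Minimal sketch: a low-latency valuation endpoint serving a pre-trained AVM model.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="TruValue AVM (sketch)")
model = joblib.load("avm_model.joblib")  # hypothetical scikit-learn regressor trained offline


class Listing(BaseModel):
    bedrooms: int
    size_sqft: float
    floor: int
    community_index: float


@app.post("/v1/valuations")
def predict(listing: Listing):
    features = [[listing.bedrooms, listing.size_sqft, listing.floor, listing.community_index]]
    price = float(model.predict(features)[0])
    return {"estimated_price_aed": round(price, 2)}
```

Run locally with `uvicorn app:app`; in production this kind of service would sit behind the platform's API gateway with monitoring, versioned models, and drift checks around it.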
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Prismforce
Prismforce is a Vertical SaaS company revolutionizing the Talent Supply Chain for global Technology, R&D/Engineering, and IT Services companies. Our AI-powered product suite enhances business performance by enabling operational flexibility, accelerating decision-making, and boosting profitability. Our mission is to become the leading industry cloud/SaaS platform for tech services and talent organizations worldwide.

Job Description
Role: Data Scientist
Reporting to: Lead AI/ML
Location: Mumbai/Bangalore/Pune/Kolkata

Job Brief
We are looking for Data Scientists to build the data products at the core of a SaaS company disrupting the skills market. You will help create and evolve the analytical culture of the organization, experiment with existing analytical techniques, improve existing algorithms to solve the problems at hand, innovate new algorithms to disrupt the industry, and play a critical role in solving the problems at hand.

Responsibilities
Identify valuable data sources and automate collection processes. Undertake preprocessing of structured and unstructured data. Analyze large amounts of information to discover trends and patterns. Build predictive models and machine-learning algorithms. Combine models through ensemble modeling. Present information using data visualization techniques. Propose solutions and strategies to business challenges. Collaborate with engineering and product development teams.

Requirements
Bachelor's degree in a highly quantitative field (e.g., Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience. Extensive machine learning and algorithmic background with deep expertise in Gen AI and Natural Language Processing (NLP) techniques, along with a strong understanding of supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, and network graph analysis. Advanced knowledge of NLP methods, including text generation, sentiment analysis, named entity recognition, and language modelling. Strong math skills, including proficiency in statistics, linear algebra, and probability, with the ability to apply these concepts in Gen AI and NLP solutions. Proven problem-solving aptitude with the ability to apply NLP and Gen AI tools to real-world business challenges. Excellent communication skills with the ability to translate complex technical information, especially related to Gen AI and NLP, into clear insights for non-technical stakeholders. Fluency in at least one data science/analytics programming language (e.g., Python, R, Julia), with expertise in NLP and Gen AI libraries like TensorFlow, PyTorch, Hugging Face, or OpenAI tools. Start-up experience is a plus, with ideally 5-8 years of advanced analytics experience in startups or marquee companies, particularly in roles leveraging Gen AI and NLP for product or business innovations.

Required Skills
Machine Learning, Deep Learning, Algorithms, Computer Science, Engineering, Operations Research, Math Skills, Communication Skills, SAAS Product, IT Services, Artificial Intelligence, ERP, Product Management, Automation, Analytical Models, Predictive Models, NLP, Forecasting Models, Product Development, Leadership, Problem Solving, Unsupervised Learning, Reinforcement Learning, Natural Language Processing, Algebra, Data Science, Programming Language, Python, Julia.

What Makes Us Unique
First-Mover Advantage: We are the only Vertical SaaS product company addressing Talent Supply Chain challenges in the IT services industry. Innovative Product Suite: Our solutions offer forward-thinking features that outshine traditional ERP systems. Strategic Expertise: Guided by an advisory board of ex-CXOs from top global IT firms, providing unmatched industry insights. Experienced Leadership: Our founding team brings deep expertise from leading firms like McKinsey, Deloitte, Amazon, Infosys, TCS, and Uber. Diverse and Growing Team: We have grown to 160+ employees across India, with hubs in Mumbai, Pune, Bangalore, and Kolkata. Strong Financial Backing: Series A-funded by Sequoia, with global IT companies using our product as a core solution.

Why Join Prismforce
Competitive Compensation: We offer an attractive salary and benefits package that rewards your contributions. Innovative Projects: Work on pioneering projects with cutting-edge technologies transforming the Talent Supply Chain. Collaborative Environment: Thrive in a dynamic, inclusive culture that values teamwork and innovation. Growth Opportunities: Continuous learning and development are core to our philosophy, helping you advance your career. Flexible Work: Enjoy flexible work arrangements that balance your work-life needs. By joining Prismforce, you'll become part of a rapidly expanding, innovative company that's reshaping the future of tech services and talent management.

Perks & Benefits
Work with the best in the industry: a high-pedigree leadership team that will challenge you, build on your strengths and invest in your personal development. Insurance coverage: Group Mediclaim cover for self, spouse, kids and parents, and a Group Term Life Insurance Policy for self. Flexible policies. Retiral benefits. Hybrid work model. Self-driven career progression tool.
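One responsibility above is combining models through ensemble modeling. A small, generic sketch with scikit-learn's VotingClassifier on synthetic data (illustrative only, not a Prismforce pipeline):

```python
# Minimal sketch: soft-voting ensemble of three heterogeneous classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities rather than hard labels
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```

Stacking or blending would be the next step when the base models have complementary error profiles; the voting ensemble is the simplest form of the idea.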
Posted 3 weeks ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: AI Lead – Generative AI & ML Systems
Key Responsibilities
Generative AI Development: Design and implement LLM-powered solutions and generative AI models for use cases such as predictive analytics, automation workflows, anomaly detection, and intelligent systems.
RAG & LLM Applications: Build and deploy Retrieval-Augmented Generation (RAG) pipelines, structured generation systems, and chat-based assistants tailored to business operations (a minimal retrieval sketch follows this listing).
Full AI Lifecycle Management: Lead the complete AI lifecycle—from data ingestion and preprocessing to model design, training, testing, deployment, and continuous monitoring.
Optimization & Scalability: Develop high-performance AI/LLM inference pipelines, applying techniques such as quantization, pruning, batching, and model distillation to support real-time and memory-constrained environments.
MLOps & CI/CD Automation: Automate training and deployment workflows using Terraform, GitLab CI, GitHub Actions, or Jenkins, integrating model versioning, drift detection, and compliance monitoring.
Cloud & Deployment: Deploy and manage AI solutions on AWS, Azure, or GCP with containerization tools such as Docker and Kubernetes.
AI Governance & Compliance: Ensure model and data governance and adherence to regulatory and ethical standards in production AI deployments.
Stakeholder Collaboration: Work cross-functionally with product managers, data scientists, and engineering teams to align AI outputs with real-world business goals.
Required Skills & Qualifications
Bachelor’s degree (B.Tech or higher) in Computer Science, IT, or a related field is required.
8–12 years of overall experience in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) solution development.
Minimum 2+ years of hands-on experience in Generative AI and LLM-based solutions, including prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) pipelines with full CI/CD integration, monitoring, and observability, delivered as an independent contributor.
Proven expertise in both open-source and proprietary Large Language Models (LLMs), including LLaMA, Mistral, Qwen, GPT, Claude, and BERT.
Expertise in C/C++ and Python programming with relevant ML/DL libraries, including TensorFlow, PyTorch, and Hugging Face Transformers.
Experience deploying scalable AI systems in containerized environments using Docker and Kubernetes.
Deep understanding of the MLOps/LLMOps lifecycle, including model versioning, deployment automation, performance monitoring, and drift detection.
Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and DevOps for ML workflows.
Working knowledge of Infrastructure-as-Code (IaC) tools such as Terraform for cloud resource provisioning and reproducible ML pipelines.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
Experience designing and documenting High-Level Design (HLD) and Low-Level Design (LLD) for ML/GenAI systems, covering data pipelines, model serving, vector search, and observability layers, with documentation including component diagrams, network architecture, CI/CD workflows, and tabulated system designs.
Experience provisioning and managing ML infrastructure using Terraform, including compute clusters, vector databases, and LLM inference endpoints across AWS, GCP, and Azure.
Experience beyond notebooks: shipped models with logging, tracing, rollback mechanisms, and cost-control strategies, with hands-on ownership of production-grade LLM workflows rather than experimentation alone.
Preferred Qualifications (Good To Have)
Experience with LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI APIs, or building modular LLM agent workflows.
Exposure to multi-agent orchestration, tool-augmented reasoning, or autonomous AI agents and agentic communication patterns with orchestration.
Experience deploying ML/GenAI systems in regulated environments, with established governance, compliance, and Responsible AI frameworks.
Familiarity with AWS data and machine learning services, including Amazon SageMaker, AWS Bedrock, ECS/EKS, and AWS Glue, for building scalable, secure data pipelines and deploying end-to-end AI/ML workflows.
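For a sense of what the RAG responsibilities in this listing reduce to in code, here is a minimal retrieval sketch in Python. It is illustrative only and assumes the sentence-transformers and faiss-cpu packages; the encoder name, sample documents, and the final LLM call are placeholders rather than a prescribed stack.

```python
# Minimal RAG retrieval step: embed documents, index them, retrieve context,
# and assemble a grounded prompt. Illustrative only; not a production pipeline.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 3 business days.",
    "Refund requests must include the original order ID.",
    "Support is available Sunday through Thursday.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder encoder
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])      # inner product on unit vectors = cosine
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

query = "How long does invoice processing take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # the prompt would then go to whichever hosted or self-served LLM is in use
```

In a production pipeline, the chunking strategy, managed vector store, and observability hooks described in the listing replace the in-memory pieces shown here.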
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Applied Research Center [Emerging Areas]
Advanced AI [SLM, Inference Scaling, Synthetic Data, Distributed Learning, Agentic AI, ANI]
New Interaction Models [Spatial computing, Mixed Reality, 3D visualizations, New Experiences]
Platforms and Protocols [Architecting and engineering for Performance, Uptime, Low-latency, Scalability, Efficiency, Data, Interoperability and Low cost, Beckn, CDPI]
Cybersecurity [Ethical hacking, Threat Mgmt, Supply chain security & risk, Cyber Resilience]
Quantum [Quantum AI, Stack, Simulation & Optimization, Cryptography, Valued use cases]
Autonomous Machines [Humanoids, Industrial Robots, Drones, Smart Products]
Emerging Research [Brain, AGI, Space, Semicon]
1. Emerging Tech Trends Research - Research on emerging tech trends, the ecosystem of players, use cases and their applicability and impact to client businesses. Scan and curate startups, universities and tech partnerships needed and create the innovation ecosystem. Rapidly design and develop PoCs in emerging tech areas. Share design specifications with other team members, get the components developed, integrate and test. Build reusable components and develop PoCs using relevant startups and open-source solutions.
2. Thought Leadership - Develop showcases that demonstrate how emerging technologies can be applied in a business context, and demo scenarios for the IP. Contribute towards patents, tier-1 publications, whitepapers, and blogs in the relevant emerging tech area. Get certified on the emerging technology and frameworks.
3. Applied Research Center Activities - Contribute to high-level design, development, testing and implementation of new proofs of concept in emerging tech areas.
4. Problem Definition, Requirements - Understand technical requirements and define detailed design. Analyze the reusable components to map the given requirement to existing implementation and identify needs for enhancements.
5. IP Development - Develop program-level design and modular components to implement the proposed design. Design and develop reusable components. Ensure compliance with coding standards, secure coding, and KM guidelines while developing the IP.
6. Innovation Consulting - Understand client requirements and implement first-of-kind solutions using emerging tech expertise. Customize and extend IP for client-specific features.
7. Talent Management - Mentor the team and help them acquire the identified emerging tech skills. Participate in demo sessions and hackathons.
8. Emerging Tech Startup Ecosystem - Work with startups in providing innovative solutions to client problems and augmenting Infosys offerings.
Technical Competencies
Advanced theoretical knowledge in specific domain
Experimental design and methodology expertise
Data analysis and interpretation skills
Prototype development capabilities
Research tool proficiency relevant to domain
Soft Skills and Attributes
Collaborative mindset for cross-disciplinary research
Communication skills for knowledge dissemination
Creative problem-solving approach
Intellectual curiosity and innovation focus
Commercial awareness for translational research
Posted 3 weeks ago
8.0 years
3 - 4 Lacs
Hyderābād
On-site
About Providence
Providence, one of the US’s largest not-for-profit healthcare systems, is committed to high quality, compassionate healthcare for all. Driven by the belief that health is a human right and the vision, ‘Health for a better world’, Providence and its 121,000 caregivers strive to provide everyone access to affordable quality care and services. Providence has a network of 51 hospitals, 1,000+ care clinics, senior services, supportive housing, and other health and educational services in the US.
Providence India is bringing to fruition the transformational shift of the healthcare ecosystem to Health 2.0. The India center will have focused efforts around healthcare technology and innovation, and play a vital role in driving digital transformation of health systems for improved patient outcomes and experiences, caregiver efficiency, and running the business of Providence at scale.
Why Us?
Best In-class Benefits
Inclusive Leadership
Reimagining Healthcare
Competitive Pay
Supportive Reporting Relation
How is this team contributing to the vision of Providence?
The Marketing Analytics team empowers the Marketing and Digital Experience (MDeX) team with actionable, data-driven insights and measurement tools that drive impactful decisions, identify business opportunities, maximize performance, and create a competitive advantage. We achieve this by understanding our data and business/market context, partnering with MDeX to enhance the use of analytical tools, delivering timely and accurate reports and insights, and telling compelling stories about our patients, business, and experiences through advanced data storytelling, including visualization.
What will you be responsible for?
As a Principal Data Scientist, you will be responsible for developing effective, high-quality healthcare program integrity analytics that meet business requirements. In addition, your responsibilities include:
Collaborate with stakeholders to develop models that inform marketing strategies, audience targeting, and channel prioritization.
Contribute to product features for marketing technologies.
Demonstrate strong strategic thought leadership, an innovative mindset, and communication, collaboration, storytelling, critical thinking, and problem-solving skills.
Bring experience working with GenAI, with expertise in fine-tuning transformer-based models/LLMs (GPT, Llama, PaLM, BERT) and RAG models.
Fine-tune Large Language Models (GPT/PaLM/Llama) to meet specific business requirements.
Perform advanced statistical analyses to identify patterns, trends, and opportunity assessments that assist in delivering optimal marketing investments and decision making.
Build and validate predictive models with advanced machine learning techniques and tools to drive business value, interpreting and presenting modeling and analytical results to technical and business stakeholders.
Develop LLM solutions on customer data, such as RAG architectures on enterprise knowledge repositories, querying structured data with natural language, and content generation.
Build, scale, and optimize customer data science workloads and apply best-in-class MLOps to productionize these workloads across a variety of domains.
Advise data teams on data science topics such as architecture, tooling, and best practices.
Strong communicator: able to collaborate cross-functionally with the Strategy, Product, and Engineering teams to define priorities and influence the product roadmap.
Design data visualizations and determine the best way to present data in a clear, understandable format using reports, drilldowns, tables, gauges, graphs, charts, and other intuitive graphical add-ons.
Develop and implement machine learning models using a variety of techniques (supervised and unsupervised learning models, including NLP, deep learning models, and predictive analytics).
Ensure accuracy of data and deliverables of reporting employees with comprehensive policies and processes.
Manage and optimize processes for data intake, validation, mining, and engineering as well as modelling, visualization, and communication deliverables.
Prepare and deliver results to leadership with analytic insights, interpretations, and recommendations.
Understand data storage and data sharing methods. Healthcare and marketing domain business knowledge will be a plus.
Strong proficiency in Python, PySpark, and SQL, with experience in machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
Deep expertise in traditional as well as modern statistical and ML techniques such as regression, support vector machines, regularization, boosting, random forests, XGBoost, and other ensemble methods.
Proficiency in developing NLP models using NLTK, spaCy, Gensim, Word2Vec, Seq2Seq, transformers, BERT, etc.
Prior hands-on experience in analysing large and complex data sets, data reliability analysis, quality controls, and data processing, with a focus on model validation practices.
What would your week look like?
Responsible for end-to-end ownership of data science use cases, from outlining the business problem, to exploring various solutions, to building, deploying, and evaluating the solution to yield high business value and customer satisfaction.
Who are we looking for?
14+ years of professional work experience, preferably in management consulting or high-growth start-ups (ideally in healthcare), and 8-12+ years of experience in a data analytics and data science role.
Bachelor's degree in mathematics, statistics, healthcare administration, or related field. Master's degree advantageous.
5+ years of experience in Python, AI & ML.
Designing, developing, and implementing AI/generative AI models and algorithms to solve complex problems and drive innovation across the organization.
Lead all stages of AI/ML solution implementation: gathering business requirements, understanding data requirements for the solution build and any constraints (data/business), data exploration and solution design, machine learning model development, and active collaboration with the model risk team to ensure high-quality model deployment and minimize enterprise risk.
Lead the implementation of AI solutions to deliver business impact with focus on value, success criteria alignment, scalability, and operationalization.
Collaborating with cross-functional teams to define project requirements and objectives, ensuring alignment with overall business goals for integration, sign-off and deploying machine learning models into production. Developing clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders. Engage team members, project managers & business stakeholders in the analysis and interpretation of experimentation results & ensuring feedback is incorporated as appropriate into models. Drive best practices throughout development process and publish learnings/feedback for continuous learning. Lead/drive and accelerate innovations in discovery phase via insights, frameworks, causal inference solutions and machine learning prototypes via POCs. Refine standards and processes for AI solution development & implementation in close collaboration with data science leaders and team in the US. Ensure adherence to the industry / enterprise standards and best practices. Develop and institutionalize best practices and re-usable components, contribute to research and experimentation efforts. Lead, coach, support, and mentor data scientists in the team review their work as required, provide adequate guidance, feedback to help them achieve their goals and do right for Enterprise. Participate in talent acquisition activities to build strong talent pool of Data Scientists. Providence’s vision to create ‘Health for a Better World’ aids us to provide a fair and equitable workplace for all in our employment, whether temporary, part-time or full time, and to promote individuality and diversity of thought and background, and acknowledge its role in the organization’s success. This makes us committed towards equal employment opportunities, regardless of race, religion or belief, color, ancestry, disability, marital status, gender, sexual orientation, age, nationality, ethnic origin, pregnancy, or related needs, mental or sensory disability, HIV Status, or any other category protected by applicable law. In furtherance to our mission in building a more inclusive and equitable environment, we shall, from time to time, undertake programs to assist, uplift and empower underrepresented groups including but not limited to Women, PWD (Persons with Disabilities), LGTBQ+ (Lesbian, Gay, Transgender, Bisexual or Queer), Veterans and others. We strive to address all forms of discrimination or harassment and provide a safe and confidential process to report any misconduct. Contact our Integrity hotline also, read our Code of Conduct.
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Team The Data Science team at Navi plays a central role in building intelligent solutions that power our products and drive business impact. We work across key domains such as Lending, Collections, KYC, UPI Growth and Ads recommendation, applying advanced machine learning/deep learning/GenAI techniques to solve high-impact challenges. Our work involves a diverse range of data types—including text, image, and tabular data—and we closely collaborate with cross-functional teams like Product, Business, Engineering, etc. to deliver end-to-end solutions. About the Role As a Data Scientist 3 at Navi, you’ll be an integral part of a team that’s building scalable and efficient solutions across lending, insurance, investments, and UPI. You won’t just be solving predefined problems - you’ll identify the problems and solve for getting high business impact. This role offers the opportunity to lead high-impact data science initiatives that directly influence strategic business decisions. It involves working at the intersection of advanced machine learning, large-scale data processing, and cross-functional collaboration, with a focus on building scalable solutions and fostering a culture of technical excellence and innovation within the team. What We Expect From You Strategic Project Leadership: Drive end-to-end delivery of high-impact data science initiatives, from conception and prototyping to large-scale deployment, ensuring alignment with business goals and measurable outcomes. Stakeholder Management & Cross-Functional Leadership: Proactively engage and influence stakeholders across Product, Engineering, Business, and Executive teams, shaping problem definitions and championing data-driven decision making. Technical Depth & Thought Leadership : Demonstrate deep expertise across machine learning and deep learning concepts, setting technical direction and solving highly complex, ambiguous problems. Advanced Data Solutions: Architect and deliver solutions across diverse data types—tabular, text, image, and speech—leveraging cutting-edge models and best practices to drive business value at scale. Innovation & Research: Lead experimentation with emerging algorithms and technologies, fostering a culture of innovation and staying ahead of industry trends, especially in the fintech domain. Robust Model Governance: Establish and uphold best practices for model validation, monitoring, performance tracking, and risk management in production environments, ensuring reliability and compliance. Mentorship & Team Development: Mentor and guide junior team members, facilitate technical reviews, and actively contribute to a culture of knowledge sharing, continuous learning, and technical excellence within the team. Must Haves Bachelor’s or higher degree in Computer Science, Electrical Engineering, or a quantitative field, with a demonstrated research mindset and 5+ years of relevant experience Deep understanding of modern machine learning techniques, underlying mathematics , and hands-on experience with key ML frameworks (such as scikit-learn, Keras, TensorFlow, PySpark, MLlib, etc.) Strong foundation in statistics, machine learning (e.g., Random Forests, XGBoost, SVMs), and inference techniques, with expertise in hypothesis testing, simulations, and optimization methodologies. 
Proven experience working with distributed computing platforms for large-scale data processing and model training Strong programming skills, preferably in Python , Scala, or similar languages Demonstrated ability to drive large-scale machine learning projects end to end—from problem formulation and data preparation to deployment and monitoring Proven track record of delivering high business impact through data science solutions in previous roles Experience leading projects with multiple team members and/or mentoring junior colleagues is preferred Inside Navi We are shaping the future of financial services for a billion Indians through products that are simple, accessible, and affordable. From Personal & Home Loans to UPI, Insurance, Mutual Funds, and Gold — we’re building tech-first solutions that work at scale, with a strong customer-first approach. Founded by Sachin Bansal & Ankit Agarwal in 2018, we are one of India’s fastest-growing financial services organisations. But we’re just getting started! Our Culture The Navi DNA Ambition. Perseverance. Self-awareness. Ownership. Integrity. We’re looking for people who dream big when it comes to innovation. At Navi, you’ll be empowered with the right mechanisms to work in a dynamic team that builds and improves innovative solutions. If you’re driven to deliver real value to customers, no matter the challenge, this is the place for you. We chase excellence by uplifting each other—and that starts with every one of us. Why You'll Thrive at Navi At Navi, it’s about how you think, build, and grow. You’ll thrive here if: You’re impact-driven : You take ownership, build boldly, and care about making a real difference. You strive for excellence : Good isn’t good enough. You bring focus, precision, and a passion for quality. You embrace change : You adapt quickly, move fast, and always put the customer first.
Posted 3 weeks ago
0 years
4 - 9 Lacs
Chennai
On-site
Job Summary
Strategic and leadership-level GenAI role spanning AI solution architecture, model evaluation and selection, enterprise-grade RAG systems, security and governance, cost optimization, advanced model engineering, and team leadership (detailed under Responsibilities below).
Responsibilities
Strategic & Leadership-Level GenAI Skills
1. AI Solution Architecture - Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software).
2. Model Evaluation & Selection - Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Understanding trade-offs: latency, cost, accuracy, context length. Using tools like LM Evaluation Harness, the OpenLLM Leaderboard, etc.
3. Enterprise-Grade RAG Systems - Designing Retrieval-Augmented Generation pipelines. Using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex. Optimizing chunking, embedding strategies, and retrieval quality.
4. Security, Privacy & Governance - Implementing data privacy, access control, and audit logging. Understanding risks: prompt injection, data leakage, model misuse. Aligning with frameworks like NIST AI RMF, the EU AI Act, or ISO/IEC 42001.
5. Cost Optimization & Monitoring - Estimating and managing GenAI inference costs. Using observability tools (e.g., Arize, WhyLabs, PromptLayer). Token usage tracking and prompt optimization.
Advanced Technical Skills
6. Model Fine-Tuning & Distillation - Fine-tuning open-source models using PEFT, LoRA, QLoRA. Knowledge distillation for smaller, faster models. Using tools like Hugging Face, Axolotl, or DeepSpeed (a minimal LoRA sketch follows this listing).
7. Multi-Agent Systems - Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph). Task decomposition, memory, and tool orchestration.
8. Toolformer & Function Calling - Integrating LLMs with external tools, APIs, and databases. Designing tool-use schemas and managing tool routing.
Team & Product Leadership
9. GenAI Product Thinking - Identifying use cases with high ROI. Balancing feasibility, desirability, and viability. Leading GenAI PoCs and MVPs.
10. Mentoring & Upskilling Teams - Training developers on prompt engineering, LangChain, etc. Establishing GenAI best practices and code reviews. Leading internal hackathons or innovation sprints.
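Item 6 in this listing (fine-tuning with PEFT/LoRA/QLoRA) can be sketched briefly. The snippet below assumes the Hugging Face transformers and peft libraries; the base checkpoint and target modules are illustrative stand-ins and would change with the chosen model family.

```python
# Attach LoRA adapters to a small causal LM for parameter-efficient fine-tuning.
# Sketch only: dataset preparation and the training loop are omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"                    # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                      # adapter rank: lower = cheaper, higher = more capacity
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # attention projections; names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()            # typically well under 1% of total weights
# Training then proceeds with a standard Trainer or custom loop on the adapted model.
```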
Posted 3 weeks ago
12.0 years
4 - 4 Lacs
Srīperumbūdūr
On-site
Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary.
Job Summary
To support our extraordinary teams who build great products and contribute to our growth, we’re looking to add a Manager - Warehouse. The position will be based in Sriperumbudur, Chennai.
What a typical day looks like:
Maintains receiving, warehousing, and distribution operations by initiating, coordinating, and enforcing program, operational, and personnel policies and procedures.
Complies with local warehousing, material handling, and shipping requirements by studying existing and new legislation; enforcing adherence to requirements; advising management on needed actions.
Safeguards warehouse operations and contents by establishing and monitoring security procedures and protocols.
Controls inventory levels by conducting physical counts and reconciling with the data storage system.
Maintains physical condition of the warehouse by planning and implementing new design layouts; inspecting equipment; issuing work orders for repair and requisitions for replacement.
Achieves financial objectives by preparing an annual budget; scheduling expenditures; analyzing variances; initiating corrective actions.
Monitors volume of business and determines the appropriate work schedule.
Sets productivity targets and establishes the necessary controls to ensure objectives are met.
Monitors condition and maintenance of operating / material handling equipment.
Constantly monitors the efficient handling of shipments and receipts to create an error-free culture.
Reviews the loading and unloading schedules to maximize efficiencies and reduce expenses.
Coordinates floor cycle counts, physical inventory, and reconciliation of records.
The experience we’re looking to add to our team:
Typically requires a Bachelor's degree or equivalent experience and extensive knowledge of purchasing policies and practices, in addition to 12+ years of materials experience with advanced experience using MRP systems.
Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry.
Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations.
Demonstrates expert functional, technical, and people and/or process management skills as well as customer (external and internal) relationship skills.
Demonstrates detailed expertise in a very complex functional/technical area or broad breadth of knowledge in multiple areas; understands the strategic impact of the function across sites.
Ability to effectively present information to management and customers.
Master’s degree preferred.
Here are a few examples of what you’ll get for the great work you provide:
Health Insurance
PTO
#LP17
Job Category: Global Procurement & Supply Chain
Required Skills:
Optional Skills: Logistics, Warehouse Management
Flex pays for all costs associated with the application, interview, or offer process; a candidate will not be asked for any payment related to these costs. Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities.
We do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).
Posted 3 weeks ago
0 years
1 - 3 Lacs
Ghaziabad
Remote
Company Overview We are an innovative company developing license-based AI vision software for industrial automation and robotics, deployed on edge devices like the ARM64 boards. Our solutions leverage state-of-the-art object detection models (e.g., YOLOv8) to deliver cutting-edge vision capabilities for industries such as manufacturing, logistics, and robotics. Job Summary We are seeking a motivated .NET Developer Intern with full-stack development skills and an interest in AI vision software for industrial applications. The intern will contribute to the development, deployment, and licensing of AI vision software running on ARM64 boards, focusing on object detection using models like YOLOv8. The role involves building secure, license-protected applications using C#/.NET, integrating with Python-based AI models, and ensuring compatibility with Linux and ARM64 architectures. Responsibilities Develop full-stack applications using C#/.NET (ASP.NET Core, Blazor, or WPF) for AI vision software, including web-based dashboards and APIs for industrial use cases. Implement and integrate license-based software locking mechanisms (e.g., hardware-tied, time-based, or feature-based licenses) using .NET libraries like Portable.Licensing or similar tools. Collaborate with the AI team to integrate YOLOv8 object detection models (developed in Python) into .NET applications using ONNX Runtime or ML.NET for inference on ARM64. Optimize software for deployment on Linux-based ARM64 platforms, ensuring compatibility with ARM64. Assist in developing user interfaces for industrial vision applications, ensuring seamless interaction with object detection outputs. Write clean, maintainable, and well-documented code following best practices. Participate in testing, debugging, and performance optimization of AI vision software. Stay updated on industry trends in industrial vision and robotics, including advancements from companies like SICK AG, Cognex, and AI vision startups. Required Qualifications Education : Pursuing a Bachelor’s/Master’s degree in Computer Science, Software Engineering, or a related field. Programming Skills : Proficiency in C# and .NET (preferably .NET Core/.NET 5/6/7) for full-stack development. Experience with Python for AI model integration (e.g., PyTorch, YOLOv8). Familiarity with web development technologies (HTML, CSS, JavaScript, Blazor, or ASP.NET Core). AI Vision Knowledge : Basic understanding of computer vision concepts (e.g., object detection, image processing) and familiarity with models like YOLOv8. Licensing Knowledge : Understanding of software licensing mechanisms (e.g., hardware-based, time-based) and experience with cryptographic libraries (e.g., System.Security.Cryptography in .NET or Python’s cryptography). Linux and ARM64 : Basic knowledge of Linux environments and ARM64 architecture, particularly for deploying software on NVIDIA ecosystem. Problem-Solving : Strong analytical and problem-solving skills with a keen attention to detail. Teamwork : Ability to work collaboratively in a fast-paced, cross-functional team environment. Good-to-Have Qualifications Experience with C++ for performance-critical tasks, such as optimizing YOLOv8 inference with NVIDIA TensorRT or OpenCV. Familiarity with NVIDIA’s ecosystem (e.g., JetPack SDK, DeepStream, TensorRT) for AI vision deployment. Knowledge of computer vision libraries like OpenCV or EmguCV. Experience with cloud platforms (e.g., Azure, AWS) for license management or data integration. 
Understanding of industrial vision and robotics applications. Exposure to software obfuscation tools (e.g., Dotfuscator, Pyarmor) for protecting license-based software. Why Join Us? Gain hands-on experience in developing cutting-edge AI vision software for industrial applications. Work on real-world projects involving YOLOv8, ARM64, and license-based software solutions. Collaborate with a team of experts in AI, computer vision, and industrial automation. Opportunity to learn from industry trends, including those set by leaders like SICK AG, Cognex, and innovative startups. Flexible work environment with opportunities for growth and learning. Job Types: Full-time, Part-time, Permanent, Fresher, Internship, Volunteer Pay: ₹8,393.49 - ₹25,634.39 per month Benefits: Paid sick time Work from home Location Type: In-person Schedule: Day shift Evening shift Monday to Friday Rotational shift Work Location: In person Speak with the employer +91 8448116056
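As a hedged illustration of the YOLOv8-to-.NET handoff this internship describes, the Python side of the workflow could export the detector to ONNX so the .NET application can run it through ONNX Runtime. This sketch assumes the ultralytics package; the checkpoint name and image size are examples only.

```python
# Export a YOLOv8 checkpoint to ONNX for consumption by ONNX Runtime in .NET.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                           # pretrained nano checkpoint as a stand-in
onnx_path = model.export(format="onnx", imgsz=640)   # writes an .onnx file and returns its path
print(f"Exported model to {onnx_path}")
# On the C# side, the file can be loaded with Microsoft.ML.OnnxRuntime
# (e.g., new InferenceSession(onnxPath)) on ARM64 Linux targets.
```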
Posted 3 weeks ago
6.0 years
60 - 65 Lacs
India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MLOps, Python, Scalability, VectorDBs, FAISS, Pinecone/ Weaviate/ FAISS/ ChromaDB, Elasticsearch, Open search Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
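The Crop.Photo role above mentions deploying ML inference services with FastAPI. A minimal, illustrative endpoint of that kind is sketched below; the request schema and the empty search body are placeholders, not the product's actual API.

```python
# Skeleton FastAPI service for semantic asset search. Illustrative only:
# a real service would load models once at startup and add auth, batching,
# and observability around the vector lookup.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AssetQuery(BaseModel):
    text: str
    top_k: int = 5

@app.post("/search")
def search(query: AssetQuery) -> dict:
    # Placeholder: embed query.text and look it up in the vector index here.
    return {"query": query.text, "top_k": query.top_k, "results": []}

# Run locally with: uvicorn service:app --reload
```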
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Team The Data Science team at Navi plays a central role in building intelligent solutions that power our products and drive business impact. We work across key domains such as Lending, Collections, KYC, UPI Growth and Ads recommendation, applying advanced machine learning/deep learning/GenAI techniques to solve high-impact challenges. Our work involves a diverse range of data types—including text, image, and tabular data—and we closely collaborate with cross-functional teams like Product, Business, Engineering, etc. to deliver end-to-end solutions. About the Role As a Data Scientist 2 at Navi, you’ll be an integral part of a team that’s building scalable and efficient solutions across lending, insurance, investments, and UPI. You won’t just be solving predefined problems - you’ll help define them, working hands-on across a variety of domains. In this role, you would be expected to lead projects and create real business impact for Navi. You’ll have the opportunity to apply cutting-edge techniques to real-world challenges, while collaborating closely with cross-functional teams to deliver measurable business impact. This isn’t just a role - it’s a chance to contribute to the future of fintech through innovative, high-ownership work that makes a visible difference. What We Expect From You Design, develop, and deploy end-to-end data science solutions that address complex business problems across lending, insurance, investments, and payments. Collaborate with cross-functional teams including product, engineering , and business to identify opportunities for data-driven impact. Work with diverse data modalities such as tabular data, text, audio, image, and video to build predictive models and intelligent systems. Continuously explore and implement state-of-the-art techniques in machine learning, deep learning, NLP, computer vision, and Generative AI. Drive experimentation and rapid prototyping to validate hypotheses and scale successful models to production. Monitor, evaluate, and refine model performance over time, ensuring reliability and alignment with business goals. Contribute to building a strong data science culture by sharing best practices, mentoring peers, and actively participating in knowledge-sharing sessions. Must Haves Bachelor's or Master's in Engineering or equivalent. 2+ years of Data Science/Machine Learning experience. Strong knowledge in statistics, tree-based techniques (e.g., Random Forests, XGBoost), machine learning (e.g., MLP, SVM), inference, hypothesis testing, simulations, and optimizations. Bonus: Experience with deep learning techniques. Strong Python programming skills and experience in building Data Pipelines in PySpark , along with feature engineering. Proficiency in pandas, scikit-learn, Scala, SQL, and familiarity with TensorFlow/PyTorch . Understanding of DevOps/MLOps, including creating Docker containers and deploying to production (using platforms like Databricks or Kubernetes). Inside Navi We are shaping the future of financial services for a billion Indians through products that are simple, accessible, and affordable. From Personal & Home Loans to UPI, Insurance, Mutual Funds, and Gold — we’re building tech-first solutions that work at scale, with a strong customer-first approach. Founded by Sachin Bansal & Ankit Agarwal in 2018, we are one of India’s fastest-growing financial services organisations. But we’re just getting started! Our Culture The Navi DNA Ambition. Perseverance. Self-awareness. Ownership. Integrity. 
We’re looking for people who dream big when it comes to innovation. At Navi, you’ll be empowered with the right mechanisms to work in a dynamic team that builds and improves innovative solutions. If you’re driven to deliver real value to customers, no matter the challenge, this is the place for you. We chase excellence by uplifting each other—and that starts with every one of us. Why You'll Thrive at Navi At Navi, it’s about how you think, build, and grow. You’ll thrive here if: You’re impact-driven : You take ownership, build boldly, and care about making a real difference. You strive for excellence : Good isn’t good enough. You bring focus, precision, and a passion for quality. You embrace change : You adapt quickly, move fast, and always put the customer first.
Posted 3 weeks ago
4.0 - 5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description We are looking for an Arabic speaking GenAI Specialist to join our global team. Responsibilities Develop Next-Gen Generative AI Models: Research, design, and fine-tune state-of-the-art models like Llama, GPT, Stable Diffusion, and VAEs to drive business impact. Enhance AI Capabilities: Implement and optimize LLM fine-tuning, prompt engineering, and AI safety mechanisms to improve model efficiency and usability. Integrate AI into Products/ Services: Work closely with business and engineers to embed generative AI solutions into real-world applications, ensuring seamless user experiences. Optimize for Scale & Performance: Build and refine scalable ML pipelines for efficient model deployment, inference, and real-time adaptation. Stay Ahead of AI Trends: Keep pace with emerging technologies, model architectures, and industry breakthroughs to drive continuous innovation. Communicate complex AI concepts to technical and non-technical audiences, ensuring alignment with business strategies. Key Skills required At least 4-5 years of experience in Data Science domain with strong exposure to Generative AI. Proficiency in Arabic with strong speaking and writing skills. Experience using Machine and Deep Learning languages, preferably Python and R, to manipulate data and draw insights from large data sets. Hands-on experience with LLMs, multimodal AI, and diffusion models for applications in text, image, or speech generation. Strong proficiency in TensorFlow, PyTorch, Hugging Face Transformers, and OpenAI APIs and custom fine tuning of LLMs and AIops toolkits. Familiarity with cloud platforms (AWS, GCP, Azure), model deployment strategies, and GPU optimization. Experience building data pipelines and data centric applications using distributed storage platforms in a production setting Ability to work with large-scale datasets, vector embeddings, and retrieval-based AI systems.
Posted 3 weeks ago
7.0 years
0 Lacs
Delhi, India
Remote
About HighLevel: HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers users across industries to streamline operations, drive growth, and crush their goals. HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages 470 terabytes of data distributed across five databases, operates with a network of over 250 micro-services, and supports over 1 million domain names. Our People With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home. Our Impact Every month, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen. Learn more about us on our YouTube Channel or Blog Posts About the Role: We're seeking a Staff Engineer to lead the development of LLM-powered AI agents and next-gen Generative AI systems that drive core product functionality and customer-facing automation at scale. This is a high-autonomy, high-impact role for someone who thrives at the intersection of applied AI, agent design, and core data science. You'll build foundational models, retrieval systems, and dynamic agents that interact with millions of users — powering personalized communication, intelligent scheduling, smart replies, and much more. We are looking for builders who can take projects from research and experimentation to production and iteration, and who bring strong data science rigour alongside hands-on GenAI experience. 
Roles & Responsibilities Architect and deploy autonomous AI agents that execute workflows across sales, messaging, scheduling, and operations Build and fine-tune LLMs (open-source and API-driven) tailored to HighLevel's unique data and customer use cases Develop robust retrieval-augmented generation (RAG) systems and vector search infrastructure to enable context-rich, real-time generation Design and iterate on prompt engineering, context construction, and agent tool usage strategies using frameworks like LangChain Apply core data science methods — modeling, A/B testing, scoring, clustering, and time-series forecasting — to enhance agent intelligence and broader product features Partner with backend, infra, and product teams to build reusable, scalable GenAI infrastructure: model serving, prompt versioning, logging, evals, and feedback loops Continuously evaluate and monitor agent performance, hallucination rates, and real-world effectiveness using rigorous experimentation frameworks Influence HighLevel's AI roadmap while mentoring engineers and contributing to technical standards and best practices Requirements: 7+ years of experience in Data Science, Machine Learning, or Applied AI, with a track record of delivering production-grade models and systems Hands-on expertise with LLMs: fine-tuning, prompt engineering, function-calling agents, embeddings, and evaluation techniques Strong experience in building retrieval-augmented generation (RAG) systems using vector databases (e.g., FAISS, Pinecone, Weaviate) Experience working in cloud-native environments (GCP, AWS) and deploying models with frameworks like PyTorch, Transformers (HF), and MLOps tools Experience with LangChain or similar agent orchestration frameworks; ability to design multi-step, tool-augmented agents Proficiency in Python, with strong engineering practices (CI/CD, testing, versioning) and familiarity with TypeScript Solid foundation in core data science: supervised and unsupervised learning, causal inference, statistical testing, segmentation, and time-series forecasting Proven experience taking ML/AI solutions from prototype to production, including monitoring, observability, and model iteration Ability to work independently and collaboratively, leading initiatives and mentoring peers in a fast-paced, cross-functional environment Strong product sense and communication skills—able to translate between technical constraints and product goals EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
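Among the core data science methods this role lists is A/B testing of agent changes. A small, hedged sketch of a two-proportion significance check is shown below; it assumes statsmodels, and the counts are invented example numbers.

```python
# Two-proportion z-test comparing conversion between a control and a variant.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]    # successes observed in control vs. variant
exposures = [5000, 5000]    # users shown each experience

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is unlikely to be noise at the 5% level.")
```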
Posted 3 weeks ago
3.0 years
0 Lacs
India
On-site
Role: Machine Learning Engineer – Large Language Models Roles And Responsibilities: Design, develop, and deploy large-scale language models for a range of NLP tasks such as text generation, summarization, question answering, and sentiment analysis. Fine-tune pre-trained models (e.g., GPT, BERT, T5) on domain-specific data to optimize performance and accuracy. Collaborate with data engineering teams to collect, preprocess, and curate large datasets for training and evaluation. Experiment with model architectures, hyperparameters, and training techniques to improve model efficiency and performance. Develop and maintain pipelines for model training, evaluation, and deployment in a scalable and reproducible manner. Implement and optimize inference solutions to ensure models are performant in production environments. Monitor and evaluate model performance in production, making improvements as needed. Document methodologies, experiments, and findings to share with stakeholders and other team members. Stay current with advancements in LLMs, NLP, and machine learning, and apply new techniques to existing projects. Collaborate with product managers to understand project requirements and translate them into technical solutions. 3+ years of experience in machine learning and natural language processing. Proven experience working with LLMs (such as GPT, BERT, T5, etc.) in production environments Demonstrated experience fine-tuning and deploying large-scale language models. Technical Skills: Proficiency in Python and experience with ML libraries and frameworks such as PyTorch ,TensorFlow, Hugging Face Transformers, etc. Strong understanding of deep learning architectures (RNNs, CNNs, Transformers) and hands-on experience with Transformer-based architectures. Familiarity with cloud platforms (AWS, GCP, Azure) and experience with containerization tools like Docker and orchestration with Kubernetes. Experience with data preprocessing, feature engineering, and data pipeline development. Knowledge of distributed training techniques and optimization methods for handling large datasets. Soft Skills: Excellent communication and collaboration skills, with an ability to work effectively across interdisciplinary teams. Strong analytical and problem-solving skills, with attention to detail and a passion for continuous learning. Ability to work independently and manage multiple projects in a fast-paced, dynamic environment. Preferred Qualifications: Experience with prompt engineering and techniques to maximize the effectiveness of LLMs in various applications. Knowledge of ethical considerations and bias mitigation techniques in language models. Familiarity with reinforcement learning, especially RLHF (Reinforcement Learning from Human Feedback) Experience with model compression and deployment techniques for resource-constrained environments. Contributions to open-source projects or publications in reputable machine learning journals. Professional development opportunities, including access to conferences, workshops, and training programs. A collaborative, inclusive work culture that values innovation and teamwork Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field. Primary skills (Must have): Python PyTorch, TensorFlow, Hugging Face Transformers Familiarity in cloud platforms-AWS, GCP, Azure docker, kubernetes Interview Details: Video screening with HR L1 - Technical Interview L2 - Technical and HR Round Note: Candidate must have own laptop. 
Must follow the Kuwait calendar. Working Hours: 11:30 AM to 7:30 PM. Working days: Sunday to Thursday.
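As a hedged illustration of one NLP task this role lists (summarization), the Hugging Face pipeline API offers the shortest path from checkpoint to output; the model name below is an example, and a production deployment would pin versions and sit behind a proper inference layer.

```python
# Summarize a short passage with a distilled seq2seq checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Large language models are increasingly fine-tuned on domain-specific data "
    "to improve accuracy on tasks such as question answering and summarization, "
    "then deployed behind optimized inference services."
)
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```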
Posted 3 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Overview We are seeking a highly motivated AI Researcher with a Ph.D. in Mathematics or Physics to join our research team. You will contribute to the development of cutting-edge AI models applied to biological data, collaborating with scientists and engineers to drive discovery and innovation. Key Responsibilities Develop, analyze, and optimize advanced machine learning models, including deep learning and probabilistic models, tailored to biomedical applications. Conduct theoretical and applied research bridging mathematics/physics with biological systems and data. Collaborate with domain experts to frame complex biological challenges into solvable AI problems. Design and implement experiments to validate model performance on real-world biological datasets (e.g., genomics, proteomics, medical imaging). Publish findings in top-tier journals and conferences, and contribute to patents and IP development. Stay current with the latest research in AI, applied mathematics, and computational biology. Qualifications Ph.D. in Mathematics, Physics, or a closely related field. Strong background in one or more of the following: statistical mechanics, optimization, information theory, Bayesian inference, or numerical modeling. Solid programming skills in Python and experience with machine learning libraries (e.g., PyTorch, TensorFlow, JAX). Ability to work with large-scale, high-dimensional datasets. Excellent problem-solving and mathematical modeling skills. Strong communication skills and the ability to work in an interdisciplinary team. Preferred Qualifications Experience with AI applications in biology, chemistry, or healthcare. Familiarity with bioinformatics tools and biological data formats (e.g., FASTA, VCF). Previous publications or projects in applied AI research.
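The Bayesian inference background this role calls for can be illustrated with a toy Beta-Binomial update; the prior and the observed counts are arbitrary example values, and only SciPy is assumed.

```python
# Conjugate Beta-Binomial update: posterior over an unknown success rate.
from scipy import stats

alpha_prior, beta_prior = 2.0, 2.0      # weakly informative Beta prior
successes, trials = 37, 100             # e.g., positive outcomes observed in an assay

posterior = stats.beta(alpha_prior + successes, beta_prior + (trials - successes))

print(f"Posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```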
Posted 3 weeks ago
10.0 years
0 Lacs
Greater Kolkata Area
Remote
Job Description
Role: Staff Software Engineer – Machine Learning Platform (GenAI)
Location: India Remote
About The Role
Join Coinbase’s Machine Learning Platform team to build robust infrastructure powering fraud detection, blockchain analytics, and personalized user experiences. You’ll work across all layers of the ML platform – from streaming pipelines and distributed training to real-time inference and developer tooling.
Key Responsibilities
Understand and address the needs of Coinbase’s ML engineers
Mentor junior engineers and elevate engineering quality
Optimize low-latency streaming pipelines for fresh, high-quality model inputs
Maintain high-availability, low-latency ML inference infrastructure (including LLMs)
Build tooling to track data quality and model performance degradation
Lead efforts in scalable distributed ML training systems
What Coinbase Looks For
10+ years of software engineering experience
Expertise in distributed systems, high-performance computing, and clean design
Strong communication and mentorship skills
Ability to work across a diverse tech stack and adapt quickly
Passion for developer experience and system reliability
Nice to Have
Experience building or working directly with ML models or pipelines
Prior platform engineering or developer tooling experience
Familiarity with: Python, Golang, Ray, Spark, Tecton, Airflow, Databricks, Snowflake, DynamoDB
Work Culture
Remote-first, but participation in team/company offsites is expected
High-performance, fast-paced, mission-driven environment
Emphasis on ownership, transparency, and continuous improvement
Posted 3 weeks ago