Knowledge Foundry is a technology company specializing in data analytics and knowledge management solutions. It focuses on leveraging artificial intelligence and machine learning to enhance business decision-making.
Not specified
INR 18.0 - 33.0 Lacs P.A.
Work from Office
Full Time
Job Description:
AI Architect with at least 5 years of experience designing and deploying enterprise-grade AI/ML solutions, specializing in Databricks, MosaicML, and Large Language Models (LLMs). Expertise in building scalable AI architectures that integrate deep learning, natural language processing (NLP), and MLOps best practices to enable real-time decision-making and predictive insights. Proven ability to collaborate with cross-functional teams to operationalize AI models while ensuring data security, scalability, and performance in cloud environments.

Key Qualifications:
- AI/ML Architecture & Deployment: Extensive experience designing and deploying end-to-end AI pipelines using Databricks and MosaicML. Expertise in training, fine-tuning, and inferencing LLMs, ensuring optimal performance and cost efficiency in production environments.
- Databricks Expertise: Strong hands-on experience with Databricks for data processing, model training, and model serving. Proficient with Databricks Workflows, MLflow, Delta Lake, and advanced SQL/Scala/PySpark for building scalable AI pipelines.
- MosaicML Proficiency: Proven expertise in using MosaicML to fine-tune and train foundation models, leveraging model compression and distributed training techniques to reduce latency and improve model performance.
- LLM & NLP Expertise: In-depth knowledge of Large Language Models, including OpenAI GPT models, BERT, LLaMA, and custom fine-tuning techniques. Experienced in building conversational AI, semantic search, text classification, and other NLP applications.
- Model Optimization & MLOps: Experience implementing MLOps pipelines with tools such as MLflow, Kubeflow, and Airflow to automate model training, versioning, deployment, and monitoring. Skilled in optimizing model inference with techniques such as quantization, pruning, and distributed inferencing.
- Data Pipeline & Feature Engineering: Expert in designing feature stores and scalable data pipelines using Apache Spark, Databricks Delta, and AWS Glue to support efficient feature generation, model retraining, and data versioning.
- Cloud AI/ML Infrastructure: Proficient in AWS, Azure, and GCP, with expertise in setting up AI environments using EC2, S3, EKS, Lambda, and SageMaker. Able to design fault-tolerant, highly available AI/ML infrastructure on cloud platforms.
- AI Governance & Security: Experience implementing AI governance frameworks, ensuring compliance with AI ethics, bias mitigation, and explainability requirements. Knowledge of data privacy, GDPR, and model interpretability in enterprise environments.

Preferred Experience:
- Experience with RAG (Retrieval-Augmented Generation) pipelines and semantic search
- Familiarity with vector databases such as Pinecone, FAISS, and Weaviate
- Exposure to LangChain, Hugging Face, and OpenAI APIs
- Experience building custom AI/LLM solutions tailored to enterprise use cases

Technical Skills:
- AI/ML Platforms: Databricks, MosaicML, SageMaker, Azure ML, GCP AI
- Programming & Frameworks: Python, PyTorch, TensorFlow, Hugging Face, PySpark, SQL
- MLOps & Orchestration: MLflow, Kubeflow, Airflow, Docker, Kubernetes, CI/CD
- LLM & NLP Models: GPT-3/4, BERT, LLaMA, Falcon, BLOOM, T5, LangChain
- Databricks Tools: Databricks SQL, Delta Lake, AutoML, Unity Catalog
- Cloud Platforms: AWS (S3, EC2, Lambda), Azure, GCP
- Version Control & Automation: Git, Jenkins, Terraform, CloudFormation

Certifications (preferred but not mandatory):
- Databricks Certified Professional Data Engineer/AI Engineer
- MosaicML Certification in Model Training & Optimization
- AWS Certified Machine Learning - Specialty
- Microsoft Certified: Azure AI Engineer Associate
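As a rough illustration of the retrieval step in the RAG pipelines and semantic search mentioned above, the sketch below ranks documents by cosine similarity over toy embedding vectors. The document ids and vectors are invented for illustration; a production system would obtain embeddings from a real model and store them in a vector database such as FAISS, Pinecone, or Weaviate.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, top_k=2):
    """Return the top_k (score, doc_id) pairs, most similar first."""
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in corpus.items()]
    return sorted(scored, reverse=True)[:top_k]

# Toy 3-dimensional "embeddings" (hypothetical documents).
corpus = {
    "pricing_faq":   [0.9, 0.1, 0.0],
    "onboarding":    [0.1, 0.8, 0.1],
    "security_docs": [0.0, 0.2, 0.9],
}
hits = retrieve([0.85, 0.15, 0.05], corpus, top_k=1)
print(hits[0][1])  # most relevant document id: "pricing_faq"
```

The retrieved passages would then be injected into the LLM prompt, which is what makes the generation "retrieval-augmented".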
Not specified
INR 18.0 - 33.0 Lacs P.A.
Work from Office
Full Time
Job Description:
Senior Data Engineer/Architect with at least 6 years of experience designing, developing, and optimizing data pipelines, data lakes, and cloud-based data architectures. Skilled in implementing scalable data solutions using Databricks SQL and AWS services, ensuring data quality, security, and performance. Proven ability to collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-impact data solutions that drive business insights and operational excellence.

Key Qualifications:
- Cloud Data Architecture & Engineering: Expertise in designing and implementing cloud-based data architectures using AWS services such as S3, Glue, Redshift, Athena, Lambda, and EC2. Experience setting up data lakes, data warehouses, and ETL pipelines optimized for performance and cost efficiency.
- Databricks Expertise: Strong proficiency in Databricks SQL for data processing, transformation, and analysis. Skilled in developing and optimizing Spark-based ETL jobs and ensuring seamless integration of Databricks with AWS cloud services.
- Data Pipeline Development: Experience building and maintaining scalable, fault-tolerant data pipelines using tools such as Apache Spark, Airflow, and AWS Glue. Able to ingest, transform, and aggregate large volumes of structured and unstructured data efficiently.
- SQL & Data Modeling: Expertise in SQL programming for extraction, transformation, and loading (ETL). Experienced in designing and optimizing data models, including dimensional modeling, star schemas, and OLAP solutions, to enhance query performance.
- Data Governance & Security: Proficient in implementing data governance frameworks, managing data quality, ensuring compliance with data privacy regulations, and configuring IAM roles, policies, and VPCs to protect sensitive data in AWS environments.
- Collaboration & Stakeholder Management: Skilled at partnering with business teams, data analysts, and data scientists to gather requirements, translate them into scalable data solutions, and continuously optimize data workflows to meet evolving business needs.
- Performance Optimization: Proven ability to optimize ETL pipelines and SQL queries for efficient data processing and reduced latency. Expertise in partitioning, indexing, caching, and other optimization techniques in AWS and Databricks environments.

Technical Skills:
- Cloud & Data Platforms: AWS (S3, Glue, Redshift, Lambda, Athena, EMR), Databricks, Apache Spark
- SQL & Scripting: Databricks SQL, Python, PySpark, SQL, Scala
- Data Engineering Tools: Apache Airflow, AWS Glue, Delta Lake
- Data Modeling: Star Schema, Snowflake Schema, Dimensional Modeling
- Security & Governance: IAM, VPCs, Encryption, Data Privacy Regulations
- CI/CD & Automation: Terraform, AWS CloudFormation, Git, Jenkins

Certifications (preferred but not mandatory):
- AWS Certified Data Analytics - Specialty
- Databricks Certified Data Engineer Professional
- AWS Certified Solutions Architect - Associate
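The star-schema modeling this role calls for can be sketched with an in-memory SQLite database. Table names and figures below are invented for illustration; the fact-to-dimension join and aggregation pattern is the same one used at scale in Databricks SQL or Redshift.

```python
import sqlite3

# Minimal star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "oncology"), (2, "vaccines")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# Typical dimensional query: aggregate fact rows by a dimension attribute.
rows = cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('oncology', 150.0), ('vaccines', 75.0)]
```

Keeping descriptive attributes in dimension tables and measures in narrow fact tables is what lets the warehouse partition and index the large fact table independently of the small dimensions.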
Not specified
INR 17.0 - 32.0 Lacs P.A.
Work from Office
Full Time
Knowledge Foundry is looking for an experienced Data Visualization Specialist to join our team in a full-time permanent role. The ideal candidate will be responsible for designing, developing, and optimizing interactive dashboards and reports using Power BI, Spotfire, and Power Apps while working with large and complex pharma forecast data.

The role requires expertise in SQL, Python, and Excel for data analysis and manipulation, along with a strong understanding of pharma forecast data sources such as IQVIA MIDAS, KANTAR, and GLOBOCAN. The candidate should also be familiar with clinical trial data systems and concepts such as incidence, prevalence, epidemiology metrics, dosage, and line of therapy.

Key Responsibilities:
- Design, develop, and maintain interactive dashboards using Power BI, Spotfire, and Power Apps
- Perform data manipulation, transformation, and analysis using SQL, Python, and Excel
- Work with relational databases, data lakes, and clinical trial data systems for efficient data extraction and visualization
- Collaborate with business stakeholders to understand requirements and deliver insightful, high-impact reports
- Optimize dashboards and queries for performance, scalability, and efficiency
- Interpret and work with pharma forecast data sources, including IQVIA MIDAS, KANTAR, and GLOBOCAN
- Apply forecasting funnel concepts such as incidence, prevalence, epidemiology metrics, units, patients, dosage, and line of therapy
- Ensure data accuracy, consistency, and integrity across all reporting solutions

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Analytics, or a related field
- 5-7 years of experience leading report development and dashboard design
- Visualization expertise: Power BI, Spotfire, Power Apps
- Data analysis & manipulation: SQL, Excel, Python
- Data sources: relational databases, data lakes, clinical trial data systems
- Pharma forecast data exposure preferred: IQVIA MIDAS, KANTAR, GLOBOCAN
- Strong understanding of forecasting funnel concepts (incidence, prevalence, epidemiology metrics, patients, dosage, line of therapy, etc.)
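The forecasting funnel named above can be sketched as a chain of multiplications from an epidemiology base down to units. Every rate and the dosing factor below are hypothetical placeholders; real inputs would come from sources such as GLOBOCAN or IQVIA MIDAS.

```python
def forecast_units(population, incidence_rate, diagnosis_rate,
                   treatment_rate, units_per_patient):
    """Simplified pharma forecasting funnel: epidemiology -> patients -> units.
    All parameters are illustrative assumptions, not real market data."""
    incident_patients = population * incidence_rate
    diagnosed = incident_patients * diagnosis_rate
    treated = diagnosed * treatment_rate
    return treated * units_per_patient

units = forecast_units(
    population=1_000_000,
    incidence_rate=0.002,    # 2,000 incident patients
    diagnosis_rate=0.8,      # 1,600 diagnosed
    treatment_rate=0.5,      # 800 treated (e.g. on this line of therapy)
    units_per_patient=12,    # e.g. monthly dosing over a year
)
print(units)  # 9600.0
```

In practice each stage is segmented further (by line of therapy, dosage, persistence), but the dashboard logic remains this multiply-down-the-funnel structure.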
Not specified
INR 17.0 - 32.0 Lacs P.A.
Work from Office
Full Time
Knowledge Foundry is looking for an experienced Frontend Developer - Specialist to join our team as a full-time permanent employee. The ideal candidate will have a strong background in developing web applications, with expertise in JavaScript (ES6+), React, HTML5, CSS3 (Sass, Bootstrap), and data visualization tools such as Python Dash, Plotly, R, and Power BI.

We seek a professional who can translate complex data into visually appealing, interactive dashboards, work with UI/UX concepts and design tools like Figma, and optimize applications for performance, scalability, and responsiveness.

Key Responsibilities:
- Develop, optimize, and maintain high-performance web applications using React, JavaScript (ES6+), HTML5, and CSS3 (Sass, Bootstrap)
- Implement interactive data visualizations using Python Dash, Plotly, R, and Power BI
- Collaborate with UI/UX designers to build intuitive, user-friendly interfaces based on Figma designs
- Optimize web applications for speed, responsiveness, and scalability
- Use version control systems such as Git and GitHub to manage the codebase efficiently
- Debug and troubleshoot front-end performance issues
- Stay current with the latest front-end technologies and best practices

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field
- 5-7 years of experience developing web applications with React and JavaScript (ES6+)
- Data visualization: Python Dash, Plotly, R, Power BI
- UI/UX & design tools: Figma, CSS3 (Sass, Bootstrap)
- Version control: Git, GitHub
- Experience optimizing front-end performance and scalability
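For the Plotly/Dash side of this role, it helps to know that a Plotly figure is ultimately JSON: a list of trace objects under "data" plus a "layout". The sketch below builds that structure as a plain dict to stay dependency-free; with plotly installed, plotly.graph_objects produces the same shape. The labels and values are invented for illustration.

```python
def bar_figure(labels, values, title):
    """Build a Plotly-style bar-chart figure as a plain dict
    (the JSON structure Plotly.js and Dash render)."""
    return {
        "data": [{"type": "bar", "x": list(labels), "y": list(values)}],
        "layout": {"title": {"text": title}},
    }

fig = bar_figure(["Q1", "Q2", "Q3"], [120, 150, 90], "Quarterly volume")
print(fig["data"][0]["y"])  # [120, 150, 90]
```

In a Dash app this dict would be passed straight to a dcc.Graph component's figure property; in React, Plotly.js consumes the identical structure.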
Not specified
INR 14.0 - 24.0 Lacs P.A.
Work from Office
Full Time
We are looking for a highly skilled Machine Learning Specialist with deep expertise in R programming and data analysis to join our analytics team. The ideal candidate will have strong experience in machine learning, data manipulation, visualization, and working with large datasets. You will be instrumental in building scalable ML models, generating insights, and developing impactful dashboards for business intelligence.

Roles and Responsibilities:
- Design, develop, and deploy machine learning models, including supervised and unsupervised learning, deep learning, and time series analysis
- Code expertly in R, leveraging advanced ML and data manipulation libraries such as caret, tensorflow, torch, and dplyr
- Handle large-scale datasets efficiently using SQL and work in distributed computing environments
- Create high-quality data visualizations and dashboards using Power BI, Tableau, ggplot2, Plotly, gt, and flextable
- Integrate data workflows with RESTful APIs for data retrieval and deployment
- Collaborate with data engineers, product owners, and business analysts to convert business problems into analytical solutions

Key Skills Required:
- Strong R programming with hands-on use of ML and data libraries
- Advanced understanding of machine learning techniques and deep learning
- Proficiency in SQL, especially with large-scale and distributed data systems
- Visualization expertise using Power BI, Tableau, and R graphics libraries
- Experience with RESTful APIs
- Solid grasp of data preprocessing, feature engineering, and model evaluation
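The model-evaluation workflow this role emphasizes (fit on one slice, score on a held-out slice) can be sketched minimally. The role itself is R-centric (caret handles this via train/test resampling); the Python sketch below shows the same split-fit-evaluate pattern with an ordinary least-squares line and RMSE, using invented toy data.

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def rmse(ys, preds):
    """Root mean squared error of predictions against actuals."""
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

# Train on one slice, evaluate on a held-out slice (toy data: y = 2x + 1).
train_x, train_y = [1, 2, 3, 4], [3, 5, 7, 9]
test_x, test_y = [5, 6], [11, 13]
a, b = fit_line(train_x, train_y)
preds = [a * x + b for x in test_x]
print(round(rmse(test_y, preds), 6))  # 0.0 (data is perfectly linear)
```

Holding out the test slice before fitting is what makes the RMSE an honest estimate of generalization rather than of memorization.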
Not specified
INR 20.0 - 32.5 Lacs P.A.
Work from Office
Full Time
Job Description:
We are seeking a highly analytical Demand Planning Automation Analyst with expertise in demand planning, supply chain management, and statistical forecasting. The ideal candidate will leverage tools such as R, VBA, and Microsoft Power BI to automate demand forecasting processes, improve accuracy, and enhance operational efficiency. This role requires collaboration with global and regional stakeholders to drive strategic improvements in demand planning.

Key Responsibilities:
- Functional expertise in demand and supply planning: Manage demand and supply planning processes to optimize forecasting accuracy and supply chain efficiency. Apply industry best practices in demand forecasting, inventory management, and supply chain optimization.
- Technical proficiency & automation: Develop automation solutions using VBA and R to streamline forecasting and reporting. Use Microsoft Power BI for real-time data visualization, reporting, and performance tracking. Leverage Microsoft Office applications for data analysis and stakeholder presentations.
- Statistical forecasting & Kinaxis RapidResponse: Generate monthly forecasts, transitioning from statistical forecasts to consensus-based demand forecasts. Use Kinaxis RapidResponse or similar tools for enhanced supply chain planning (preferred).
- Collaboration with stakeholders: Engage with regional and global stakeholders to refine demand forecasting techniques based on market trends. Work closely with marketing, trade, and finance teams to estimate promotional volumes for various products.
- Participation in monthly planning forums: Collaborate with supply, deployment, and innovation planning teams to align production plans. Help reduce Mean Absolute Percentage Error (MAPE) and forecasting bias through continuous improvement initiatives.
- Performance review & inventory management: Monitor forecast accuracy metrics and identify areas for improvement. Collaborate with country teams on inventory management, optimizing Reorder Point (ROP) and Safety Stock (SS) levels. Ensure proper inventory control for discontinued products using verified patient data and demand forecasting insights.

Required Technical Skills:
- Tools & technologies: R, VBA, Microsoft Power BI, Microsoft Office
- Statistical techniques: Time series analysis, regression analysis, demand forecasting methodologies
- Supply chain systems: Kinaxis RapidResponse (preferred)
- Data analysis & reporting: Strong ability to analyze, manipulate, and visualize data for strategic decision-making
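Two of the quantities this role tracks, MAPE and the reorder point, can be sketched directly. The safety-stock line uses the common SS = z * sigma * sqrt(lead time) approximation; the z value, demand figures, and lead time below are illustrative assumptions, not prescribed parameters.

```python
import math

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

def reorder_point(avg_daily_demand, lead_time_days, demand_std, z=1.65):
    """ROP = expected demand over lead time + safety stock.
    SS uses the common z * sigma * sqrt(lead time) approximation;
    z = 1.65 corresponds to roughly a 95% service level (assumption)."""
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return avg_daily_demand * lead_time_days + safety_stock

# Each forecast below misses its actual by exactly 5%, so MAPE is 5%.
print(round(mape([100, 200, 400], [105, 210, 420]), 2))  # 5.0
print(round(reorder_point(50, 9, 10), 1))                # 499.5
```

Driving MAPE down (and keeping bias near zero) lets safety stock, and therefore the reorder point, shrink without hurting service levels.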