Jobs
Interviews

15 Imputation Jobs

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the job portal.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

P1,C3,STS. Design and build data cleansing and imputation, map to a standard data model, transform to satisfy business rules and statistical computations, and validate data content. Develop, modify, and maintain Python and Unix scripts and complex SQL. Performance-tune existing code to avoid bottlenecks and improve performance. Build end-to-end data flows from sources to fully curated and enhanced data sets. Develop automated Python jobs for ingesting data from various source systems. Provide technical expertise in areas of architecture, design, and implementation. Work with team members to create useful reports and dashboards that provide insight, improve or automate processes, or otherwise add value to the team. Write SQL queries for data validation. Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse. Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality. Optimize and tune ETL processes for performance and scalability. Develop and maintain documentation for ETL processes, data flows, and data mappings. Monitor and troubleshoot ETL processes to ensure data accuracy and availability. Implement data validation and error-handling mechanisms. Work with large data sets and ensure data integrity and consistency. Skills: Python; ETL tools like Informatica, Talend, SSIS, or similar; SQL and MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps and GitLab; experience with AWS Glue or Azure Data Factory.
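The cleansing-and-imputation step this role describes can be sketched in a few lines of pandas; the frame, column names, and business rule below are hypothetical placeholders, not the employer's actual data model:

```python
import numpy as np
import pandas as pd

# Hypothetical raw feed with a gap and an out-of-range value.
raw = pd.DataFrame({
    "account_id": ["A1", "A2", "A3", "A4"],
    "balance": [100.0, np.nan, 250.0, -10.0],
    "region": ["east", None, "west", "east"],
})

# Cleansing: the (assumed) business rule says balances are non-negative,
# so rule violations are treated as missing.
raw.loc[raw["balance"] < 0, "balance"] = np.nan

# Imputation: numeric columns get the median, categoricals the mode.
raw["balance"] = raw["balance"].fillna(raw["balance"].median())
raw["region"] = raw["region"].fillna(raw["region"].mode()[0])

# Validation: assert the curated set satisfies the rule and is complete.
assert (raw["balance"] >= 0).all()
assert raw.notna().all().all()
```

In a production pipeline the same three stages (cleanse, impute, validate) would run per source feed before mapping into the standard data model.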

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us: Planful is the pioneer of financial performance management cloud software. The Planful platform, which helps businesses drive peak financial performance, is used around the globe to streamline business-wide planning, budgeting, consolidations, reporting, and analytics. Planful empowers finance, accounting, and business users to plan confidently, close faster, and report accurately. More than 1,500 customers, including Bose, Boston Red Sox, Five Guys, Grafton Plc, Gousto, Specialized and Zappos rely on Planful to accelerate cycle times, increase productivity, and improve accuracy. Planful is a private company backed by Vector Capital, a leading global private equity firm. Learn more at planful.com. About the Role: We are looking for self-driven, self-motivated, and passionate technical experts who would love to join us in solving the hardest problems in the EPM space. If you are capable of diving deep into our tech stack to glean through memory allocations, floating point calculations, and data indexing (in addition to many others), come join us. Requirements: 5+ years in a mid-level Python Engineer role, preferably in analytics or fintech. Expert in Python (Flask, Django, pandas, NumPy, SciPy, scikit-learn) with hands-on performance tuning. Familiarity with AI-assisted development tools and IDEs (Cursor, Windsurf) and modern editor integrations (VS Code + Cline). Exposure to libraries supporting time-series forecasting. Proficient in SQL for complex queries on large datasets. Excellent analytical thinking, problem-solving, and communication skills. Nice to have: Shape financial time-series data: outlier detection/handling, missing-value imputation, techniques for small/limited datasets. Profile & optimize Python code (vectorization, multiprocessing, cProfile). Monitor model performance and iterate to improve accuracy. Collaborate with data scientists and stakeholders to integrate solutions. 
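The "outlier detection/handling" and "missing-value imputation" skills listed above, applied to a hypothetical daily financial series; the MAD threshold and interpolation choice are illustrative, not Planful's method:

```python
import numpy as np
import pandas as pd

# Hypothetical daily series with a gap and a spike.
idx = pd.date_range("2024-01-01", periods=8, freq="D")
s = pd.Series([100, 102, np.nan, 101, 500, 103, np.nan, 104],
              index=idx, dtype=float)

# Outlier handling: mask values beyond 3 median absolute deviations.
med = s.median()
mad = (s - med).abs().median()
s = s.mask((s - med).abs() > 3 * mad)

# Missing-value imputation: time-aware linear interpolation,
# with forward/backward fill to cover any endpoint gaps.
s = s.interpolate(method="time").ffill().bfill()
```

The spike at 500 is first converted to a missing value and then interpolated like the original gaps, which is one common way to keep a small dataset usable without dropping rows.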
Why Planful: Planful exists to enrich the world by helping our customers and our people achieve peak performance. To foster the best-in-class work we're so proud of, we've created a best-in-class culture, including: 2 volunteer days, birthday PTO, and quarterly company Wellness Days; a 3-month supply of diapers and meal deliveries for the first month of your maternity/paternity leave; our annual Planful Palooza, an in-person, company-wide culture event; a company-wide mentorship program with executive sponsorship, and CFO- and manager-specific monthly training programs; and Employee Resource Groups such as Women of Planful, LatinX at Planful, Parents of Planful, and many more. We encourage our teammates to bring their authentic selves to the team, and have full support in creating new ERGs & communities along the way.

Posted 1 week ago

Apply

0 years

0 Lacs

Gāndhīnagar

On-site

We seek strong candidates in any field of mathematics and statistics, as well as in any interdisciplinary area where innovative and principled use of statistics and/or probability is of vital importance. Candidates must have a Ph.D. in Statistics or Probability Theory from a reputed institution, and a good research record and background. The following sub-areas are currently of special interest: Optimization Techniques, Mathematical Modeling and Simulation, Probability and Random Processes, Causal Inference, High-dimensional Statistics, Statistics of Networks, Bayesian Inference, Bayesian Computation, Missing Data Analysis, Imputation Methodology, Applied Stochastic Processes, Statistical Modelling of Spatio-temporal Data, and Analysis of Complex Observational Data. Minimum Eligibility Criteria (all disciplines except design-area candidates): (i) a Ph.D. with a first class or equivalent in the preceding degree and an excellent academic record throughout; and (ii) a strong research record with publications in reputed journals and conferences. Associate Professor: A minimum of six years of post-Ph.D. teaching/research/professional experience, of which at least three years should be at the level of Assistant Professor at higher educational institutions. A strong research record, as evidenced by publications, external research grants/projects, and experience in doctoral supervision, is expected. Application Submission Process: Prospective candidates should send an email to dean_faculty@daiict.ac.in with the subject "Faculty position in Disciplines/Areas (e.g., Computer Science, Humanities & Social Sciences)". Please attach the following to your email: (1) A CV with details about your education starting from 12th-standard board exams (mention marks/CGPA, year of passing, and specialization if any), work experience, and publications. Please provide the names of three references who may be contacted for a letter of reference in support of your candidature.
(2) A research statement giving your research background, research outcomes, and future research plans. (3) A teaching statement giving your teaching methodology, teaching experience, the foundation/core courses you would like to teach, and the elective courses you would like to teach. Faculty will be responsible for conducting independent research within their respective fields and teaching both undergraduate and postgraduate courses. Candidates with interdisciplinary expertise are strongly encouraged to apply. They will play an important role in contributing to the Institute's mission through their teaching, research, and participation in various institutional activities. We encourage candidates to visit the Institute website for more information about the courses and research groups, in particular the Faculty page, to get a sense of the faculty profile.

Posted 1 week ago

Apply

0 years

0 Lacs

Gāndhīnagar

On-site

We seek strong candidates in any field of mathematics and statistics, as well as in any interdisciplinary area where innovative and principled use of statistics and/or probability is of vital importance. Candidates must have a Ph.D. in Statistics or Probability Theory from a reputed institution, and a good research record and background. The following sub-areas are currently of special interest: Optimization Techniques, Mathematical Modeling and Simulation, Probability and Random Processes, Causal Inference, High-dimensional Statistics, Statistics of Networks, Bayesian Inference, Bayesian Computation, Missing Data Analysis, Imputation Methodology, Applied Stochastic Processes, Statistical Modelling of Spatio-temporal Data, and Analysis of Complex Observational Data. Minimum Eligibility Criteria (all disciplines except design-area candidates): (i) a Ph.D. with a first class or equivalent in the preceding degree and an excellent academic record throughout; and (ii) a strong research record with publications in reputed journals and conferences. Assistant Professor: A Ph.D. with strong research capabilities and a strong passion for teaching at undergraduate and postgraduate levels. Postdoctoral experience is preferred. Application Submission Process: Prospective candidates should send an email to dean_faculty@daiict.ac.in with the subject "Faculty position in Disciplines/Areas (e.g., Computer Science, Humanities & Social Sciences)". Please attach the following to your email: (1) A CV with details about your education starting from 12th-standard board exams (mention marks/CGPA, year of passing, and specialization if any), work experience, and publications. Please provide the names of three references who may be contacted for a letter of reference in support of your candidature. (2) A research statement giving your research background, research outcomes, and future research plans.
(3) A teaching statement giving your teaching methodology, teaching experience, the foundation/core courses you would like to teach, and the elective courses you would like to teach. Faculty will be responsible for conducting independent research within their respective fields and teaching both undergraduate and postgraduate courses. Candidates with interdisciplinary expertise are strongly encouraged to apply. They will play an important role in contributing to the Institute's mission through their teaching, research, and participation in various institutional activities. We encourage candidates to visit the Institute website for more information about the courses and research groups, in particular the Faculty page, to get a sense of the faculty profile.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us Planful is the pioneer of financial performance management cloud software. The Planful platform, which helps businesses drive peak financial performance, is used around the globe to streamline business-wide planning, budgeting, consolidations, reporting, and analytics. Planful empowers finance, accounting, and business users to plan confidently, close faster, and report accurately. More than 1,500 customers, including Bose, Boston Red Sox, Five Guys, Grafton Plc, Gousto, Specialized and Zappos rely on Planful to accelerate cycle times, increase productivity, and improve accuracy. Planful is a private company backed by Vector Capital, a leading global private equity firm. Learn more at planful.com. About The Role We are looking for self-driven, self-motivated, and passionate technical experts who would love to join us in solving the hardest problems in the EPM space. If you are capable of diving deep into our tech stack to glean through memory allocations, floating point calculations, and data indexing (in addition to many others), come join us. Requirements 5+ years in a mid-level Python Engineer role, preferably in analytics or fintech. Expert in Python (Flask, Django, pandas, NumPy, SciPy, scikit-learn) with hands-on performance tuning. Familiarity with AI-assisted development tools and IDEs (Cursor, Windsurf) and modern editor integrations (VS Code + Cline). Exposure to libraries supporting time-series forecasting. Proficient in SQL for complex queries on large datasets. Excellent analytical thinking, problem-solving, and communication skills. Nice To Have Shape financial time-series data: outlier detection/handling, missing-value imputation, techniques for small/limited datasets. Profile & optimize Python code (vectorization, multiprocessing, cProfile). Monitor model performance and iterate to improve accuracy. Collaborate with data scientists and stakeholders to integrate solutions. 
Why Planful: Planful exists to enrich the world by helping our customers and our people achieve peak performance. To foster the best-in-class work we're so proud of, we've created a best-in-class culture, including: 2 volunteer days, birthday PTO, and quarterly company Wellness Days; a 3-month supply of diapers and meal deliveries for the first month of your maternity/paternity leave; our annual Planful Palooza, an in-person, company-wide culture event; a company-wide mentorship program with executive sponsorship, and CFO- and manager-specific monthly training programs; and Employee Resource Groups such as Women of Planful, LatinX at Planful, Parents of Planful, and many more. We encourage our teammates to bring their authentic selves to the team, and have full support in creating new ERGs & communities along the way.

Posted 2 weeks ago

Apply

1.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Roles and Responsibilities: Develop AI solutions based on deep learning, neural networks, NLP, and computer vision to extract content from video and images. Develop and implement artificial intelligence models, primarily focusing on Generative AI. Prepare custom neural network architectures and train deep learning networks. Integrate AI solutions with real-time applications and deploy them both on-premises and in the cloud (AWS, Azure). Lead the design and development of AI solutions, ensuring their alignment with the company's objectives. Collaborate with cross-functional teams to understand their needs and translate them into AI solutions. Perform data imputation and cleansing through industry-best statistical practices. Leverage cloud technologies for the development and deployment of AI solutions, and utilize them to enhance the efficiency and scalability of those solutions. Conduct regular testing of AI models to ensure their accuracy and reliability. Troubleshoot and resolve any issues related to AI models or their implementation. Regularly report on project status, challenges, and achievements to senior management. Ensure adherence to industry best practices and compliance with company policies in all AI-related activities. Qualifications - Must Have: Hold a Bachelor's degree in Engineering or Technology, preferably with a specialization in a related field. Possess 1-6 years of experience in developing and implementing AI solutions, with a focus on Generative AI. Demonstrate proficiency in deep learning, neural networks, NLP, computer vision, machine learning, and Generative AI. Exhibit a strong understanding of statistics, ML methodologies, and the training of neural networks. Candidates should have working experience of building AI agents using available frameworks. Good to Have: Be proficient in using cloud technologies for the development and deployment of AI solutions.
Have hands-on experience in building deep learning, neural network, NLP, computer vision, machine learning, and Generative AI solutions using cloud-native services. Be familiar with industry best practices and compliance requirements related to AI. Possess a certification in AI or a related field. Have the ability to work independently as well as part of a team. Demonstrate strong project management skills, with the ability to define project scope, goals, and deliverables.

Posted 1 month ago

Apply

2.0 - 3.0 years

6 - 12 Lacs

Delhi

On-site

Job Title: Data Scientist – Financial Analytics Location: Onsite – Okhla Vihar, New Delhi Experience: 2-3 Years About the Role: We are seeking a passionate and results-driven Data Scientist to join our analytics team at our Okhla Vihar office. This role is ideal for someone with 2–3 years of hands-on experience in applying data science techniques to financial datasets, with a strong foundation in Python, SQL, and time series forecasting. You will play a key role in analyzing complex financial data, building predictive models, and delivering actionable insights to support critical business decisions. Key Responsibilities: Analyze large volumes of structured and unstructured financial data to extract meaningful insights. Build and evaluate predictive models using machine learning techniques (e.g., scikit-learn). Perform time series analysis and forecasting for financial indicators (e.g., market trends, portfolio performance, cash flows). Design and implement robust feature engineering pipelines to improve model accuracy. Develop risk modeling frameworks to assess financial risk (e.g., credit risk, market risk). Write complex and optimized SQL queries for data extraction and transformation. Leverage Python libraries like Pandas, NumPy, and SciPy for data preprocessing and manipulation. Create clear and insightful data visualizations using Matplotlib, Seaborn, or Plotly to communicate findings. Work closely with finance and strategy teams to translate business needs into data-driven solutions. Monitor and fine-tune models in production to ensure continued relevance and accuracy. Document models, assumptions, and methodologies for auditability and reproducibility. Required Skills & Experience: 2–3 years of experience as a Data Scientist, preferably in a financial services or fintech environment. Proficient in Python (Pandas, scikit-learn, NumPy, SciPy, etc.). Strong experience in SQL for querying large datasets.
Deep understanding of time series modeling (ARIMA, SARIMA, Prophet, etc.). Experience with feature selection, feature transformation, and data imputation techniques. Solid understanding of financial concepts such as ROI, risk/return, volatility, portfolio analysis, and pricing models. Exposure to risk modeling (credit scoring, stress testing, scenario analysis). Strong analytical and problem-solving skills with attention to detail. Experience with data visualization tools – Matplotlib, Seaborn, or Plotly. Ability to interpret model outputs and convey findings to both technical and non-technical stakeholders. Excellent communication and collaboration skills. Preferred Qualifications: Bachelor’s or Master’s degree in Data Science, Statistics, Finance, Economics, Computer Science, or a related field. Experience working with financial time series datasets (e.g., stock prices, balance sheets, trading data). Understanding of regulatory frameworks and compliance in financial analytics. Familiarity with cloud platforms (AWS, GCP) is a plus. Experience working in agile teams. What We Offer: Competitive salary and performance incentives. Onsite role at a modern office located in Okhla Vihar, New Delhi. A collaborative and high-growth work environment. Opportunities to work on real-world financial data challenges. Exposure to cross-functional teams in finance, technology, and business strategy. How to Apply: If you’re excited to work at the intersection of data science and finance, and want to be part of a dynamic team solving real-world financial challenges, we’d love to hear from you. Please send your resume, portfolio (if applicable), and a brief note about why you're a great fit to [your email/HR contact here]. Job Type: Full-time Pay: ₹600,000.00 - ₹1,200,000.00 per year Schedule: Morning shift Work Location: In person
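The ARIMA-family modeling this role asks for is normally fit with a library such as statsmodels; as a self-contained illustration, here is its simplest member, an AR(1) model, fit by ordinary least squares on a made-up price series:

```python
import numpy as np

# Hypothetical price series; the AR(1) model y[t] = a + b*y[t-1] is the
# simplest autoregressive member of the ARIMA family.
y = np.array([10.0, 10.5, 10.2, 10.8, 11.0, 10.9, 11.3, 11.6])

# Regress each observation on its predecessor.
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
(a, b), *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# One-step-ahead forecast from the last observed value.
forecast = a + b * y[-1]
```

Higher-order AR terms, differencing (the "I"), and moving-average terms (the "MA") extend this same regression idea, which is why libraries expose them through a single `order=(p, d, q)` interface.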

Posted 1 month ago

Apply

0 years

0 Lacs

Kalaburagi, Karnataka, India

On-site

Responsibilities: Ability to write clean, maintainable, and robust code in Python. Understanding and expertise in software engineering concepts and best practices. Knowledge of testing frameworks and libraries. Experience with analytics (descriptive, predictive, EDA), feature engineering, algorithms, anomaly detection, data quality assessment, and Python visualization libraries (e.g., Matplotlib, Seaborn, or others). Comfortable with notebook and source code development: Jupyter, PyCharm/VS Code. Hands-on experience with technologies like Python, Spark/PySpark, Hadoop/MapReduce/Hive, pandas, etc. Familiarity with query languages and database technologies, CI/CD, and the testing and validation of data and software. Tech stack and activities that you would use and perform on a daily basis: Python; Spark (PySpark); Jupyter; SQL and NoSQL DBMS; Git (for source code versioning and CI/CD); Exploratory Data Analysis (EDA); imputation techniques; data linking/cleansing; feature engineering; Apache Airflow/Jenkins scheduling and automation; GitHub and GitHub Actions. Collaborative: able to build strong relations that enable robust debate and resolve periodic disagreements regarding priorities. Excellent interpersonal and communication skills. Ability to communicate effectively with technical and non-technical audiences. Ability to work under pressure with a solid sense for setting priorities. Ability to lead technical work with a strong sense of ownership. Strong command of the English language (both verbal and written). Practical and action-oriented. Compelling communicator. Excellent stakeholder management. Foster and promote entrepreneurial spirit and curiosity amongst team members. Team player. Quick learner. (ref:hirist.tech)
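The EDA and imputation activities this role lists typically start with profiling missingness before picking a strategy; a minimal pandas sketch (the frame and strategy choices are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical frame: profile missingness before choosing how to impute.
df = pd.DataFrame({
    "age": [25, np.nan, 31, 40, np.nan],
    "income": [50_000, 62_000, np.nan, 58_000, 61_000],
    "city": ["Pune", "Delhi", "Delhi", None, "Delhi"],
})

# EDA step: fraction of missing values per column.
missing_rate = df.isna().mean()

# Imputation step: median for numeric columns, mode for categoricals.
filled = df.copy()
for col in filled.select_dtypes(include="number"):
    filled[col] = filled[col].fillna(filled[col].median())
filled["city"] = filled["city"].fillna(filled["city"].mode()[0])
```

The missingness profile is what justifies the strategy: a column missing at 40% may need model-based imputation or dropping, while a 20% gap often tolerates a simple median fill.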

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Title: Data Scientist – Financial Analytics Location: Onsite – Okhla Vihar, New Delhi Experience: 2-3 Years About the Role: We are seeking a passionate and results-driven Data Scientist to join our analytics team at our Okhla Vihar office. This role is ideal for someone with 2–3 years of hands-on experience in applying data science techniques to financial datasets, with a strong foundation in Python, SQL, and time series forecasting. You will play a key role in analyzing complex financial data, building predictive models, and delivering actionable insights to support critical business decisions. Key Responsibilities: Analyze large volumes of structured and unstructured financial data to extract meaningful insights. Build and evaluate predictive models using machine learning techniques (e.g., scikit-learn). Perform time series analysis and forecasting for financial indicators (e.g., market trends, portfolio performance, cash flows). Design and implement robust feature engineering pipelines to improve model accuracy. Develop risk modeling frameworks to assess financial risk (e.g., credit risk, market risk). Write complex and optimized SQL queries for data extraction and transformation. Leverage Python libraries like Pandas, NumPy, and SciPy for data preprocessing and manipulation. Create clear and insightful data visualizations using Matplotlib, Seaborn, or Plotly to communicate findings. Work closely with finance and strategy teams to translate business needs into data-driven solutions. Monitor and fine-tune models in production to ensure continued relevance and accuracy. Document models, assumptions, and methodologies for auditability and reproducibility. Required Skills & Experience: 2–3 years of experience as a Data Scientist, preferably in a financial services or fintech environment. Proficient in Python (Pandas, scikit-learn, NumPy, SciPy, etc.). Strong experience in SQL for querying large datasets.
Deep understanding of time series modeling (ARIMA, SARIMA, Prophet, etc.). Experience with feature selection, feature transformation, and data imputation techniques. Solid understanding of financial concepts such as ROI, risk/return, volatility, portfolio analysis, and pricing models. Exposure to risk modeling (credit scoring, stress testing, scenario analysis). Strong analytical and problem-solving skills with attention to detail. Experience with data visualization tools – Matplotlib, Seaborn, or Plotly. Ability to interpret model outputs and convey findings to both technical and non-technical stakeholders. Excellent communication and collaboration skills. Preferred Qualifications: Bachelor’s or Master’s degree in Data Science, Statistics, Finance, Economics, Computer Science, or a related field. Experience working with financial time series datasets (e.g., stock prices, balance sheets, trading data). Understanding of regulatory frameworks and compliance in financial analytics. Familiarity with cloud platforms (AWS, GCP) is a plus. Experience working in agile teams. What We Offer: Competitive salary and performance incentives. Onsite role at a modern office located in Okhla Vihar, New Delhi. A collaborative and high-growth work environment. Opportunities to work on real-world financial data challenges. Exposure to cross-functional teams in finance, technology, and business strategy. How to Apply: If you’re excited to work at the intersection of data science and finance, and want to be part of a dynamic team solving real-world financial challenges, we’d love to hear from you. Please send your resume, portfolio (if applicable), and a brief note about why you're a great fit to [your email/HR contact here]. Job Type: Full-time Pay: ₹600,000.00 - ₹1,200,000.00 per year Schedule: Morning shift Work Location: In person

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Purpose - Understand business processes and data, and model the requirements to create analytics solutions. Build predictive models and recommendation engines using state-of-the-art machine learning techniques to help business processes increase efficiency and effectiveness in their outcomes. Churn and analyze the data to discover actionable insights and patterns for business use. Assist the Function Head in data preparation and modelling tasks as required. Job Outline - Collaborate with Business and IT teams to understand and collect data. Collect, collate, clean, process, and transform large volumes of primarily tabular data (a blend of numerical, categorical, and some text). Apply data preparation techniques like data filtering, joining, cleaning, missing-value imputation, feature extraction, feature engineering, feature selection, dimensionality reduction, feature scaling, variable transformation, etc. Apply as required: basic algorithms like Linear Regression, Logistic Regression, ANOVA, KNN, Clustering (K-Means, Density-based, Hierarchical, etc.), SVM, Naïve Bayes, Decision Trees, Principal Components, Association Rule Mining, etc. Apply as required: ensemble modeling algorithms like Bagging (Random Forest), Boosting (GBM, LightGBM, XGBoost, CatBoost), time-series modelling, and other state-of-the-art algorithms. Apply as required: modelling concepts like hyperparameter optimization, feature selection, stacking, blending, k-fold cross-validation, bias and variance, overfitting, etc. Build predictive models using state-of-the-art machine learning techniques for regression, classification, clustering, recommendation engines, etc. Perform advanced analytics of the business data to find hidden patterns, insights, and explanatory causes, and make strategic business recommendations based on the same. Knowledge/Education - BE/B.Tech in any stream.
Skills - Should have strong expertise in Python libraries like pandas and scikit-learn, along with the ability to code according to the requirements stated in the Job Outline above. Experience with Python editors like PyCharm and/or Jupyter Notebooks (or other editors) is a must. The ability to organize code into modules, functions, and/or objects is a must. Knowledge of using ChatGPT for ML is preferred. Familiarity with basic SQL for querying and Excel for data analysis is a must. Should understand the basics of statistics, like distributions, hypothesis testing, sampling techniques, etc. Work Experience - At least 4 years of experience solving business problems through data analytics, data science, and modelling. Should have experience as a full-time Data Scientist for at least 2 years. Experience with at least 3 projects in ML model building that were used in production by business or other clients. Skills/Experience Preferred but not compulsory - Familiarity with using ChatGPT, LLMs, out-of-the-box models, etc. for data preparation and model building. Kaggle experience. Familiarity with R. Job Interface/Relationships: Internal - work with different business teams to build predictive models for them. External - none. Key Responsibilities and % Time Spent: Data preparation for modelling (data extraction, cleaning, joining, and transformation) - 35%. Build ML/AI models for various business requirements - 35%. Perform custom analytics to provide actionable insights to the business - 20%. Assist the Function Head in data preparation and modelling tasks as required - 10%. Any other additional input (will not be considered for selection): familiarity with deep learning algorithms, image processing and classification, and text modelling using NLP techniques.
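The k-fold cross-validation concept named in the Job Outline, sketched with NumPy on synthetic data; a through-origin least-squares fit stands in for the scikit-learn estimators a real project would plug in:

```python
import numpy as np

# Synthetic data: y is roughly 2*x plus small noise.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 2.0 * x + rng.normal(scale=0.1, size=20)

def kfold_mse(x, targets, k=5):
    """Average held-out MSE of a through-origin least-squares fit over k folds."""
    folds = np.array_split(np.arange(len(x)), k)
    errors = []
    for fold in folds:
        # Train on everything outside the fold, evaluate on the fold.
        train = np.setdiff1d(np.arange(len(x)), fold)
        slope = np.dot(x[train], targets[train]) / np.dot(x[train], x[train])
        pred = slope * x[fold]
        errors.append(np.mean((targets[fold] - pred) ** 2))
    return float(np.mean(errors))

score = kfold_mse(x, y)
```

Averaging the held-out error across folds is what gives cross-validation its lower variance compared with a single train/test split, which matters when choosing between the ensemble methods the posting lists.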

Posted 1 month ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

P1,C3,STS. Design and build data cleansing and imputation, map to a standard data model, transform to satisfy business rules and statistical computations, and validate data content. Develop, modify, and maintain Python and Unix scripts and complex SQL. Performance-tune existing code to avoid bottlenecks and improve performance. Build end-to-end data flows from sources to fully curated and enhanced data sets. Develop automated Python jobs for ingesting data from various source systems. Provide technical expertise in areas of architecture, design, and implementation. Work with team members to create useful reports and dashboards that provide insight, improve or automate processes, or otherwise add value to the team. Write SQL queries for data validation. Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse. Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality. Optimize and tune ETL processes for performance and scalability. Develop and maintain documentation for ETL processes, data flows, and data mappings. Monitor and troubleshoot ETL processes to ensure data accuracy and availability. Implement data validation and error-handling mechanisms. Work with large data sets and ensure data integrity and consistency. Skills: Python; ETL tools like Informatica, Talend, SSIS, or similar; SQL and MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps and GitLab; experience with AWS Glue or Azure Data Factory.

Posted 2 months ago

Apply

1.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Roles and Responsibilities: Develop AI solutions based on deep learning, neural networks, NLP, and computer vision to extract content from video and images. Develop and implement artificial intelligence models, primarily focusing on Generative AI. Design custom neural network architectures and train deep learning networks. Integrate AI solutions with real-time applications and deploy them both on-premise and in the cloud (AWS, Azure). Lead the design and development of AI solutions, ensuring their alignment with the company's objectives. Collaborate with cross-functional teams to understand their needs and translate them into AI solutions. Perform data imputation and cleansing using industry-best statistical practices. Leverage cloud technologies for the development and deployment of AI solutions, and to enhance their efficiency and scalability. Conduct regular testing of AI models to ensure their accuracy and reliability. Troubleshoot and resolve any issues related to AI models or their implementation. Regularly report on project status, challenges, and achievements to senior management. Ensure adherence to industry best practices and compliance with company policies in all AI-related activities. The candidate should have working experience building AI agents using available frameworks.

Qualifications
Must have: A Bachelor's degree in Engineering or Technology, preferably with a specialization in a related field. 1-6 years of experience in developing and implementing AI solutions, with a focus on Generative AI. Proficiency in deep learning, neural networks, NLP, computer vision, machine learning, and Generative AI. A strong understanding of statistics, ML methodologies, and the training of neural networks. Working experience building AI agents using available frameworks.
Good to have: Proficiency in using cloud technologies for the development and deployment of AI solutions. Hands-on experience building deep learning, neural network, NLP, computer vision, machine learning, and Generative AI solutions using cloud-native services. Familiarity with industry best practices and compliance requirements related to AI. Certification in AI or a related field. The ability to work independently as well as part of a team. Strong project management skills, with the ability to define project scope, goals, and deliverables.
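The "data imputation and cleansing through statistical practices" duty mentioned above can be sketched in a few lines of pandas. This is a minimal illustration only, not any employer's actual pipeline: the `sensor` column name and the choice of an IQR fence plus median imputation are assumptions made for the example.

```python
import numpy as np
import pandas as pd

def cleanse(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Cap outliers at the 1.5*IQR fence, then impute missing values with the median."""
    out = df.copy()
    q1, q3 = out[col].quantile([0.25, 0.75])
    fence = 1.5 * (q3 - q1)
    out[col] = out[col].clip(lower=q1 - fence, upper=q3 + fence)  # tame extreme values
    out[col] = out[col].fillna(out[col].median())                 # median imputation
    return out

# 100.0 is an outlier and the third reading is missing.
raw = pd.DataFrame({"sensor": [1.0, 2.0, np.nan, 3.0, 100.0]})
clean = cleanse(raw, "sensor")
```

Clipping before imputing matters here: if the order were reversed, the outlier would drag the imputed value upward.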

Posted 2 months ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles & Responsibilities

Eligibility / Minimum Qualifications: Bachelor's degree in computer science or a related field, OR a master's degree in statistics, economics, business economics, econometrics, or operations research. 6-8 years of experience in the Analytics/Data Science domain. Proficiency in programming languages such as Python. Experience with Generative AI techniques and tools. Familiarity with ETL methods, data imputation, data cleaning, and outlier handling. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML services. Knowledge of databases and associated tools such as SQL.

Technical Skills (Desirable): Expertise in NLP and Generative AI concepts, methods, and techniques, such as — Prompt design/engineering — Retrieval-Augmented Generation (RAG), Corrective RAG, and Knowledge Graph-based RAG using GPT-4o — Fine-tuning through LoRA/QLoRA — Multi-agent frameworks for RAG — Rerankers and similar techniques for enhancing plain-vanilla RAG — Evaluation frameworks such as G-Eval. Strong understanding of deep learning methods and machine learning techniques, including ensemble methods, support vector machines, and natural language processing (NLP). Exposure to Big Data technologies such as Hadoop, Hive, and Spark. Experience with advanced reporting tools such as Tableau, QlikView, or Power BI.

Specific Responsibilities
Requirement Gathering: Translate business requirements into actionable analytical plans in collaboration with the team. Ensure alignment of analytical plans with the customer's strategic objectives.
Data Handling: Identify and leverage appropriate data sources to address business problems. Explore, diagnose, and resolve data discrepancies, including ETL tasks, missing values, and outliers.
Development and Execution: — Individually deliver projects and proof-of-concept (POC) initiatives from inception to completion. — Contribute to the development and refinement of the technical and analytics architecture, ensuring it aligns with project and organizational goals.
— Implement scalable and robust analytical frameworks and data pipelines to support advanced analytics and machine learning applications. — Coordinate with cross-functional teams to achieve project goals. — Deliver production-ready models and solutions that meet quality and performance standards. — Monitor success metrics to ensure high-quality output and make necessary adjustments. — Create and maintain documentation and reports.
Innovation and Best Practices: Stay informed about new trends in Generative AI and integrate relevant advancements into our solutions. Implement novel applications of Generative AI algorithms and techniques in Python.

Sample Projects
GenAI-powered self-serve analytics solution for a global technology giant that leverages a multi-agent framework and Azure OpenAI services to provide actionable insights, recommendations, and answers to tactical questions derived from web analytics data.
GenAI bot for querying textual documents (e.g., retail audit orientation, FAQ documents, research briefs) of a multinational dairy company and getting personalized responses in a natural, conversational way, based on the structured context of the user (such as their personal details), along with citations, so that users can effortlessly carry out first-hand validation themselves.
GenAI bot for querying tabular datasets (such as monthly KPI data) of a leading global event agency, processing natural-language queries on the data and generating responses in textual, tabular, and visual formats.
GenAI-powered advanced information retrieval from structured data for a leading global technology organization.
TimesFM modelling for advanced time-series forecasting for a global retail chain.
Knowledge-Graph-based GenAI solution for knowledge retrieval and semantic summarization for a leading global event agency.
GenAI-powered shopping assistant for big-box warehouse club retail stores.
GenAI solution using a multi-agent framework for a travel-and-hospitality use case.
Input governance and response governance in GenAI solutions.
Development and implementation of evaluation frameworks for GenAI solutions/applications.
Training foundational models on new data using open-source LLMs or SLMs.

Experience: 6-8 Years

Skills
Primary Skill: Data Science
Sub Skill(s): Data Science
Additional Skill(s): Data Science, Python (Data Science), GenAI Fundamentals

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
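The RAG-style projects listed in this posting all begin with the same retrieval step: rank a corpus of documents by similarity to the user's query before handing the top hits to an LLM. Below is a dependency-free sketch of that step only, using bag-of-words cosine similarity as a stand-in for a real embedding model; the FAQ strings are invented for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by bag-of-words similarity to the query; return the top k."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

# Toy corpus standing in for the FAQ / audit documents mentioned above.
faq = [
    "Refunds are processed within five business days.",
    "The retail audit covers store layout and shelf placement.",
    "Monthly KPI data is refreshed on the first Monday.",
]
top = retrieve("when is kpi data refreshed", faq)
```

In a production pipeline the Counter vectors would be replaced by dense embeddings and the ranking possibly refined by a reranker, but the retrieve-then-generate shape stays the same.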

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu

Work from Office

1. Design and build data cleansing and imputation, map to a standard data model, transform to satisfy business rules and statistical computations, and validate data content. Develop, modify, and maintain Python and Unix scripts and complex SQL. Performance-tune the existing code to avoid bottlenecks and improve performance. Build an end-to-end data flow from sources to entirely curated and enhanced data sets. Develop automated Python jobs for ingesting data from various source systems. Provide technical expertise in areas of architecture, design, and implementation. Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team. Write SQL queries for data validation. Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse. Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality. Optimize and tune ETL processes for performance and scalability. Develop and maintain documentation for ETL processes, data flows, and data mappings. Monitor and troubleshoot ETL processes to ensure data accuracy and availability. Implement data validation and error-handling mechanisms. Work with large data sets and ensure data integrity and consistency.

Skills: Python; ETL tools like Informatica, Talend, SSIS, or similar; SQL, MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps, GitLab; experience in AWS Glue or Azure Data Factory.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
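The cleanse-impute-map-validate sequence described in this listing can be sketched with pandas. A minimal sketch under assumptions: the source columns `cust_nm`/`ord_amt`, the standard-model names, and the non-negative-amount business rule are all invented for illustration, not taken from the posting.

```python
import pandas as pd

# Hypothetical mapping from a source extract to a standard data model.
COLUMN_MAP = {"cust_nm": "customer_name", "ord_amt": "order_amount"}

def to_standard_model(source: pd.DataFrame) -> pd.DataFrame:
    """Rename to standard columns, enforce types, impute, and validate a business rule."""
    std = source.rename(columns=COLUMN_MAP)
    # Coerce bad strings to NaN, then impute missing amounts with zero.
    std["order_amount"] = pd.to_numeric(std["order_amount"], errors="coerce").fillna(0.0)
    if (std["order_amount"] < 0).any():
        raise ValueError("rows violate the non-negative order_amount rule")
    return std

src = pd.DataFrame({"cust_nm": ["Asha", "Ravi"], "ord_amt": ["120.5", None]})
std = to_standard_model(src)
```

Validating after transformation, and failing loudly on rule violations, is what keeps an automated ingestion job from silently loading bad rows into the warehouse.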

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

Work from Office

P1,C3,STS 1. Design and build data cleansing and imputation, map to a standard data model, transform to satisfy business rules and statistical computations, and validate data content. Develop, modify, and maintain Python and Unix scripts and complex SQL. Performance-tune the existing code to avoid bottlenecks and improve performance. Build an end-to-end data flow from sources to entirely curated and enhanced data sets. Develop automated Python jobs for ingesting data from various source systems. Provide technical expertise in areas of architecture, design, and implementation. Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team. Write SQL queries for data validation. Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse. Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality. Optimize and tune ETL processes for performance and scalability. Develop and maintain documentation for ETL processes, data flows, and data mappings. Monitor and troubleshoot ETL processes to ensure data accuracy and availability. Implement data validation and error-handling mechanisms. Work with large data sets and ensure data integrity and consistency.

Skills: Python; ETL tools like Informatica, Talend, SSIS, or similar; SQL, MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps, GitLab; experience in AWS Glue or Azure Data Factory.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 2 months ago

Apply