4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose: Understand business processes and data, and model requirements to create analytics solutions. Build predictive models and recommendation engines using state-of-the-art machine learning techniques to help business processes increase the efficiency and effectiveness of their outcomes. Churn through and analyze data to discover actionable insights and patterns for business use. Assist the Function Head in data preparation and modelling tasks as required.

Job Outline:
- Collaborate with Business and IT teams to understand and collect data.
- Collect, collate, clean, process, and transform large volumes of primarily tabular data (a blend of numerical, categorical, and some text).
- Apply data preparation techniques such as data filtering, joining, cleaning, missing-value imputation, feature extraction, feature engineering, feature selection, dimensionality reduction, feature scaling, and variable transformation.
- Apply, as required, basic algorithms such as Linear Regression, Logistic Regression, ANOVA, KNN, Clustering (K-Means, density-based, hierarchical, etc.), SVM, Naïve Bayes, Decision Trees, Principal Components, and Association Rule Mining.
- Apply, as required, ensemble modelling algorithms such as Bagging (Random Forest) and Boosting (GBM, LightGBM, XGBoost, CatBoost), time-series modelling, and other state-of-the-art algorithms.
- Apply, as required, modelling concepts such as hyperparameter optimization, feature selection, stacking, blending, k-fold cross-validation, bias and variance, and overfitting (a short pipeline sketch follows this posting's skills section).
- Build predictive models using state-of-the-art machine learning techniques for regression, classification, clustering, recommendation engines, etc.
- Perform advanced analytics on business data to find hidden patterns, insights, and explanatory causes, and make strategic business recommendations based on them.

Knowledge/Education: BE/B.Tech in any stream.

Skills:
- Strong expertise in Python libraries such as Pandas and scikit-learn, with the ability to code to the requirements stated in the Job Outline above.
- Experience with Python editors such as PyCharm and/or Jupyter Notebooks (or other editors) is a must.
- Ability to organize code into modules, functions, and/or objects is a must.
- Knowledge of using ChatGPT for ML is preferred.
- Familiarity with basic SQL for querying and Excel for data analysis is a must.
- Understanding of the basics of statistics, such as distributions, hypothesis testing, and sampling techniques.

Work Experience:
- At least 4 years of solving business problems through data analytics, data science, and modelling.
- At least 2 years of experience as a full-time Data Scientist.
- At least 3 ML model-building projects that were used in production by business or other clients.

Skills/Experience preferred but not compulsory: familiarity with using ChatGPT, LLMs, out-of-the-box models, etc. for data preparation and model building; Kaggle experience; familiarity with R.
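A minimal sketch of the kind of tabular workflow this outline describes: imputation, scaling, one-hot encoding, an ensemble model, and k-fold cross-validation with Pandas and scikit-learn. The data set and column names are synthetic and illustrative, not from the posting.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic tabular data: a blend of numerical and categorical columns.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "amount": rng.normal(100, 20, 500),
    "region": rng.choice(["north", "south", "east"], 500),
    "churned": rng.integers(0, 2, 500),
})
df.loc[df.sample(frac=0.1, random_state=0).index, "amount"] = np.nan  # inject missing values

# Data preparation: median imputation + scaling for numerics, one-hot for categoricals.
numeric = Pipeline([("impute", SimpleImputer(strategy="median")),
                    ("scale", StandardScaler())])
prep = ColumnTransformer([("num", numeric, ["amount"]),
                          ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"])])

# Ensemble model (bagging via Random Forest) chained after the preparation steps.
model = Pipeline([("prep", prep),
                  ("clf", RandomForestClassifier(n_estimators=200, random_state=0))])

# 5-fold cross-validation, as called for under the modelling concepts above.
scores = cross_val_score(model, df[["amount", "region"]], df["churned"], cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```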
Job Interface/Relationships:
- Internal: Work with different business teams to build predictive models for them.
- External: None.

Key Responsibilities and % Time Spent:
- Data preparation for modelling (data extraction, cleaning, joining, and transformation) - 35%
- Build ML/AI models for various business requirements - 35%
- Perform custom analytics to provide actionable insights to the business - 20%
- Assist the Function Head in data preparation and modelling tasks as required - 10%

Additional inputs that will not be considered for selection: familiarity with deep learning algorithms, image processing and classification, and text modelling using NLP techniques.
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Responsibilities:
- Design and build data cleansing and imputation, map data to a standard data model, transform it to satisfy business rules and statistical computations, and validate data content.
- Develop, modify, and maintain Python and Unix scripts, and complex SQL.
- Tune the performance of existing code, avoiding bottlenecks and improving throughput.
- Build end-to-end data flows from sources to fully curated and enhanced data sets.
- Develop automated Python jobs for ingesting data from various source systems.
- Provide technical expertise in the areas of architecture, design, and implementation.
- Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team.
- Write SQL queries for data validation.
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse (a minimal extract-clean-load sketch follows this posting).
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality.
- Optimize and tune ETL processes for performance and scalability.
- Develop and maintain documentation for ETL processes, data flows, and data mappings.
- Monitor and troubleshoot ETL processes to ensure data accuracy and availability.
- Implement data validation and error-handling mechanisms.
- Work with large data sets and ensure data integrity and consistency.

Skills: Python; ETL tools such as Informatica, Talend, SSIS, or similar; SQL and MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps and GitLab; experience with AWS Glue or Azure Data Factory.
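A minimal sketch of the extract, cleanse/impute, validate, and load pattern described above, using Python and Pandas. SQLite stands in for the source system and the warehouse; the table names, columns, and business rules are illustrative, not from the posting.

```python
import sqlite3
import pandas as pd

src = sqlite3.connect("source.db")
wh = sqlite3.connect("warehouse.db")

# Seed a tiny source table so the sketch runs end to end.
src.execute("CREATE TABLE IF NOT EXISTS raw_sales (id INTEGER, amount REAL, region TEXT)")
src.execute("DELETE FROM raw_sales")
src.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)",
                [(1, 120.0, " north"), (2, None, "South"), (2, None, "South")])
src.commit()

# Extract: pull raw rows from the source system.
raw = pd.read_sql_query("SELECT id, amount, region FROM raw_sales", src)

# Cleanse and impute: drop duplicate keys, fill missing amounts with the
# median, and normalize region codes to match the standard data model.
clean = raw.drop_duplicates(subset="id").copy()
clean["amount"] = clean["amount"].fillna(clean["amount"].median())
clean["region"] = clean["region"].str.strip().str.upper()

# Validate content: fail fast on a business-rule violation rather than load bad data.
assert clean["amount"].ge(0).all(), "negative amounts found"

# Load: write the curated set into the warehouse table.
clean.to_sql("curated_sales", wh, if_exists="replace", index=False)
```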
Posted 2 weeks ago
1.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Roles and Responsibilities:
- Develop AI solutions based on deep learning, neural networks, NLP, and computer vision to extract content from video and images.
- Develop and implement artificial intelligence models, with a primary focus on Generative AI.
- Prepare custom neural network architectures and train deep learning networks (a minimal training-loop sketch follows this posting).
- Integrate AI solutions with real-time applications and deploy them both on-premise and in the cloud (AWS, Azure).
- Lead the design and development of AI solutions, ensuring their alignment with the company's objectives.
- Collaborate with cross-functional teams to understand their needs and translate them into AI solutions.
- Perform data imputation and cleansing using industry-best statistical practices.
- Leverage cloud technologies to develop and deploy AI solutions and to enhance their efficiency and scalability.
- Conduct regular testing of AI models to ensure their accuracy and reliability.
- Troubleshoot and resolve any issues related to AI models or their implementation.
- Regularly report project status, challenges, and achievements to senior management.
- Ensure adherence to industry best practices and compliance with company policies in all AI-related activities.
- Working experience of building AI agents using available frameworks is required.

Qualifications - Must Have:
- Bachelor's degree in Engineering or Technology, preferably with a specialization in a related field.
- 1-6 years of experience in developing and implementing AI solutions, with a focus on Generative AI.
- Proficiency in deep learning, neural networks, NLP, computer vision, machine learning, and Generative AI.
- Strong understanding of statistics, ML methodologies, and the training of neural networks.
- Working experience of building AI agents using available frameworks.

Good to Have:
- Proficiency in using cloud technologies for the development and deployment of AI solutions.
- Hands-on experience building deep learning, neural network, NLP, computer vision, machine learning, and Generative AI solutions using cloud-native services.
- Familiarity with industry best practices and compliance requirements related to AI.
- Certification in AI or a related field.
- Ability to work independently as well as part of a team.
- Strong project management skills, with the ability to define project scope, goals, and deliverables.
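A minimal sketch of defining a custom network architecture and running a training loop, as the responsibilities above describe. The posting does not name a framework; PyTorch is assumed here, and the synthetic features, layer sizes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """A tiny custom architecture: two linear layers with a ReLU in between."""
    def __init__(self, in_dim: int = 16, hidden: int = 32, classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Synthetic stand-in for features extracted from images or video frames.
X = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))

model = SmallClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A bare-bones training loop: forward pass, loss, backprop, parameter update.
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```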
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles & Responsibilities Eligibility Minimum Qualifications Bachelor’s degree in computer science or a related field OR master’s degree in statistics, economics, business economics, econometrics, or operations research. 6-8 years of experience in the Analytics/Data Science domain. Proficiency in programming languages such as Python. Experience with Generative AI techniques and tools. Familiarity with ETL methods, data imputation, data cleaning, and outlier handling. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML services. Knowledge of databases and associated tools such as SQL. Technical Skills – Desirable Expertise in NLP and Generative AI concepts/methods/techniques like — Prompt design/engineering — Retrieval Augmented Generation (RAG), Corrective RAG and Knowledge Graph-based RAG using GPT-4o — Fine-tuning through LORA/QLORA — Multi-agentic frameworks for RAG — Reranker etc. for enhancing the plain-vanilla RAG — Evaluation frameworks like G-Eval etc. Strong understanding of Deep Learning methods and Machine Learning techniques including Ensemble methods, Support Vector Machines, and Natural Language Processing (NLP). Exposure to Big Data technologies like Hadoop, Hive, Spark. Experience with advanced reporting tools such as Tableau, Qlikview, or PowerBI. Specific Responsibilities Requirement Gathering: Translate business requirements into actionable analytical plans in collaboration with the team. Ensure alignment of analytical plans with the customer’s strategic objectives. Data Handling: Identify and leverage appropriate data sources to address business problems. Explore, diagnose, and resolve data discrepancies including ETL tasks, missing values, and outliers. Development and Execution: — Individually deliver projects, proof-of-concept (POC) initiatives from inception to completion. — Contribute to the development and refinement of technical and analytics architecture, ensuring it aligns with project and organizational goals. — Implement scalable and robust analytical frameworks and data pipelines to support advanced analytics and machine learning applications. — Coordinating with cross-functional teams to achieve project goals. — Delivery of production-ready models and solutions, meeting quality and performance standards. — Monitor success metrics to ensure high-quality output and make necessary adjustments. — Create and maintain documentation/reports. Innovation and Best Practices: Stay informed about new trends in Generative AI and integrate relevant advancements into our solutions. Implement novel applications of Generative AI algorithms and techniques in Python. Sample Projects GenAI-powered self-serve analytics solution for a global technology giant, that leverages the power of multi-agent framework and Azure OpenAI services to provide actionable insights, recommendations, and answers to tactical questions derived from web analytics data. GenAI bot for querying on textual documents (e.g., retail audit orientation, FAQ documents, research brief document etc.) 
of multinational dairy company and, and getting personalized responses in a natural and conversational way, based on the structured context of the user (like their personal details), along with the citations, so that one can effortlessly carry out their first-hand validation themselves GenAI bot for querying on tabular dataset (like monthly KPI data) of leading global event agency to understand, process natural language queries on the data and generate appropriate responses in textual, tabular and visual formats. GenAI-powered advanced information retrieval from structured data of a global technology leading organization TimesFM modelling for advanced time series forecasting for a global retail chain Knowledge-Graph-based GenAI solution for knowledge retrieval and semantic summarization for a leading global event agency GenAI-powered shopping assistant solution for big-box warehouse club retail stores GenAI solution using multi-agentic framework for travel-hospitality use case Input Governance and Response Governance in GenAI Solutions Development and implementation of evaluation frameworks for GenAI solutions/applications Training Foundational Models on new data using open-source LLM or SLM Experience 6-8 Years Skills Primary Skill: Data Science Sub Skill(s): Data Science Additional Skill(s): Data Science, Python (Data Science), GenAI Fundamentals About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru. Show more Show less
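A toy sketch of the retrieval step in a plain-vanilla RAG pipeline, in the spirit of the techniques listed above: index a few documents, retrieve the top-k most similar to a query, and assemble a grounded prompt. TF-IDF similarity stands in for a real embedding model, the final LLM call is omitted, and the documents and query are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Retail audit orientation: auditors must verify shelf placement weekly.",
    "FAQ: password resets are handled by the IT self-service portal.",
    "Research brief: Q3 focus groups covered dairy packaging preferences.",
]

# Index the document collection (a vector database would do this in production).
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

query = "How often is shelf placement checked?"
query_vector = vectorizer.transform([query])

# Retrieve the top-2 documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_k = scores.argsort()[::-1][:2]
context = "\n".join(docs[i] for i in top_k)

# The retrieved context would be passed to the LLM alongside the question,
# so answers stay grounded in the documents and can cite them.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```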
Posted 4 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu
Work from Office
Responsibilities:
- Design and build data cleansing and imputation, map data to a standard data model, transform it to satisfy business rules and statistical computations, and validate data content.
- Develop, modify, and maintain Python and Unix scripts, and complex SQL.
- Tune the performance of existing code, avoiding bottlenecks and improving throughput.
- Build end-to-end data flows from sources to fully curated and enhanced data sets.
- Develop automated Python jobs for ingesting data from various source systems.
- Provide technical expertise in the areas of architecture, design, and implementation.
- Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team.
- Write SQL queries for data validation (a small validation sketch follows this posting).
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse.
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality.
- Optimize and tune ETL processes for performance and scalability.
- Develop and maintain documentation for ETL processes, data flows, and data mappings.
- Monitor and troubleshoot ETL processes to ensure data accuracy and availability.
- Implement data validation and error-handling mechanisms.
- Work with large data sets and ensure data integrity and consistency.

Skills: Python; ETL tools such as Informatica, Talend, SSIS, or similar; SQL and MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps and GitLab; experience with AWS Glue or Azure Data Factory.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
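A small sketch of the "SQL queries for data validation" responsibility above: null, uniqueness, and range checks run from Python. SQLite stands in for the warehouse; the table, columns, and rules are illustrative.

```python
import sqlite3

# In-memory stand-in for a warehouse table, seeded with a few rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE curated_sales (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO curated_sales VALUES (?, ?, ?)",
                 [(1, 10.0, "NORTH"), (2, None, "SOUTH"), (2, 5.0, "EAST")])

checks = {
    # Null check: required columns must be populated.
    "null_amounts": "SELECT COUNT(*) FROM curated_sales WHERE amount IS NULL",
    # Uniqueness check: the business key must not repeat.
    "duplicate_ids": """SELECT COUNT(*) FROM (
                            SELECT id FROM curated_sales
                            GROUP BY id HAVING COUNT(*) > 1)""",
    # Range check: amounts must be non-negative.
    "negative_amounts": "SELECT COUNT(*) FROM curated_sales WHERE amount < 0",
}

# Each query counts violating rows; zero means the check passed.
for name, sql in checks.items():
    count = conn.execute(sql).fetchone()[0]
    status = "OK" if count == 0 else f"FAILED ({count} rows)"
    print(f"{name}: {status}")
```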
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
Responsibilities:
- Design and build data cleansing and imputation, map data to a standard data model, transform it to satisfy business rules and statistical computations, and validate data content.
- Develop, modify, and maintain Python and Unix scripts, and complex SQL.
- Tune the performance of existing code, avoiding bottlenecks and improving throughput.
- Build end-to-end data flows from sources to fully curated and enhanced data sets.
- Develop automated Python jobs for ingesting data from various source systems.
- Provide technical expertise in the areas of architecture, design, and implementation.
- Work with team members to create useful reports and dashboards that provide insight, improve/automate processes, or otherwise add value to the team.
- Write SQL queries for data validation.
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into the data warehouse.
- Collaborate with data architects, analysts, and other stakeholders to understand data requirements and ensure quality.
- Optimize and tune ETL processes for performance and scalability.
- Develop and maintain documentation for ETL processes, data flows, and data mappings.
- Monitor and troubleshoot ETL processes to ensure data accuracy and availability.
- Implement data validation and error-handling mechanisms (an error-handling sketch follows this posting).
- Work with large data sets and ensure data integrity and consistency.

Skills: Python; ETL tools such as Informatica, Talend, SSIS, or similar; SQL and MySQL; expertise in Oracle, SQL Server, and Teradata; DevOps and GitLab; experience with AWS Glue or Azure Data Factory.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
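A minimal sketch of an error-handling mechanism for an ETL step, per the responsibility above: retry transient failures with exponential backoff and log anything that still fails. The load_batch function is hypothetical, standing in for a real warehouse write.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def load_batch(batch: list[dict]) -> None:
    """Hypothetical loader; a real job would write to the warehouse here."""
    if not batch:
        raise ValueError("empty batch")
    log.info("loaded %d rows", len(batch))

def load_with_retries(batch: list[dict], attempts: int = 3) -> bool:
    """Retry a load up to `attempts` times, backing off between tries."""
    for attempt in range(1, attempts + 1):
        try:
            load_batch(batch)
            return True
        except Exception:
            # Log the full traceback so failed loads can be diagnosed later.
            log.exception("load failed (attempt %d/%d)", attempt, attempts)
            if attempt < attempts:
                time.sleep(2 ** attempt)  # exponential backoff before retrying
    return False

if __name__ == "__main__":
    load_with_retries([{"id": 1, "amount": 10.0}])
```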
Posted 1 month ago