2.0 years
1 - 3 Lacs
Surat
On-site
Experience: 2+ years

Key Responsibilities:
- Design, develop, and deploy Machine Learning / Artificial Intelligence models to solve real-world problems in our software products.
- Collaborate with product managers, developers, and data engineers to define AI project goals and requirements.
- Clean, process, and analyze large datasets to extract meaningful patterns and insights.
- Implement and fine-tune models using frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Develop APIs and services to integrate AI models with production environments (see the sketch after this listing).
- Monitor model performance and retrain as needed to maintain accuracy and efficiency.
- Stay updated with the latest advancements in AI/ML and evaluate their applicability to our projects.

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
- Strong understanding of machine learning algorithms (supervised, unsupervised, reinforcement learning).
- Experience with Python and ML libraries like NumPy, pandas, TensorFlow, Keras, PyTorch, Scikit-learn.
- Familiarity with NLP, computer vision, or time-series analysis is a plus.
- Experience with model deployment tools and cloud platforms (AWS/GCP/Azure) preferred.
- Knowledge of software engineering practices including version control (Git), testing, and CI/CD.

Additional Qualifications:
- Prior experience working in a product-based or tech-driven startup environment.
- Exposure to deep learning, recommendation systems, or predictive analytics.
- Understanding of ethical AI practices and model interpretability.

Job Type: Full-time
Pay: ₹12,059.95 - ₹30,307.34 per month
Schedule: Day shift
Work Location: In person
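As referenced in the API responsibility above, here is a minimal, hedged sketch of serving a trained scikit-learn model behind a small Flask endpoint. The model file name and feature names are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch: exposing a trained model as a JSON prediction API.
# Assumes a scikit-learn model was previously saved as "model.joblib"
# (hypothetical file) and was trained on the feature names listed below.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained model

FEATURES = ["feature_a", "feature_b", "feature_c"]  # placeholder names

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    # Build the feature row in the same order used during training.
    row = [[payload[name] for name in FEATURES]]
    prediction = model.predict(row)[0]
    return jsonify({"prediction": str(prediction)})

if __name__ == "__main__":
    app.run(port=5000)
```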
Posted 3 days ago
0 years
0 Lacs
Vadodara
On-site
Join us to build cutting-edge apps with Angular & PHP. Grow, innovate, and code with a passionate team.

AI/ML Intern / Fresher – Internship + Job Opportunity
Company: Logical Wings Infoweb Pvt. Ltd.
Location: Vadodara (On-site only)
Type: Internship with Pre-placement Offer Opportunity
Domain: Enterprise Software Solutions | AI-Powered Applications

Role Overview:
We are seeking AI/ML Interns or Freshers who are passionate about Artificial Intelligence and Machine Learning and excited to apply their knowledge to real-world enterprise problems. You’ll gain hands-on experience, work on live projects, and have a pathway to a full-time role based on your performance.

What You’ll Do:
- Assist in designing and developing machine learning models and AI-based solutions
- Work on data preprocessing, feature engineering, and model training/evaluation (see the sketch after this listing)
- Collaborate with development teams to integrate AI modules into enterprise software
- Research and experiment with new algorithms and frameworks
- Help build tools for data visualization, analysis, and insights

Skills & Qualifications:
- Solid understanding of Python and key libraries (NumPy, Pandas, Scikit-learn, etc.)
- Exposure to Machine Learning and Deep Learning concepts
- Familiarity with frameworks like Flask, TensorFlow, Keras, or PyTorch is a plus
- Additional/plus skills: working knowledge of web-related Python with the Django framework and MySQL
- Basic understanding of data structures and algorithms
- Curiosity, a problem-solving mindset, and a willingness to learn

Eligibility:
- Final-semester students pursuing a degree in Computer Science / Data Science / Engineering, OR
- Recent graduates with a background or strong interest in AI/ML (Vadodara candidates only)

Why Join Us?
- Work on cutting-edge AI solutions for enterprise clients
- Mentorship from experienced AI professionals
- Opportunity to convert to a full-time role post-internship
- A collaborative and innovation-driven environment

How to Apply:
Send your resume to: Hr@logicalwings.com
Visit us at: www.logicalwings.com
Note: Applications via phone calls will not be entertained.

If you’re driven by data, algorithms, and the idea of solving real-world problems through AI — Logical Wings is your launchpad!
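As referenced in the training/evaluation bullet above, a minimal sketch of the preprocessing, training, and evaluation loop such an intern would assist with, using scikit-learn's bundled iris dataset. This is purely illustrative and not part of the posting.

```python
# Minimal sketch: preprocessing, model training, and evaluation with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scaling and the model live in one pipeline so the same transformation
# is applied consistently at train and test time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X_train, y_train, cv=5)
pipeline.fit(X_train, y_train)
print("CV accuracy:", round(scores.mean(), 3))
print("Test accuracy:", round(pipeline.score(X_test, y_test), 3))
```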
Posted 3 days ago
0 years
0 Lacs
India
Remote
Data Science Intern
Location: Remote
Duration: 2-6 months
Type: Unpaid Internship

About Collegepur
Collegepur is an innovative platform dedicated to providing students with comprehensive information about colleges, career opportunities, and educational resources. We are building a dynamic team of talented individuals passionate about data-driven decision-making.

Job Summary
We are seeking a highly motivated Data Science Intern to join our team. This role involves working on data collection, web scraping, analysis, visualization, and machine learning to derive meaningful insights that enhance our platform’s functionality and user experience.

Responsibilities:
- Web Scraping: Collect and extract data from websites using tools like BeautifulSoup, Scrapy, or Selenium (see the sketch after this listing).
- Data Preprocessing: Clean, transform, and structure raw data for analysis.
- Exploratory Data Analysis (EDA): Identify trends and insights from collected data.
- Machine Learning: Develop predictive models for data-driven decision-making.
- Data Visualization: Create dashboards and reports using tools like Matplotlib, Seaborn, Power BI, or Tableau.
- Database Management: Work with structured and unstructured data, ensuring quality and consistency.
- Collaboration: Work with cross-functional teams to integrate data solutions into our platform.
- Documentation: Maintain records of methodologies, findings, and workflows.

Requirements:
- Currently pursuing or recently completed a degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Experience in web scraping using BeautifulSoup, Scrapy, or Selenium.
- Proficiency in Python/R and libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
- Familiarity with SQL and database management.
- Strong understanding of data visualization tools.
- Knowledge of APIs and cloud platforms (AWS, GCP, or Azure) is a plus.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.

Perks and Benefits:
- Remote work with flexible hours.
- Certificate of completion and Letter of Recommendation.
- Performance-based LinkedIn recommendations.
- Opportunity to work on real-world projects and enhance your portfolio.

If you are passionate about data science and web scraping and eager to gain hands-on experience, we encourage you to apply! (recruitment@collegepur.com)
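As referenced in the Web Scraping bullet above, a minimal sketch of the scrape-then-analyze workflow this listing describes, using requests, BeautifulSoup, and pandas. The URL and HTML structure are hypothetical placeholders.

```python
# Minimal sketch: scrape a (hypothetical) listing page and summarize it with pandas.
import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/colleges"  # placeholder URL

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

# Assume each college is rendered as <div class="college"> with name and city tags.
records = []
for card in soup.find_all("div", class_="college"):
    records.append({
        "name": card.find("h2").get_text(strip=True),
        "city": card.find("span", class_="city").get_text(strip=True),
    })

df = pd.DataFrame(records)
print(df.head())
print(df["city"].value_counts())  # simple exploratory summary
```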
Posted 3 days ago
0 years
0 Lacs
Rajkot, Gujarat, India
On-site
Are you passionate about Artificial Intelligence and Machine Learning? Start your AI/ML career with hands-on learning, real projects, and expert guidance at TechXperts!

Skills Required:
Technical Knowledge:
• Basic understanding of Python and popular ML libraries like scikit-learn, pandas, NumPy
• Familiarity with Machine Learning algorithms (Regression, Classification, Clustering, etc.)
• Knowledge of Data Preprocessing, Model Training, and Evaluation Techniques
• Understanding of AI concepts such as Deep Learning, Computer Vision, or NLP is a plus
• Familiarity with tools like Jupyter Notebook, Google Colab, or TensorFlow/Keras is an advantage

Soft Skills:
• Curiosity to explore and learn new AI/ML techniques
• Good problem-solving and analytical thinking
• Ability to work independently and in a team
• Clear communication and documentation skills

What You’ll Do:
• Assist in building and training machine learning models
• Support data collection, cleaning, and preprocessing activities
• Work on AI-driven features in real-time applications
• Collaborate with senior developers to implement ML algorithms
• Research and experiment with AI tools and frameworks

Why Join TechXperts?
• Learn by working on live AI/ML projects
• Supportive mentorship from experienced developers
• Exposure to the latest tools and techniques
• Friendly work culture and growth opportunities
Posted 3 days ago
12.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Description
Automation Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+

Data Architect
The Data Architect is responsible for designing and implementing data architecture for multiple projects and also builds strategies for data governance.

Position Summary
- 12-15 years of experience in a similar profile with a strong service delivery background
- Experience as a Data Architect with a focus on Spark and Data Lake technologies
- Experience in Azure Synapse Analytics
- Proficiency in Apache Spark for large-scale data processing
- Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services
- Strong understanding of data modeling, ETL processes, and data warehousing principles
- Ability to implement a data governance framework with Unity Catalog
- Knowledge of designing scalable streaming data pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro
- Hands-on experience in Python and relevant libraries such as PySpark, NumPy, etc.
- Knowledge of Machine Learning pipelines, GenAI, and LLMs is a plus
- Excellent analytical, problem-solving, and technical leadership skills
- Experience in integration with business intelligence tools such as Power BI
- Effective communication and collaboration abilities
- Excellent interpersonal skills and a collaborative management style
- Ability to own and delegate responsibilities effectively
- Ability to analyse and suggest solutions
- Strong command of verbal and written English

Essential Roles and Responsibilities
- Work as a Data Architect, able to design and implement data architecture for projects involving complex data such as Big Data and data lakes
- Work with customers to define strategy for data architecture and data governance
- Guide the team to implement solutions around data engineering
- Proactively identify risks, communicate them to stakeholders, and develop strategies to mitigate them
- Build best practices to enable faster service delivery
- Build reusable components to reduce cost
- Build scalable and cost-effective architecture

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 3 days ago
12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Description
Automation Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+

Data Architect
The Data Architect is responsible for designing and implementing data architecture for multiple projects and also builds strategies for data governance.

Position Summary
- 12-15 years of experience in a similar profile with a strong service delivery background
- Experience as a Data Architect with a focus on Spark and Data Lake technologies
- Experience in Azure Synapse Analytics
- Proficiency in Apache Spark for large-scale data processing
- Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services
- Strong understanding of data modeling, ETL processes, and data warehousing principles
- Ability to implement a data governance framework with Unity Catalog
- Knowledge of designing scalable streaming data pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro
- Hands-on experience in Python and relevant libraries such as PySpark, NumPy, etc.
- Knowledge of Machine Learning pipelines, GenAI, and LLMs is a plus
- Excellent analytical, problem-solving, and technical leadership skills
- Experience in integration with business intelligence tools such as Power BI
- Effective communication and collaboration abilities
- Excellent interpersonal skills and a collaborative management style
- Ability to own and delegate responsibilities effectively
- Ability to analyse and suggest solutions
- Strong command of verbal and written English

Essential Roles and Responsibilities
- Work as a Data Architect, able to design and implement data architecture for projects involving complex data such as Big Data and data lakes
- Work with customers to define strategy for data architecture and data governance
- Guide the team to implement solutions around data engineering
- Proactively identify risks, communicate them to stakeholders, and develop strategies to mitigate them
- Build best practices to enable faster service delivery
- Build reusable components to reduce cost
- Build scalable and cost-effective architecture

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 3 days ago
10.0 - 15.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Req ID: 332901

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Intelligence Advisor to join our team in Pune, Mahārāshtra (IN-MH), India (IN).

The Business Intelligence Advisor will utilize analytical, statistical, and programming skills to collect, analyze, and interpret large-volume data sets and use this information to develop data-driven solutions for addressing difficult business challenges. The role will monitor, analyze and report business performance, financial results and other KPIs defined by the business.

Responsibilities
- Analyze spending patterns, identify cost-savings opportunities, and provide actionable insights to improve decision making.
- Collaborate with cross-functional teams, interpret data trends, and develop dashboards or reports to communicate findings.
- Understand user requirements and translate complex data into user-friendly reports while ensuring data accuracy.
- Create visually compelling and insightful reports and interactive dashboards by connecting Tableau or Power BI to SQL and Snowflake.
- Lead or participate in multiple analytical projects or ad-hoc analysis by completing and updating project documentation, managing project scope, adjusting schedules when necessary, determining daily priorities, and ensuring efficient and on-time delivery of project tasks and milestones.
- Perform exploratory data analysis (EDA) to uncover trends, patterns, and insights.
- Apply statistical techniques and advanced analytical methods to solve business problems.
- Utilize Python libraries such as Pandas, NumPy, and Matplotlib for data analysis, visualization, and modeling.
- Automate data processing and analysis workflows using Python.
- Perform spend analysis by gathering, cleansing, classifying, and transforming procurement spend data, providing spend visibility to facilitate category and spend management (see the sketch after this listing).
- Perform data enrichment/gap fill, standardization, normalization, and categorization of spend data via research through different sources such as the internet, specific websites, databases, etc.
- Perform data quality checks and corrections; process, clean, and verify the integrity of data used for analysis.
- Stay updated on industry trends and optimize BI tools for efficient performance.
- Write optimized SQL and Snowflake queries for data extraction as well as integration with other applications.
- Design workflows in Alteryx Designer to develop models using data modeling techniques as per requirements.
- Create automated anomaly detection systems and constantly track their performance.
- Create and maintain documentation of the architecture, data models and maintenance activities.
- Drive continuous process improvement and efficiency gains using automation or other process standardization techniques.

Technical Skills & Competencies
- Must have experience working on business analytics or spend analytics projects as well as handling day-to-day operational requests from the business.
- Ability to successfully manage multiple tasks at any given point; strong relationship-building and communication skills.
- High proficiency with Microsoft Excel.
- Visualization capabilities: Power BI / Tableau.
- Knowledge of the Alteryx Designer tool and Snowflake is preferred.
- Experience in requirements gathering and analysis and defining the implementation roadmap.
- Ability to work remotely with key stakeholders and business partners.
- Project coordination and management skills are preferred.
- Self-motivated with a high degree of learning agility and a team player.

Experience & Education
- Bachelor’s degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science; advanced degree preferred.
- Minimum 10-15 years of work experience in the fields of data management and analysis.
- At least 5 years of work experience in procurement data management, spend analysis, and RFP/RFQ/quote analysis.
- Demonstrated experience with data architecture, data integration/ETL, data warehousing, and/or business intelligence deployed in a complex environment.
- Demonstrated experience in Python programming for data manipulation, analysis, and visualization.
- Prior experience working with reporting/visualization tools such as Power BI and Tableau.
- Excellent presentation and communication (written and verbal) skills.
- Good research and logical skills.
- Strong data collection, consolidation, and cleansing skills.
- Ability to scope, plan and execute assigned projects in a fast-paced environment.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
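As referenced in the spend-analysis responsibility above, a minimal pandas sketch of the kind of spend classification and visibility reporting this listing describes. The column names, suppliers, and categories are hypothetical; in practice the data would come from SQL/Snowflake.

```python
# Minimal sketch: summarize procurement spend by category to support spend visibility.
import pandas as pd

# Hypothetical spend records.
spend = pd.DataFrame({
    "supplier": ["Acme", "Globex", "Acme", "Initech", "Globex"],
    "category": ["IT Hardware", "Facilities", "IT Hardware", "Consulting", "Facilities"],
    "amount_usd": [120000, 45000, 80000, 150000, 30000],
})

# Spend visibility: total and share of spend per category, largest first.
by_category = (
    spend.groupby("category", as_index=False)["amount_usd"].sum()
         .sort_values("amount_usd", ascending=False)
)
by_category["share_pct"] = (
    100 * by_category["amount_usd"] / by_category["amount_usd"].sum()
).round(1)

print(by_category)
```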
Posted 3 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description:
We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 4 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

Key Responsibilities:
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay updated with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Strong understanding of machine learning, deep learning and Generative AI concepts.

Preferred Skills:
- Experience in machine learning techniques such as Regression, Classification, Predictive Modeling, Clustering, the Deep Learning stack, and NLP using Python.
- Strong knowledge and experience in Generative AI / LLM-based development.
- Strong experience working with key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex).
- Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure.
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
- Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets (see the sketch after this listing).
- Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
- Experience with RAG concepts and fundamentals (VectorDBs, AWS OpenSearch, semantic search, etc.); expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
- Knowledge of training and fine-tuning Foundation Models (Anthropic, Claude, Mistral, etc.), including multimodal inputs and outputs.
- Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and related frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, Quicksight).
- Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
- Experience with version control systems (e.g., Git, CodeCommit).

Good-to-have Skills:
- Knowledge and experience in building knowledge graphs in production.
- Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
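As referenced in the chunking and retrieval bullets above, a minimal sketch of text chunking plus a simple retrieval step. TF-IDF cosine similarity is used here only as a stand-in for the embedding-based semantic search the posting mentions; the document and query are hypothetical.

```python
# Minimal sketch: chunk a document, then retrieve the most relevant chunk for a query.
# TF-IDF similarity stands in for embedding-based semantic search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-based chunks."""
    words = text.split()
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks

document = (
    "Retrieval-augmented generation combines a retriever with a generative model. "
    "The retriever finds relevant chunks from a knowledge base, and the generator "
    "conditions its answer on those chunks to reduce hallucination."
)
chunks = chunk_text(document, chunk_size=15, overlap=5)

vectorizer = TfidfVectorizer()
chunk_matrix = vectorizer.fit_transform(chunks)

query = "How does RAG reduce hallucination?"
scores = cosine_similarity(vectorizer.transform([query]), chunk_matrix)[0]
print("Best chunk:", chunks[scores.argmax()])
```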
Posted 3 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Solution Architect (India)
Work Mode: Remote / Hybrid
Required exp: 10+ years
Shift timing: Minimum 4 hours overlap required with US time

Role Summary:
The Solution Architect is responsible for designing robust, scalable, and high-performance AI and data-driven systems that align with enterprise goals. This role serves as a critical technical leader, bridging AI/ML, data engineering, ETL, cloud architecture, and application development. The ideal candidate will have deep experience across traditional and generative AI, including Retrieval-Augmented Generation (RAG) and agentic AI systems, along with strong fundamentals in data science, modern cloud platforms, and full-stack integration.

Key Responsibilities:
- Design and own the end-to-end architecture of intelligent systems including data ingestion (ETL/ELT), transformation, storage, modeling, inferencing, and reporting.
- Architect GenAI-powered applications using LLMs, vector databases, RAG pipelines, and agentic workflows; integrate with enterprise knowledge graphs and document repositories.
- Lead the design and deployment of agentic AI systems that can plan, reason, and interact autonomously within business workflows.
- Collaborate with cross-functional teams including data scientists, data engineers, MLOps, and frontend/backend developers to deliver scalable and maintainable solutions.
- Define patterns and best practices for traditional ML and GenAI projects, covering model governance, explainability, reusability, and lifecycle management.
- Ensure seamless integration of ML/AI systems via RESTful APIs with frontend interfaces (e.g., dashboards, portals) and backend systems (e.g., CRMs, ERPs).
- Architect multi-cloud or hybrid cloud AI solutions, leveraging services from AWS, Azure, or GCP for scalable compute, storage, orchestration, and deployment.
- Provide technical oversight for data pipelines (batch and real-time), data lakes, and ETL frameworks, ensuring secure and governed data movement.
- Conduct architecture reviews, mentor engineering teams, and drive design standards for AI/ML, data engineering, and software integration.

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in software architecture, including at least 4 years in AI/ML-focused roles.

Required Skills:
- Expertise in machine learning (regression, classification, clustering), deep learning (CNNs, RNNs, transformers), and NLP.
- Experience with Generative AI frameworks and services (e.g., OpenAI, LangChain, Azure OpenAI, Amazon Bedrock).
- Strong hands-on Python skills, with experience in libraries such as Scikit-learn, Pandas, NumPy, TensorFlow, or PyTorch.
- Proficiency in RESTful API development and integration with frontend components (React, Angular, or similar is a plus).
- Deep experience in ETL/ELT processes using tools like Apache Airflow, Azure Data Factory, or AWS Glue.
- Strong knowledge of cloud-native architecture and AI/ML services on at least one cloud (AWS, Azure, or GCP).
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and semantic search patterns.
- Experience in deploying and managing ML models with MLOps frameworks (MLflow, Kubeflow).
- Understanding of microservices architecture, API gateways, and container orchestration (Docker, Kubernetes).
- Frontend experience is good to have.
Posted 3 days ago
1.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About BNP Paribas Group
BNP Paribas is a top-ranking bank in Europe with an international profile. It operates in 71 countries and has almost 199,000 employees. The Group ranks highly in its three core areas of activity: Domestic Markets and International Financial Services (whose retail banking networks and financial services are grouped together under Retail Banking & Services) and Corporate & Institutional Banking, centred on corporate and institutional clients. The Group helps all of its clients (retail, associations, businesses, SMEs, large corporates and institutional) to implement their projects by providing them with services in financing, investment, savings and protection. In its Corporate & Institutional Banking and International Financial Services activities, BNP Paribas enjoys leading positions in Europe, a strong presence in the Americas and a solid and fast-growing network in the Asia/Pacific region.

About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, a leading bank in Europe with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 6000 employees to provide support and develop best-in-class solutions.

About Business Line/Function
GM Data & AI Lab leverages the power of Machine Learning and Deep Learning to drive innovation in various business lines. Our primary goal is to harness the potential of vast amounts of structured and unstructured data to improve our services and provide value. Today we are a team of around 40+ Data Scientists based in Paris, London, Frankfurt, Lisbon, New York, Singapore and Mumbai.

Job Title: Data Scientist (Tabular & Text)
Department: Front Office Support
Location: Mumbai
Business Line / Function: Global Markets – Data & AI Lab
Number of Direct Reports: NA
Directorship / Registration: NA

Position Purpose
Your work will span multiple areas, including predictive modelling, automation and process optimization. We use AI to discover patterns, classify information, and predict likelihoods. Our team works on building, refining, testing, and deploying these models to support various business use cases, ultimately driving business value and innovation. As a Data Scientist on our team, you can expect to work on challenging projects, collaborate with stakeholders to identify business problems, and have the opportunity to learn and grow with our team. A typical day may involve working on model development, meeting with stakeholders to discuss project requirements/updates, and brainstorming/debugging with colleagues on various technical aspects. At the Lab, we're passionate about staying at the forefront of AI research, bridging the gap between research & industry to drive innovation and to make a real impact on our businesses.

Responsibilities
- Develop and maintain AI models from inception to deployment, including data collection, analysis, feature engineering, model development, evaluation, and monitoring.
- Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
- Work with expert colleagues and business representatives to examine the results and keep models grounded in reality.
- Document each step of the development and inform decision makers by presenting them with options and results.
- Ensure the integrity and security of data.
- Provide support for production models delivered by the Mumbai team, and potentially for other models, across Asian/EU/US time zones.

Technical & Behavioral Competencies
Qualifications:
- Bachelor's / Master's / PhD degree in Computer Science / Data Science / Mathematics / Statistics / a relevant STEM field.
- Knowledge of key concepts in Statistics and Mathematics such as statistical methods for machine learning, probability theory and linear algebra.
- Experience with Machine Learning & Deep Learning concepts including data representations, neural network architectures, and custom loss functions.
- Proven track record of building AI models from scratch or fine-tuning large models for tabular and/or textual data.
- Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, transformers, langchain).
- Ability to write clear and concise code in Python.
- Intellectually curious and willing to learn challenging concepts daily.
- Knowledge of current Machine Learning / Artificial Intelligence literature.

Skills Referential
Behavioural Skills:
- Ability to collaborate / Teamwork
- Critical thinking
- Communication skills - oral & written
- Attention to detail / rigor
Transversal Skills:
- Analytical Ability
Education Level: Bachelor Degree or equivalent
Experience Level: At least 1 year
Posted 3 days ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About BNP Paribas Group
BNP Paribas is a top-ranking bank in Europe with an international profile. It operates in 71 countries and has almost 199,000 employees. The Group ranks highly in its three core areas of activity: Domestic Markets and International Financial Services (whose retail banking networks and financial services are grouped together under Retail Banking & Services) and Corporate & Institutional Banking, centred on corporate and institutional clients. The Group helps all of its clients (retail, associations, businesses, SMEs, large corporates and institutional) to implement their projects by providing them with services in financing, investment, savings and protection. In its Corporate & Institutional Banking and International Financial Services activities, BNP Paribas enjoys leading positions in Europe, a strong presence in the Americas and a solid and fast-growing network in the Asia/Pacific region.

About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, a leading bank in Europe with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 6000 employees to provide support and develop best-in-class solutions.

About Business Line/Function
GM Data & AI Lab leverages the power of Machine Learning and Deep Learning to drive innovation in various business lines. Our primary goal is to harness the potential of vast amounts of structured and unstructured data to improve our services and provide value. Today we are a team of around 40+ Data Scientists based in Paris, London, Frankfurt, Lisbon, New York, Singapore and Mumbai.

Job Title: Data Scientist (Quant Research)
Department: Front Office Support
Location: Mumbai
Business Line / Function: Global Markets – Data & AI Lab
Number of Direct Reports: NA
Directorship / Registration: NA

Position Purpose
Your work will center on designing and deploying AI solutions for time series forecasting, financial modeling, anomaly detection and other quant-related use cases. We use AI to discover patterns, classify information, and predict likelihoods. Our team works on building, refining, testing, and deploying these models to support various business use cases, ultimately driving business value and innovation. As a Data Scientist on our team, you can expect to work on challenging projects, collaborate with stakeholders to identify business problems, and have the opportunity to learn and grow with our team. A typical day may involve working on model development, meeting with stakeholders to discuss project requirements/updates, and brainstorming/debugging with colleagues on various technical aspects. At the Lab, we're passionate about staying at the forefront of AI research, bridging the gap between research & industry to drive innovation and to make a real impact on our businesses.

Responsibilities
- Develop and maintain AI models on time series and financial data for predictive modelling, including data collection, analysis, feature engineering, model development, evaluation, backtesting and monitoring (see the sketch after this listing).
- Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
- Work with expert colleagues, quants and business representatives to examine the results and keep models grounded in reality.
- Document each step of the development and inform decision makers by presenting them with options and results.
- Ensure the integrity and security of data.
- Provide support for production models delivered by the Mumbai team, and potentially for other models, across Asian/EU/US time zones.

Technical & Behavioral Competencies
- Bachelor’s or Master’s degree in a numerate subject with an understanding of economics and markets (e.g., Economics with a specialty in Econometrics, Finance, Computer Science, Applied Maths, Engineering, Physics).
- Knowledge of key concepts in Statistics and Mathematics such as statistical methods for machine learning, probability theory and linear algebra.
- Knowledge of Monte Carlo simulations, Bayesian modelling and causal inference.
- Experience with Machine Learning & Deep Learning concepts including data representations, neural network architectures, and custom loss functions.
- Proven track record of building AI models on time-series and financial data.
- Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, PyMC, statsmodels).
- Ability to write clear and concise code in Python.
- Intellectually curious and willing to learn challenging concepts daily.

Skills Referential
Behavioural Skills:
- Ability to collaborate / Teamwork
- Critical thinking
- Communication skills - oral & written
- Attention to detail / rigor
Transversal Skills:
- Analytical Ability
Education Level: Bachelor Degree or equivalent
Experience Level: At least 2 years
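As referenced in the responsibilities above, a minimal sketch of a walk-forward (expanding-window) backtest for a simple lag-based forecaster, the kind of evaluation loop this listing describes. The synthetic series and lag features are purely illustrative.

```python
# Minimal sketch: expanding-window backtest of a lag-based forecaster.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic daily series with drift plus noise (stand-in for a financial series).
y = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 300)))

# Lag features: predict y[t] from the three previous observations.
df = pd.DataFrame({
    "y": y, "lag1": y.shift(1), "lag2": y.shift(2), "lag3": y.shift(3)
}).dropna()

errors = []
for split in range(200, len(df)):          # walk forward one step at a time
    train, test = df.iloc[:split], df.iloc[split:split + 1]
    model = LinearRegression().fit(train[["lag1", "lag2", "lag3"]], train["y"])
    pred = model.predict(test[["lag1", "lag2", "lag3"]])[0]
    errors.append(abs(pred - test["y"].iloc[0]))

print("Out-of-sample MAE:", round(float(np.mean(errors)), 3))
```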
Posted 3 days ago
0.0 years
0 - 0 Lacs
Thiruvananthapuram, Kerala
On-site
Data Science and AI Developer

Job Description:
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

Key Responsibilities:
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

Requirements:
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab
HTML/CSS/JavaScript/React JS: proficiency in these core web development technologies is a must.
Python Django Expertise: in-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
Theming: proven experience in designing and implementing custom themes for Python websites.
Responsive Design: strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must know how to connect the front end with data science models, and how to connect data science output back to the front end.

Benefits:
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
Posted 3 days ago
0 years
0 Lacs
India
Remote
🤖 Data Science Intern – Remote | Real Projects, Real Skills, Real Impact
📍 Location: Remote / Virtual
💼 Type: Internship (Unpaid)
🎁 Perks: Certificate after Completion || Letter of Recommendation (6 Months)
🕒 Schedule: Flexible (5–7 hours/week)

Are you passionate about data, AI, and problem-solving? Join Skillfied Mentor and step into the world of data science with real projects, hands-on mentoring, and practical tools. This virtual internship is designed for students and fresh graduates who want to build real experience working on machine learning models, data pipelines, and predictive analytics.

🔧 What You’ll Do:
- Clean and prepare data for analysis and modeling
- Build and train basic machine learning models using Python
- Work on tools like Pandas, NumPy, Scikit-Learn, Jupyter
- Present findings in visual formats using Matplotlib/Seaborn
- Collaborate with peers and mentors during project reviews

🎓 What You’ll Gain:
✅ Full Python Course included during internship
✅ Hands-on experience with real-world ML and DS projects
✅ Internship Certificate + LOR (6 Months)
✅ Projects that enhance your resume and portfolio
✅ Work with tools & libraries used in the industry
✅ Fully remote – manage your schedule (5–7 hrs/week)

🗓️ Application Deadline: 1st August 2025
👉 Apply now and launch your Data Science journey with Skillfied Mentor!
Posted 3 days ago
8.0 years
0 Lacs
India
Remote
Quant Engineer
Location: Bangalore (Remote)
Full-time

Quant Engineer Job Description:
- Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus.
- Good understanding of maths and finance, as the role interacts with quant devs, analysts and traders. Familiarity with e.g. PnL, greeks, volatility, partial derivatives, the normal distribution, etc. (see the sketch after this listing).
- Financial and/or trading exposure is nice to have, particularly energy commodities.
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring and backtesting are in place.
- Translate trader or quant analyst needs into software product requirements.
- Prototype and implement data pipelines.
- Coordinate closely with analysts and quants during development of models, acting as technical support and coach.
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
- Transform proofs of concept into a larger deployable product in Shell and outside.
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities.
- Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both technical and non-technical audiences.
- Mentor and coach other teammates who are upskilling in Quant Engineering.

Professional Qualifications & Skills
Educational Qualification:
- Graduation / postgraduation / PhD with 8+ years' work experience as a software developer / data scientist.
- Degree in STEM, computer science, engineering, mathematics, or a relevant field of applied mathematics.
- Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment.

Required Skills:
- Expert in core Python with the Python scientific stack / ecosystem (incl. pandas, numpy, scipy, stats), and a second strongly typed language (e.g., C#, C++, Rust or Java).
- Expert in application design, security, release, testing and packaging.
- Mastery of SQL / NoSQL databases and data pipeline orchestration tools.
- Mastery of concurrent/distributed programming and performance optimisation methods.
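As referenced in the finance-familiarity bullet above (PnL, greeks, volatility, normal distribution), a minimal sketch of the standard Black-Scholes call price and delta using SciPy's normal CDF. The inputs are hypothetical.

```python
# Minimal sketch: Black-Scholes price and delta (a "greek") for a European call option.
from math import exp, log, sqrt

from scipy.stats import norm

def bs_call_price_and_delta(spot, strike, rate, vol, t_years):
    """Return (call_price, delta) under Black-Scholes with no dividends."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    price = spot * norm.cdf(d1) - strike * exp(-rate * t_years) * norm.cdf(d2)
    delta = norm.cdf(d1)  # sensitivity of the option price to the spot price
    return price, delta

# Hypothetical inputs: spot 100, strike 105, 5% rate, 20% vol, 6 months to expiry.
price, delta = bs_call_price_and_delta(100.0, 105.0, 0.05, 0.20, 0.5)
print(f"Call price: {price:.2f}, delta: {delta:.3f}")
```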
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Mohali district, India
Remote
Job Description: SDE-II – Python Developer

Job Title: SDE-II – Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time

Job Summary
We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions.

Key Responsibilities
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI (see the sketch after this listing).
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Utilize data handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.

Required Qualifications
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command over at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfortable working in Linux environments.

Must-Have Skills
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Company Overview
We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you’ll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.

Benefits and Perks
• Competitive Salary: Earn up to ₹6–10 LPA based on skills and experience.
• Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance.
• Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects.
• Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.
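As referenced in the first responsibility above, a minimal FastAPI sketch of the kind of backend endpoint this role builds. The resource name and fields are hypothetical.

```python
# Minimal sketch: a small REST endpoint with FastAPI and a Pydantic request model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class OrderIn(BaseModel):
    item: str
    quantity: int
    unit_price: float

@app.post("/orders")
def create_order(order: OrderIn) -> dict:
    # A real service would validate further and persist to a database;
    # here we just echo a computed total back to the caller.
    return {"item": order.item, "total": order.quantity * order.unit_price}

# Run locally with: uvicorn main:app --reload   (assuming this file is main.py)
```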
Posted 3 days ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Technical Skills:
• Basic understanding of machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face.
• Knowledge of neural network architectures, particularly in areas like Transformers and basic deep learning models.
• Familiarity with Python programming and essential ML libraries (NumPy, Pandas, Scikit-learn).
• Exposure to NLP (Natural Language Processing) concepts and basic text processing tasks (see the sketch after this listing).
• Some experience with cloud platforms (AWS, GCP, or Azure) for deploying simple AI models.
• Understanding of basic databases and their integration with AI systems (NoSQL or SQL databases).

Soft Skills:
• Strong eagerness to learn and adapt to new technologies in the AI and machine learning field.
• Ability to work under guidance and collaborate within a team environment.
• Good problem-solving abilities and analytical thinking.
• Effective communication skills to discuss technical issues and progress with the team.

Education and Experience:
• Bachelor’s degree in Computer Science, Data Science, Artificial Intelligence, or related fields.
• 6-9 years of experience in machine learning, AI, or related areas (internships or academic projects are a plus).

Preferred Skills (Nice-to-Have):
• Exposure to basic machine learning deployment tools or practices.
• Familiarity with any vector databases (ChromaDB, Pinecone, Weaviate, Milvus, FAISS) or graph databases (Neo4j, TigerGraph).
• Interest in generative AI or graph-based AI solutions.
• Involvement in open-source projects or personal contributions to machine learning communities.
• Understanding of ethical AI principles or data privacy basics.

Role Summary:
As a Junior Machine Learning Developer, you will be part of a dynamic team working on cutting-edge AI and machine learning solutions. This role offers an exciting opportunity for a motivated individual to learn and grow their skills in a fast-paced, collaborative environment. You will assist senior developers in developing, testing, and deploying AI models, while gaining hands-on experience with machine learning frameworks and real-world AI applications.
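As referenced in the NLP bullet above, a minimal sketch of a basic text-processing task using a Hugging Face pipeline. The default model is downloaded on first run, and the example sentences are hypothetical.

```python
# Minimal sketch: off-the-shelf sentiment analysis with a Hugging Face pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

examples = [
    "The onboarding documentation was clear and easy to follow.",
    "The build keeps failing and nobody knows why.",
]
for text, result in zip(examples, classifier(examples)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```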
Posted 3 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
As an expectation, a fitting candidate must have/be:
- Ability to analyze a business problem and cut through the data challenges.
- Ability to churn the raw corpus and develop a data/ML model to provide business analytics (not just EDA), machine-learning-based document processing and information retrieval.
- Quick to develop POCs and transform them into high-scale production-ready code.
- Experience in extracting data from complex unstructured documents using NLP-based technologies.
- Good to have: document analysis using image processing / computer vision and geometric deep learning.

Technology Stack:
- Python as the primary programming language.
- Conceptual understanding of classic ML/DL algorithms like Regression, Support Vectors, Decision Trees, Clustering, Random Forest, CART, Ensembles, Neural Networks, CNN, RNN, LSTM, etc.

Programming:
- Must have: hands-on with data structures using list, tuple, dictionary, collections, iterators, Pandas, NumPy and object-oriented programming.
- Good to have: design patterns / system design, Cython.

ML libraries:
- Must have: Scikit-learn, XGBoost, imblearn, SciPy, Gensim.
- Good to have: matplotlib/plotly, LIME/SHAP.

Data extraction and handling:
- Must have: DASK/Modin, BeautifulSoup/Scrapy, multiprocessing.
- Good to have: data augmentation, PySpark, Accelerate.

NLP/Text analytics:
- Must have: bag of words, text ranking algorithms, Word2vec, language models, entity recognition, CRF/HMM, topic modelling, sequence-to-sequence (see the sketch after this listing).
- Good to have: machine comprehension, translation, Elasticsearch.

Deep learning:
- Must have: TensorFlow/PyTorch, neural nets, sequential models, CNN, LSTM/GRU/RNN, attention, Transformers, residual networks.
- Good to have: knowledge of optimization, distributed training/computing, language models.

Software peripherals:
- Must have: REST services, SQL/NoSQL, UNIX, code versioning.
- Good to have: Docker containers, data versioning.

Research:
- Must have: well versed with the latest trends in the ML and DL area; zeal to research and implement cutting-edge areas of AI to solve complex problems.
- Good to have: contributions to research papers/patents in ML and DL that are published on the internet.

Morningstar is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

I10_MstarIndiaPvtLtd Morningstar India Private Ltd. (Delhi) Legal Entity
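As referenced in the NLP/text-analytics items above, a minimal Gensim Word2Vec sketch on a tiny toy corpus. Real usage would train on a large tokenized corpus; the sentences here are hypothetical and results on such a small sample are only indicative.

```python
# Minimal sketch: train a tiny Word2Vec model and query similar words with Gensim.
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens (normally produced by a tokenizer).
corpus = [
    ["invoice", "payment", "overdue", "reminder"],
    ["invoice", "payment", "received", "receipt"],
    ["contract", "signature", "clause", "renewal"],
    ["contract", "renewal", "notice", "clause"],
]

model = Word2Vec(sentences=corpus, vector_size=32, window=2, min_count=1, epochs=50, seed=1)

print(model.wv["invoice"][:5])                    # first few dimensions of the embedding
print(model.wv.most_similar("contract", topn=2))  # nearest neighbours in vector space
```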
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
- Design and develop AI/ML solutions to be flexible, scalable and extensible.
- Build solutions that incorporate cognitive techniques such as statistics, machine learning, NLP, deep learning and transfer learning.
- Hands-on with Python libraries like pandas, numpy, scikit-learn, scipy, nltk, tensorflow, keras, pytorch, etc.
- Excellent at feature engineering, exploratory data analysis and data visualization.
- Keen to work on cloud-based ML solutions using AWS, GCP, Azure, IBM Watson, etc.
- Innovative; studies research papers and implements them.
- Curious to learn the latest algorithms and concepts and use them professionally.
- Develop areas of continuous and automated deployment.
- Introduce and follow good development practices, innovative frameworks and technology solutions that help the business move faster.
- Be a role model to the team; collaborate on good object-oriented designs and domain modeling.

Requirements:
- 3-5 years of experience in the data science / machine learning field.
- An advanced degree in engineering, computer science, statistics or a related field is preferred.
- Expertise with data science and Python, machine learning, NLP and deep learning is essential.
- Experience with back-end XML and relational databases (e.g. MS SQL, Postgres).
- Experience developing and deploying solutions using services in the Amazon AWS ecosystem (serverless Lambda, EC2, RDS).
- Experience publishing research papers in renowned forums.

Morningstar is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

I10_MstarIndiaPvtLtd Morningstar India Private Ltd. (Delhi) Legal Entity
Posted 3 days ago
5.0 years
0 Lacs
Salem, Tamil Nadu, India
On-site
Description / Position Overview
This is a key position for our client to help create data-driven technology solutions that will position us as the industry leader in healthcare, financial, and clinical administration. This hands-on Lead Data Scientist role will focus on building and implementing machine learning models and predictive analytics solutions that will drive the new wave of AI-powered innovation in healthcare. You will be the lead data science technologist responsible for developing and implementing a multitude of ML/AI products from concept to production, helping us gain a competitive advantage in the market. Alongside our Director of Data Science, you will work at the intersection of healthcare, finance, and cutting-edge data science to solve some of the industry's most complex challenges. This is a greenfield opportunity within VHT’s Product Transformation division, where you'll build groundbreaking machine learning capabilities from the ground up. You'll have the chance to shape the future of VHT’s data science & analytics foundation while working with cutting-edge tools and methodologies in a collaborative, innovation-driven environment.

Key Responsibilities
As the Lead Data Scientist, your role will require you to work closely with subject matter experts in clinical and financial administration across practices, health systems, hospitals, and payors. Your machine learning projects will span the entire healthcare revenue cycle - from clinical encounters through financial transaction completion, extending into back-office operations and payer interactions. You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of:
• Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood
• Claim Denials Prediction - identifying high-risk claims before submission (see the sketch after this listing)
• Payment Amount Prediction - forecasting expected reimbursement amounts
• Cash Flow Forecasting - predicting revenue timing and patterns
• Patient-Related Models - enhancing patient financial experience and outcomes
• Claim Processing Time Prediction - optimizing workflow and resource allocation

Additionally, we will work on emerging areas and integration opportunities, for example denial prediction + appeal success probability, or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care.

VHT Technical Environment
• Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
• Development Tools: Jupyter Notebooks, Git, Docker
• Programming: Python, SQL, R (optional)
• ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
• Data Processing: Spark, Pandas, NumPy
• Visualization: Matplotlib, Seaborn, Plotly, Tableau

Required Qualifications
• Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field
• 5+ years of hands-on data science experience with a proven track record of deploying ML models to production
• Expert-level proficiency in SQL and Python, with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.)
• Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable) • Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques • Experience with model deployment, monitoring, and MLOps practices • Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders Preferred Qualifications • US Healthcare industry experience , particularly in Health Insurance and/or Medical Revenue Cycle Management • Experience with healthcare data standards (HL7, FHIR, X12 EDI) • Knowledge of healthcare regulations (HIPAA, compliance requirements) • Experience with deep learning frameworks (TensorFlow, PyTorch) • Familiarity with real-time streaming data processing • Previous leadership or mentoring experience
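As an illustration of the modeling work described above, the following is a minimal sketch of a claim-denial prediction model. It is purely hypothetical: the feature names and synthetic data are invented for demonstration, and a production model would be trained on real claim, payer, and coding data within the AWS/SageMaker environment listed in the technical stack.

```python
# Hypothetical claim-denial prediction sketch on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
claims = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 800.0, n),     # billed amount in dollars (synthetic)
    "days_to_submit": rng.integers(1, 60, n),     # lag between service and submission
    "prior_denials": rng.poisson(0.5, n),         # recent denial count for this provider/payer
})
# Synthetic label: higher amounts, slower submission, and prior denials raise denial risk.
risk = (0.001 * claims["claim_amount"]
        + 0.02 * claims["days_to_submit"]
        + 0.5 * claims["prior_denials"])
claims["denied"] = (risk + rng.normal(0, 0.5, n) > risk.median()).astype(int)

X = claims[["claim_amount", "days_to_submit", "prior_denials"]]
y = claims["denied"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]   # denial probability per claim, usable for pre-submission review
print("AUC:", round(roc_auc_score(y_test, probs), 3))
```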
Posted 3 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Job Description The development engineer will be part of a team working on the development of the Illustrator product in our creative suite of products. They will be responsible for developing new features and maintaining existing ones, covering all phases of development from early specs and definition to release. They are expected to be hands-on problem solvers and well conversant in analyzing, architecting, and implementing high-quality software.
Requirements B.Tech. / M.Tech. in Computer Science from a premier institute. Should have excellent knowledge of the fundamentals of machine learning and artificial intelligence. Should have hands-on experience across the ML lifecycle, from EDA to model deployment. Should have hands-on experience with data analysis tools like Jupyter and packages like NumPy, Matplotlib, etc. Should be hands-on in writing code that is reliable, maintainable, secure, performance-optimized, multi-platform, and world-ready. Familiarity with state-of-the-art deep learning frameworks, such as TensorFlow, PyTorch, Keras, Caffe, Torch. Strong programming skills in C/C++ and Python. Hands-on experience with data synthesis and processing for the purpose of training a model. Relevant work experience in the fields of computer vision, graphics, etc. Experience: 2-4 Years in ML
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
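As a small illustration of the EDA-to-deployment lifecycle mentioned in the requirements, here is a hypothetical exploratory-analysis snippet using Jupyter-friendly tools (pandas, NumPy, Matplotlib). The dataset and feature names are synthetic and only stand in for real product data.

```python
# Illustrative only: a quick EDA pass of the kind candidates are expected to be comfortable with.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stroke_width": rng.normal(2.0, 0.5, 1000),   # hypothetical vector-art feature
    "anchor_points": rng.poisson(12, 1000),       # hypothetical path-complexity feature
})

print(df.describe())   # summary statistics
print(df.corr())       # feature correlations

# Simple distribution plots, typically inspected in a Jupyter notebook.
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
df["stroke_width"].hist(ax=axes[0], bins=30)
axes[0].set_title("stroke_width")
df["anchor_points"].hist(ax=axes[1], bins=30)
axes[1].set_title("anchor_points")
plt.tight_layout()
plt.show()
```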
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Exela Exela Technologies, Inc. (“Exela”) is a location-agnostic, global business process automation ("BPA") leader combining industry-specific and industry-agnostic enterprise software and solutions with decades of experience. Our BPA suite of solutions is deployed in banking, healthcare, insurance, and other industries to support mission-critical environments. Exela is a leader in workflow automation, attended and unattended cognitive automation, digital mail rooms, print communications, and payment processing, with deployments across the globe. Exela partners with customers to improve user experience and quality through operational efficiency. Exela serves over 3,700 customers across more than 50 countries through a secure, cloud-enabled global delivery model. We are 22,000 employees strong across the Americas, Europe, and Asia. Our customer list includes 60% of the Fortune® 100, along with many of the world’s largest retail chains, banks, law firms, healthcare insurance payers and providers, and telecom companies.
Why Exela? A global, public company (Nasdaq: XELA), the people behind Exela are as important as the company itself. Our teams' extensive experience across multiple industry verticals gives us a better sense of our clients' needs. That begins with teams comprised of individuals from diverse backgrounds with different perspectives. Join our global team as we create advancements in business process automation solutions that impact our clients' mission-critical operations across the industries they serve. The diversity of our workforce and their inspiring ideas resonate throughout all that we do – don't just read about digital innovation, be part of the revolution!
Designation: Data Science Engineer Experience: 5+ years. Desired Skills and Experience: We are seeking a skilled and innovative Data Science Engineer to join our dynamic team. You will be responsible for designing, developing, and deploying scalable data science solutions to address real-world problems. The ideal candidate is experienced in building end-to-end machine learning pipelines, integrating data science solutions into production systems, and working collaboratively with cross-functional teams to derive business value from data.
Key Responsibilities: Data Engineering: Develop and maintain ETL pipelines to pre-process and clean data from various sources. Design and optimize data storage solutions for analytics and machine learning use cases. Collaborate with data engineers to ensure seamless data flow. Machine Learning & Analytics: Develop, train, and deploy machine learning models to solve business challenges. Conduct exploratory data analysis (EDA) to derive actionable insights. Monitor and retrain models to ensure performance and relevance over time. Production Deployment: Design scalable, reliable, and efficient systems to deploy machine learning models into production environments. Implement APIs or integrate models into existing business workflows (see the sketch after the requirements below). Collaboration & Communication: Work closely with product managers, data analysts, and business stakeholders to align technical solutions with business objectives. Present findings, insights, and solutions to both technical and non-technical audiences.
Continuous Improvement: Stay up-to-date with advancements in data science, machine learning, and big data technologies. Conduct peer code reviews and ensure adherence to best practices.
Required: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field. 5+ years of experience in data science, machine learning, or related roles. Experience in a healthcare domain project is a must. Proficiency in programming languages such as Python, R, or Java. Hands-on experience with data manipulation libraries (e.g., Pandas, NumPy) and machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch). Expertise in SQL and working with relational databases. Experience with cloud platforms (AWS, Azure, GCP) and containerization/orchestration tools (e.g., Docker, Kubernetes). Familiarity with CI/CD pipelines and version control systems such as Git. Understanding of data governance, security, and compliance requirements. Experience with big data technologies (e.g., Spark, Hadoop). Knowledge of deep learning techniques and natural language processing (NLP). Strong understanding of MLOps and model lifecycle management. Proficiency in data visualization tools (e.g., Tableau, Power BI, Matplotlib).
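To illustrate the Production Deployment responsibility above (implementing APIs that expose models to business workflows), here is a minimal sketch of serving a trained model over HTTP. Flask is an assumed framework choice, and the model and payload format are hypothetical.

```python
# Minimal, assumed sketch: persist a toy model and serve predictions via a Flask endpoint.
import joblib
import numpy as np
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train and persist a toy model (stand-in for a real training pipeline).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
joblib.dump(RandomForestClassifier(random_state=0).fit(X, y), "model.joblib")

app = Flask(__name__)
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"features": [0.1, -1.2, 0.7, 2.3]} (hypothetical payload shape).
    payload = request.get_json(force=True)
    features = np.asarray(payload["features"], dtype=float).reshape(1, -1)
    proba = model.predict_proba(features)[0, 1]
    return jsonify({"probability": float(proba)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

In practice the endpoint would be containerized (Docker/Kubernetes, as listed in the requirements) and wired into the existing business workflow rather than run standalone.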
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Join us to lead data modernization and maximize analytics utility. As a Data Owner Lead at JPMorgan Chase within the Data Analytics team, you play a crucial role in enabling the business to drive faster innovation through data. You are responsible for managing customer application and account opening data, ensuring its quality and protection, and collaborating with technology and business partners to execute data requirements. Your primary job responsibilities include documenting data requirements for your product and coordinating with technology and business partners to manage the change from legacy to modernized data. You will model data for efficient querying and for use in LLMs, utilizing the business data dictionary and metadata. Moreover, you are expected to develop ideas for data products by understanding analytics needs and creating prototypes for productizing datasets. Additionally, developing proofs of concept for natural language querying and collaborating with stakeholders to roll out capabilities will be part of your tasks. You will also support the team in building the backlog, grooming initiatives, and leading data engineering scrum teams. Managing direct or matrixed staff to execute data-related tasks will also be within your purview. To be successful in this role, you should hold a Bachelor's degree and have at least 5 years of experience in data modeling for relational, NoSQL, and graph databases. Expertise in data technologies such as analytics, business intelligence, machine learning, data warehousing, data management & governance, and AWS cloud solutions is crucial. Experience with natural language processing, machine learning, and deep learning toolkits (such as TensorFlow, PyTorch, NumPy, Scikit-Learn, Pandas) is also required. Furthermore, you should possess the ability to balance short-term goals and long-term vision in complex environments, along with knowledge of open data standards, data taxonomy, vocabularies, and metadata management. A Master's degree is preferred for this position, along with the aforementioned qualifications, capabilities, and skills.
Posted 3 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations.
Your Team’s Impact The News Content team is responsible for ingesting, maintaining, and delivering a variety of unstructured text-based content sets for use across all FactSet applications and workflows. These content sets need to be processed and delivered to our clients in real time. New feed integration involves working with Product Development to understand the requirements of the feeds as well as working with the vendor to understand the technical specification for ingesting the feeds. Work on existing feeds includes bug fixes, feature enhancements, infrastructure improvements, maintaining data quality, and ensuring feeds are operating properly throughout the day. You will work on both internal and external client-facing applications that shape the user's experience and drive FactSet's growth through technological innovations.
What You’ll Do FactSet is seeking a Python Developer with strong AWS experience to join our engineering team responsible for making our product more scalable and reliable. Deliver high-quality, reusable, and maintainable code; perform unit/integration testing of assigned tasks within the estimated timelines. Build robust infrastructure in AWS appropriate to the respective product component. Be proactive in providing technical solutions, with effective communication and collaboration skills. Perform code reviews and ensure best practices are followed. Work in an agile team environment and collaborate with internal teams to ensure smooth product delivery. Take ownership of the end-to-end product and contribute individually. Ensure high stability of the product. Share knowledge continuously within and outside the team.
What We’re Looking For Bachelor's or Master's degree in Computer Science. 2-3 years of total experience. Minimum 2 years of experience in Python development. Minimum of 2 years working experience in Linux/Unix environments. Strong analytical and problem-solving skills. Strong experience and proficiency with Python, Pandas, NumPy. Experience with AWS components. Experience with GitHub-based development processes. Excellent written and verbal communication skills. Organized, self-directed, and resourceful, with the ability to appropriately prioritize work in a fast-paced environment. Good to have skills: Familiarity with Agile software development (Scrum is a plus). Experience in front-end development. Experience in database development. Experience in C++ development. Exposure to design patterns.
What's In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being.
This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Learn More About Our Benefits here. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications. Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
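As a rough illustration of the feed-processing work described in this posting (normalizing unstructured, real-time news items with Python and pandas), here is a hypothetical sketch; the record layout and field names are invented and do not reflect any actual FactSet or vendor specification.

```python
# Hypothetical sketch: normalize a small batch of vendor news items into a tidy frame.
import pandas as pd

raw_items = [
    {"headline": "Company A beats estimates ", "timestamp": "2024-05-01T13:05:00Z", "tickers": ["A"]},
    {"headline": "Company B announces buyback", "timestamp": "2024-05-01T13:06:30Z", "tickers": ["B", "B.PR"]},
    {"headline": "", "timestamp": "2024-05-01T13:07:10Z", "tickers": []},   # malformed item
]

df = pd.DataFrame(raw_items)
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)       # parse ISO timestamps
df["headline"] = df["headline"].str.strip()                       # trim stray whitespace
df = df[df["headline"].ne("")]                                    # drop empty headlines before delivery
df = df.explode("tickers").rename(columns={"tickers": "ticker"})  # one row per (story, ticker)

print(df)
```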
Posted 3 days ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Experience Required: 4-7 years Education Qualification: BE/B.Tech, MCA, MSc (Statistics), MBA from a recognized university
Job Description We are looking for a Data Scientist to join our Data Science team. Data science drives all the products we develop. Our products are designed for small to mid-size financial institutions to help them create strategies based on data. The team is responsible for working on predictive model use cases and working closely with technical and functional stakeholders in an agile environment.
Role & Responsibilities 3-6 years of relevant work experience in the Data Science/Analytics domain. Work with the data science team and data engineers. Take ownership of end-to-end data science projects, including problem formulation, data exploration, feature engineering, model development, validation, and deployment, ensuring high-quality deliverables that meet project objectives. Responsible for building analytic systems and predictive models, as well as experimenting with new models and techniques. Collaborate with data architects and software engineers to enable deployment of sciences and technologies that will scale across the company’s ecosystem. Responsible for the conception, planning, and prioritizing of data projects. Provide support to less experienced analysts, bringing high-level expertise in an open-source language (e.g., R, Python). Adhere to stringent quality assurance and documentation standards using version control and code repositories (e.g., Git, GitHub, Markdown). Utilize data visualization tools (Power BI) to deliver insights to stakeholders.
Competencies and Technical Skills Degree holder in computer science or a related discipline. Proficiency in SQL and database querying for data extraction and manipulation. Proficiency in programming languages such as Python/R, and experience with data manipulation and analysis libraries (e.g., NumPy, pandas, scikit-learn). Familiarity with data visualization tools (e.g., Tableau, Power BI) and proficiency in presenting complex data visually. Solid understanding of experimental design, A/B testing, and statistical hypothesis testing. Proven hands-on experience with machine learning algorithms and parameter tuning, including ensemble methods (Random Forest, XGBoost), Logistic Regression, Support Vector Machines (SVM), and clustering techniques (e.g., K-Means, DBSCAN). Familiarity with Generative AI concepts and AI Agents is a plus. Excellent verbal and written communication skills, with the ability to effectively convey complex concepts to both technical and non-technical stakeholders. Experience with Snowflake is not required but preferred.
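As a concrete example of the machine learning and parameter-tuning competency listed above, here is a small, self-contained sketch using a random forest with cross-validated grid search on synthetic data; no real financial-institution data or internal tooling is assumed.

```python
# Illustrative sketch: cross-validated hyperparameter tuning for an ensemble model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_informative=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

param_grid = {
    "n_estimators": [100, 300],   # number of trees
    "max_depth": [None, 5, 10],   # tree depth controls over/under-fitting
}
search = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid,
    cv=5,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("holdout score:", search.score(X_test, y_test))
```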
Posted 3 days ago
6.0 - 8.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Job Responsibilities Design, develop, and maintain automated Excel reports and dashboards using VBA macros. Build Python-based scripts and applications for data extraction, transformation, analysis, and visualization. Develop reports on brokerage, volume, market share, ADV, etc. Collaborate with business units to gather reporting requirements and translate them into technical solutions. Optimize and automate manual processes to improve data accuracy, reduce turnaround times, and support scalable operations. Maintain documentation for all developed solutions and support end-user training. Ensure data integrity, security, and compliance with internal and external regulations. Manage a team of data analysts.
Education & Experience Postgraduate with overall experience of 6-8 years. Experience in BI and equity MIS; knowledge of P&L.
Technical Skills Advanced Excel, VBA macros; proficient in Python (NumPy, pandas, etc.) and SQL. Familiarity with Power BI.
Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Ability to work independently and manage multiple tasks with tight deadlines.
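To make the reporting work concrete, here is a hypothetical pandas sketch that computes average daily volume (ADV) and broker market share from trade data. The column names and figures are invented for illustration, and the resulting frame could be written to Excel for the dashboards described above.

```python
# Hypothetical sketch of Python reporting logic: ADV and market share per broker.
import pandas as pd

trades = pd.DataFrame({
    "date":   ["2024-06-03", "2024-06-03", "2024-06-04", "2024-06-04", "2024-06-04"],
    "broker": ["BRK1", "BRK2", "BRK1", "BRK2", "BRK1"],
    "volume": [120_000, 80_000, 150_000, 50_000, 30_000],
})
trades["date"] = pd.to_datetime(trades["date"])

# Average daily volume per broker: total volume per broker-day, then mean across days.
daily = trades.groupby(["broker", "date"], as_index=False)["volume"].sum()
adv = daily.groupby("broker")["volume"].mean().rename("adv")

# Market share: each broker's total volume as a share of overall volume.
totals = trades.groupby("broker")["volume"].sum()
market_share = (totals / totals.sum()).rename("market_share")

report = pd.concat([adv, market_share], axis=1)
print(report)
# The same frame can feed the automated Excel reports mentioned above, e.g.:
# report.to_excel("brokerage_report.xlsx")
```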
Posted 3 days ago