
27 H2O.ai Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Job Role: Data Scientist / AI Solution Engineer – India contractor
Band Level: NA
Reports to: Team Leader/Manager
Preferred Location: Gurugram, Haryana, India
Work Timings: 11:30 am – 08:00 pm IST

Responsibilities
- Implement proof-of-concept and pilot machine learning solutions using the AWS ML toolkit and SaaS platforms
- Configure and optimize pre-built ML models for specific business requirements
- Set up automated data pipelines leveraging AWS services and third-party tools
- Create dashboards and visualizations to communicate insights to stakeholders
- Document technical processes and transfer knowledge for future maintenance

Requirements
- Bachelor's degree in Computer Science, Data Science, or a related field
- 1-3 years of professional experience implementing machine learning solutions; recent graduates with significant AI work from internships or projects will also be considered
- Demonstrated experience with AWS machine learning services (SageMaker and related AWS ML services) and an understanding of the underpinnings of ML models and their evaluation
- Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms)
- Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda)
- Experience with Python and common data manipulation libraries
- Strong problem-solving skills and the ability to work independently

Preferred Qualifications
- Previous contract or work experience in similar roles
- Familiarity with API integration between various platforms
- Experience with BI tools (Power BI, QuickSight)
- Knowledge of cost optimization techniques for AWS ML services
- Prior experience in our industry (please see company overview)
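The "underpinnings of ML models and evaluations" the listing asks for can be as basic as computing classification metrics by hand. A minimal stdlib-only sketch (illustrative only; the data and function names are hypothetical, not part of the posting):

```python
# Illustrative sketch: precision and recall computed from scratch, the kind
# of model-evaluation fundamental this role expects. Data is hypothetical.
def confusion_counts(y_true, y_pred):
    # Returns (true positives, false positives, false negatives, true negatives).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

In practice SageMaker or scikit-learn provide these metrics, but being able to derive them manually is what "understanding the underpinnings" usually means in interviews for roles like this.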

Posted 1 week ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Hyderābād

On-site

Job Role: Data Scientist / AI Solution Engineer – India contractor
Band Level: NA
Reports to: Team Leader/Manager
Preferred Location: Gurugram, Haryana, India
Work Timings: 11:30 am – 08:00 pm IST

Responsibilities
- Implement proof-of-concept and pilot machine learning solutions using the AWS ML toolkit and SaaS platforms
- Configure and optimize pre-built ML models for specific business requirements
- Set up automated data pipelines leveraging AWS services and third-party tools
- Create dashboards and visualizations to communicate insights to stakeholders
- Document technical processes and transfer knowledge for future maintenance

Requirements
- Bachelor's degree in Computer Science, Data Science, or a related field
- 1-3 years of professional experience implementing machine learning solutions; recent graduates with significant AI work from internships or projects will also be considered
- Demonstrated experience with AWS machine learning services (SageMaker and related AWS ML services) and an understanding of the underpinnings of ML models and their evaluation
- Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms)
- Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda)
- Experience with Python and common data manipulation libraries
- Strong problem-solving skills and the ability to work independently

Preferred Qualifications
- Previous contract or work experience in similar roles
- Familiarity with API integration between various platforms
- Experience with BI tools (Power BI, QuickSight)
- Knowledge of cost optimization techniques for AWS ML services
- Prior experience in our industry (please see company overview)

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Job Location: Gurugram, Haryana, India
Work Timings: 11:30 am – 08:00 pm IST
Experience: 3-10 years

Requirements
- Bachelor's degree in Computer Science, Data Science, or a related field
- Experience implementing machine learning solutions; recent graduates with significant AI work from internships or projects will also be considered
- Demonstrated experience with AWS machine learning services (SageMaker and related AWS ML services) and an understanding of the underpinnings of ML models and their evaluation
- Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms)
- Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda)
- Experience with Python and common data manipulation libraries
- Strong problem-solving skills and the ability to work independently

Preferred Qualifications
- Previous contract or work experience in similar roles
- Familiarity with API integration between various platforms
- Experience with BI tools (Power BI, QuickSight)
- Knowledge of cost optimization techniques for AWS ML services
- Prior experience in our industry (please see company overview)

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Founded in 2012, H2O.ai is on a mission to democratize AI. As the world's leading agentic AI company, H2O.ai converges Generative and Predictive AI to help enterprises and public sector agencies develop purpose-built GenAI applications on their private data. Its open-source technology is trusted by over 20,000 organizations worldwide, including more than half of the Fortune 500, and H2O.ai powers AI transformation for companies like AT&T, Commonwealth Bank of Australia, Singtel, Chipotle, Workday, Progressive Insurance, and NIH. H2O.ai partners include Dell Technologies, Deloitte, Ernst & Young (EY), NVIDIA, Snowflake, AWS, Google Cloud Platform (GCP), and VAST. H2O.ai's AI for Good program supports nonprofit groups, foundations, and communities in advancing education, healthcare, and environmental conservation. With a vibrant community of 2 million data scientists worldwide, H2O.ai aims to co-create valuable AI applications for all users. H2O.ai has raised $256 million from investors including Commonwealth Bank, NVIDIA, Goldman Sachs, Wells Fargo, Capital One, Nexus Ventures, and New York Life.

About This Opportunity
As a Customer Support Engineer at H2O.ai, you will play a critical role in ensuring our customers have a seamless experience deploying and managing machine learning models in production. You will be the bridge between our engineering teams and customers, providing hands-on technical support, troubleshooting complex issues, and helping users optimize their MLOps workflows. This role is ideal for someone who enjoys problem-solving, has a deep understanding of Kubernetes-based architectures, and thrives in a customer-facing environment. You will work closely with our developers, data scientists, and DevOps teams to ensure smooth deployments and high availability of models. You will also have the opportunity to contribute to our platform's evolution by providing valuable insights from customer interactions.

If you're excited about working on cutting-edge cloud-native solutions, enabling real-time AI applications, and being part of a highly skilled team pushing the boundaries of MLOps, we'd love to hear from you! This position is based in Bangalore, India.

What You Will Do
- Engage with customers to identify and troubleshoot the issues they face and answer technical questions.
- Analyze bug reports, reproduce issues, determine the root cause, and provide fixes.
- Design new features, focusing on the approach and patterns used to solve the problem.
- Write code along with relevant unit test cases.
- Contribute to end-to-end automated tests by ensuring that new changes have relevant test cases; knowledge of Python is beneficial here.
- Work extensively with Kubernetes, which the company uses widely.
- Review the design and code of other developers.
- Follow software development best practices and help improve the engineering efficiency of the team.
- Write infrastructure code using Terraform and Helm charts for any changes to infrastructure.

What We Are Looking For
- Strong knowledge of Linux, Bash, file systems, logging, and Linux file permissions.
- 2+ years of Kubernetes and Docker.
- 2+ years of software engineering.
- At least one of: AWS, Azure, GCP.
- At least one of: GoLang, Python, C++, Java.
- Terraform and Helm.
- Relational databases (e.g., SQL).
- Networking concepts and fundamentals (e.g., SSL, UDP/TCP, firewalls, and reverse proxies).
- Strong communication skills and comfort in a customer-facing role.
- Bachelor's or higher degree in Computer Science/Engineering or a related field.

How to Stand Out From the Crowd
- Experience with gRPC.
- Experience with Scala and Akka.
- Experience with the Druid database.
- Understanding of mainstream machine learning frameworks such as scikit-learn and PyTorch, and data science libraries like NumPy and pandas.

Why H2O.ai?
- Market leader in total rewards
- Remote-friendly culture
- Flexible working environment
- Be part of a world-class team
- Career growth

H2O.ai is committed to creating a diverse and inclusive culture. All qualified applicants will receive consideration for employment without regard to their race, ethnicity, religion, gender, sexual orientation, age, disability status, or any other legally protected basis.

H2O.ai is an innovative AI cloud platform company, leading the mission to democratize AI for everyone. Thousands of organizations from all over the world have used our cutting-edge technology across a variety of industries. We've made it easy for people at all levels to generate breakthrough solutions to complex business problems and advance the discovery of new ideas and revenue streams. We push the boundaries of what is possible with artificial intelligence. H2O.ai employs the world's top Kaggle Grandmasters, the community of best-in-the-world machine learning practitioners and data scientists. A strong AI for Good ethos and responsible AI drive the company's purpose. Please visit www.H2O.ai to learn more.
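As a toy exercise of the "Linux file permissions" fundamentals this listing asks for, here is a hedged stdlib-only sketch converting an octal mode to its symbolic rwx string (the helper name and examples are hypothetical, not from the posting):

```python
# Hypothetical helper: convert an octal permission mode (e.g. 0o754, as used
# with chmod) to the symbolic rwx form shown by `ls -l`.
def mode_to_symbolic(mode: int) -> str:
    bits = ["r", "w", "x"]
    out = []
    for shift in (6, 3, 0):  # owner, group, other triads
        triad = (mode >> shift) & 0b111
        # Mask 4/2/1 selects the read/write/execute bit within the triad.
        out.append("".join(b if triad & (4 >> i) else "-" for i, b in enumerate(bits)))
    return "".join(out)

print(mode_to_symbolic(0o754))  # rwxr-xr--
print(mode_to_symbolic(0o640))  # rw-r-----
```

Interviewers for support-engineering roles commonly probe exactly this mapping between numeric and symbolic modes when debugging file-access issues.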

Posted 3 weeks ago

Apply

14.0 years

5 - 7 Lacs

Hyderābād

On-site

Senior Engineer, Database Engineering
Hyderabad, India | Information Technology | 313668

Job Description

About The Role:
Grade Level (for internal use): 11

The Role: Sr. Engineer, Database Engineering.

The Team: A highly skilled DBA with performance tuning, data modeling, and data migration experience, a strong foundation in managing, optimizing, and securing enterprise-grade relational databases, and advanced capabilities in AI and machine learning. Proven experience in designing scalable database solutions, implementing high availability and disaster recovery strategies, and leveraging AI for predictive analytics, performance tuning, and automation. Bridges the gap between traditional database administration and modern data science by integrating AI-driven tools to proactively monitor systems, detect anomalies, and improve operational efficiency. Experienced in working with cross-functional teams, including developers, data scientists, and cloud architects, to deliver data-driven solutions aligned with business goals. The Performance Tuning Database Engineer will focus on our database infrastructure estate and will diagnose, rewrite, and tune SQL/PGSQL/PLSQL queries and functions within our enterprise solutions division.

The Impact: This is an excellent opportunity to join Enterprise Solutions as we transform and harmonize our infrastructure into a unified place, while also developing your skills and furthering your career as we plan to power the markets of the future.

What's in it for you: This is the place to hone your existing database, performance tuning, query rewriting, and data migration skills while gaining exposure to fresh and divergent technologies (e.g., PostgreSQL, Oracle, MySQL, MS SQL, Snowflake).

Responsibilities: The Database Engineer will manage and optimize SQL Server, Oracle, PostgreSQL, and MySQL databases hosted in AWS environments such as Amazon RDS, Amazon EC2, and Amazon Aurora. The role requires expertise in database performance tuning, security, and cloud-based best practices, and involves close collaboration with architects, developers, system administrators, and DevOps teams to ensure efficient database integration with business applications.

Database Management & Maintenance
- Create logical and physical data models for databases and data warehouses, defining how data is structured, stored, and accessed
- Tune SQL queries and stored procedures for optimal performance
- Implement database partitioning, compression, and indexing strategies for performance optimization
- Maintain performance baselines and document tuning efforts and outcomes
- Identify bottlenecks in queries, indexes, execution plans, and overall database design
- Proactively identify and resolve slow-running queries and blocking issues
- Administer and maintain SQL Server, Oracle, PostgreSQL, and MySQL databases hosted on Amazon RDS and Amazon EC2
- Configure and manage multi-AZ deployments, Read Replicas, and Data Guard (for Oracle) to ensure high availability
- Manage SQL Server Always On configurations for improved failover and availability

Application Support
- Work closely with developers to design and maintain optimized database schemas for PostgreSQL, SQL Server, Oracle, and MySQL
- Support application teams in writing efficient queries, stored procedures, and functions
- Troubleshoot performance bottlenecks, locking issues, and query deadlocks
- Assist developers in leveraging SQL Server Profiler, Oracle AWR, and AWS Performance Insights for tuning queries

Performance Tuning & Optimization
- Utilize AWS tools like CloudWatch, Performance Insights, and CloudTrail for proactive database monitoring
- Conduct execution plan analysis and implement query optimizations to improve response times
- Tune Oracle's Optimizer Statistics, QPM, and SQL Profiles, and SQL Server's Query Store, for enhanced performance

AI Integration
- Use Amazon DevOps Guru or Azure Monitor for RDS
- Develop and deploy machine learning models to identify usage trends, predict outages, anomalies, and error trends, and automate maintenance
- Use DataRobot / H2O.ai integrations with databases to build ML models for forecasting query load, storage growth, etc.

Data Integrity & Security
- Implement security best practices using AWS services like AWS KMS, IAM Roles, and Security Groups for database access control
- Manage encryption for data at rest and in transit using Transparent Data Encryption (TDE) for SQL Server and Oracle
- Ensure compliance with organizational and industry security standards

Backup & Recovery
- Design and implement automated backup strategies using AWS Backup and RDS Automated Snapshots
- Develop and test disaster recovery plans

Monitoring & Alerting
- Configure proactive alerts for performance issues, storage limits, and availability risks using CloudWatch and AWS SNS

Collaboration & Documentation
- Collaborate with developers, cloud architects, and DevOps teams to integrate database changes into CI/CD pipelines
- Document database architectures, configurations, backup strategies, and recovery plans

What We're Looking For:
- Strong expertise in PostgreSQL 12/13/14/15/16, SQL Server 2016/2019/2022, Oracle 12c/19c, and MySQL
- Experience with data modeling tools like ERwin or PowerDesigner
- Hands-on experience with Amazon RDS, EC2-hosted databases, and AWS migration tools like AWS DMS
- Proficiency in writing complex SQL queries, stored procedures, and PL/SQL / PGSQL, plus performance tuning techniques
- Familiarity with cloud automation tools such as AWS CloudFormation, Terraform, Ansible, and Python
- Strong understanding of database security, auditing, and encryption best practices
- Experience with PowerShell, T-SQL, PL/SQL, and PGSQL
- 14+ years of experience as a performance tuning and data modeling DBA managing PostgreSQL, SQL Server, Oracle, and MySQL databases
- Experience with AWS database migration strategies and cloud performance tuning best practices

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group)

Job ID: 313668
Posted On: 2025-07-01
Location: Hyderabad, India
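The query-tuning and indexing work this listing describes (execution-plan analysis, index strategy) can be sketched with Python's built-in sqlite3 module. This is a toy illustration only; the posting's actual engines are SQL Server, Oracle, PostgreSQL, and MySQL, and the table and index names here are hypothetical:

```python
import sqlite3

# Toy index-tuning demo: compare SQLite's query plan before and after
# creating an index on the filtered column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable plan in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table SCAN without an index
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # SEARCH ... USING INDEX after indexing
print(before)
print(after)
```

The same scan-versus-seek distinction is what SQL Server Profiler, Oracle AWR, and AWS Performance Insights surface at enterprise scale; the workflow (read the plan, add or adjust an index, re-read the plan) carries over directly.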

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary

Position Summary: DT-US Product Engineering - Data Scientist Manager

We are seeking an exceptional Data Scientist who combines deep expertise in AI/ML with a strong focus on data quality and advanced analytics. This role requires a proven track record in developing production-grade machine learning solutions, implementing robust data quality frameworks, and leveraging cutting-edge analytical tools to drive business transformation through data-driven insights.

Work you will do
The Data Scientist will be responsible for developing and implementing end-to-end AI/ML solutions while ensuring data quality excellence across all stages of the data lifecycle. This role requires extensive experience in modern data science platforms, AI frameworks, and analytical tools, with a focus on scalable and production-ready implementations.

Project Leadership and Management:
- Lead complex data science initiatives utilizing Databricks, Dataiku, and modern AI/ML frameworks for end-to-end solution development
- Establish and maintain data quality frameworks and metrics across all stages of model development
- Design and implement data validation pipelines and quality control mechanisms for both structured and unstructured data

Strategic Development:
- Develop and deploy advanced machine learning models, including deep learning and generative AI solutions
- Design and implement automated data quality monitoring systems and anomaly detection frameworks
- Create and maintain MLOps pipelines for model deployment, monitoring, and maintenance

Team Mentoring and Development:
- Lead and mentor a team of data scientists and analysts, fostering a culture of technical excellence and continuous learning
- Develop and implement training programs to enhance team capabilities in emerging technologies and methodologies
- Establish performance metrics and career development pathways for team members
- Drive knowledge-sharing initiatives and best practices across the organization
- Provide technical guidance and code reviews to ensure high-quality deliverables

Data Quality and Governance:
- Establish data quality standards and best practices for data collection, preprocessing, and feature engineering
- Implement data validation frameworks and quality checks throughout the ML pipeline
- Design and maintain data documentation systems and metadata management processes
- Lead initiatives for data quality improvement and standardization across projects

Technical Implementation:
- Design, develop, and deploy end-to-end AI/ML solutions using modern frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost for machine learning, BERT and GPT for NLP, and OpenCV for computer vision applications
- Architect and implement robust data processing pipelines leveraging enterprise platforms like Databricks, Apache Spark, and Pandas for data transformation, Dataiku and Apache Airflow for ETL/ELT processes, and DVC for data version control
- Establish and maintain production-grade MLOps practices, including model deployment, monitoring, A/B testing, and continuous integration/deployment pipelines

Technical Expertise Requirements:

Must Have:
- Enterprise AI/ML Platforms: Mastery of Databricks for large-scale processing, with proven ability to architect solutions at scale
- Programming & Analysis: Advanced Python (NumPy, Pandas, scikit-learn), SQL, and PySpark with production-level expertise
- Machine Learning: Deep expertise in TensorFlow or PyTorch, and scikit-learn, with proven implementation experience
- Big Data Technologies: Advanced knowledge of Apache Spark, Databricks, and distributed computing architectures
- Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and its ML services (SageMaker/Azure ML/Vertex AI)
- Data Processing & Analytics: Extensive experience with enterprise-grade data processing tools and ETL pipelines
- MLOps & Infrastructure: Proven experience deploying, monitoring, and maintaining production ML systems
- Data Quality: Experience implementing comprehensive data quality frameworks and validation systems
- Version Control & Collaboration: Strong proficiency with Git, JIRA, and collaborative development practices
- Database Systems: Expert-level knowledge of both SQL and NoSQL databases for large-scale data management
- Visualization Tools: Tableau, Power BI, Plotly, Seaborn
- Large Language Models: Experience with GPT, BERT, LLaMA, and fine-tuning methodologies

Good to Have:
- Additional Programming: R, Julia
- Additional Big Data: Hadoop, Hive, Apache Kafka
- Multi-Cloud: Experience across AWS, Azure, and GCP platforms
- Advanced Analytics: Dataiku, H2O.ai
- Additional MLOps: MLflow, Kubeflow, DVC (Data Version Control)
- Data Quality & Validation: Great Expectations, Deequ, Apache Griffin
- Business Intelligence: SAP HANA, SAP BusinessObjects, SAP BW
- Specialized Databases: Cassandra, MongoDB, Neo4j
- Container Orchestration: Kubernetes, Docker
- Additional Collaboration Tools: Confluence, Bitbucket

Education: Advanced degree in a quantitative discipline (Statistics, Math, Computer Science, Engineering) or relevant experience.

Qualifications:
- 10-13 years of experience with data mining, statistical modeling tools, and their underlying algorithms
- 5+ years of experience with data analysis software for large-scale analysis of structured and unstructured data
- Proven track record of leading and delivering large-scale machine learning projects, including production model deployment and data quality framework implementation, and experience with very large datasets to create data-driven insights through predictive and prescriptive analytic models
- Extensive knowledge of supervised and unsupervised analytic modeling techniques such as linear and logistic regression, support vector machines, decision trees / random forests, Naive Bayes, neural networks, association rules, text mining, and k-nearest neighbors, among other clustering models
- Extensive experience with deep learning frameworks, automated ML platforms, data processing tools (Databricks Delta Lake, Apache Spark), analytics platforms (Tableau, Power BI), and major cloud providers (AWS, Azure, GCP)
- Experience architecting and implementing enterprise-grade solutions using cloud-native ML services while ensuring cost optimization and performance efficiency
- Strong track record of team leadership, stakeholder management, and driving technical excellence across multiple concurrent projects
- Expert-level proficiency in Python, R, and SQL, with a deep understanding of statistical analysis, hypothesis testing, feature engineering, model evaluation, and validation techniques in production environments
- Demonstrated leadership experience in implementing MLOps practices, including model monitoring, A/B testing frameworks, and maintaining production ML systems at scale
- Working knowledge of supervised and unsupervised learning techniques, such as regression / generalized linear models, decision tree analysis, boosting and bagging, principal components analysis, and clustering methods
- Strong oral and written communication skills, including presentation skills

The Team
Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: security, risk & compliance; technology support; infrastructure; applications; relationship management; strategy; deployment; PMO; financials; and communications.

Product Engineering (PxE)
The Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel, and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development.

Work Location: Hyderabad

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303069
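The "data validation pipelines and quality control mechanisms" this role emphasizes can be illustrated with a minimal rule-based record validator. A hedged stdlib-only sketch; the field names and rules are hypothetical, and real deployments would use frameworks like Great Expectations or Deequ, as the listing notes:

```python
# Minimal sketch of a rule-based data-quality check. Each field maps to a
# predicate; a record "fails" on every field whose predicate is not met.
RULES = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "score": lambda v: isinstance(v, float) and 0.0 <= v <= 1.0,
}

def validate(record):
    """Return the list of field names that fail their quality rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

records = [
    {"age": 34, "email": "a@example.com", "score": 0.9},
    {"age": -5, "email": "not-an-email", "score": 0.5},
]
failures = [validate(r) for r in records]
print(failures)  # [[], ['age', 'email']]
```

Production-grade versions of this idea add severity levels, quarantine queues for failing records, and metrics emitted to monitoring dashboards, which is the "automated data quality monitoring" the posting describes.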

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role – AIML Data Scientist
Location: Kochi

Job Description
Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
Improve model accuracy to deliver greater business impact
Estimate the business impact of deploying a model
Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge
Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development – Python / R / SQL / cloud data pipelines
Design, develop and deploy Deep Learning models using TensorFlow / PyTorch
Experience in using Deep Learning models with text, speech, image and video data
Design and develop NLP models for Text Classification, Custom Entity Recognition, Relationship Extraction, Text Summarization, Topic Modeling, Reasoning over Knowledge Graphs and Semantic Search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
Knowledge of state-of-the-art Deep Learning algorithms
Optimize and tune Deep Learning models for the best possible accuracy
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. using Power BI / Tableau
Work with application teams in deploying models on cloud as a service or on-prem
Deployment of models in a Test/Control framework for tracking
Build CI/CD pipelines for ML model deployment
Integrate AI & ML models with other applications using REST APIs and other connector technologies
Constantly upskill and keep up to date with the latest techniques and best practices.
Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology/Subject Matter Expertise
Sufficient expertise in machine learning and the mathematical and statistical sciences
Use of versioning and collaboration tools like Git/GitHub
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Develop prototype-level ideas into a solution that can scale to industrial-grade strength
Ability to quantify and estimate the impact of ML models

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs of the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development and application of Reinforcement Learning
Knowledge of Optimization/Genetic Algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy
Experience working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker and Google Cloud is a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: We are seeking a highly skilled and experienced Data Scientist with a deep understanding of AI-powered data analytics tools. The ideal candidate will be passionate about turning data into actionable insights using cutting-edge AI platforms, automation techniques, and advanced statistical methods.

Key Responsibilities:
Develop and deploy scalable AI-powered data analytics solutions for business intelligence, forecasting, and optimization.
Leverage AI tools to automate data cleansing, feature engineering, model building, and visualization.
Design and conduct advanced statistical analyses and machine learning models (supervised, unsupervised, NLP, etc.).
Collaborate cross-functionally with engineering and business teams to drive data-first decision-making.

Must-Have Skills & Qualifications:
Minimum 4 years of professional experience in data science, analytics, or a related field.
Proficiency in Python and/or R with strong hands-on experience in ML libraries (scikit-learn, XGBoost, TensorFlow, etc.).
Expert knowledge of SQL and working with relational databases.
Proven experience with data wrangling, data pipelines, and ETL processes.

Deep understanding of AI tools for data analytics (experience with several of the following is required):
Data Preparation & Automation: Alteryx, Trifacta, KNIME
AI/ML Platforms: DataRobot, H2O.ai, Amazon SageMaker, Azure ML Studio, Google Vertex AI
Visualization & BI: Tableau, Power BI, Looker (with AI/ML integrations)
AutoML & Predictive Modeling: Google AutoML, IBM Watson Studio, BigML
NLP & Text Analytics: OpenAI (ChatGPT, Codex APIs), Hugging Face Transformers, MonkeyLearn
Workflow Orchestration: Apache Airflow, Prefect

Preferred Qualifications:
Degree in Computer Science, Data Science, Statistics, or a related field.
Experience in cloud-based environments (AWS, GCP, Azure) for ML workloads.
To apply, please send your resume to sooraj@superpe.in or shreya@superpe.in. SuperPe is an equal opportunity employer and welcomes candidates of all backgrounds to apply. We look forward to hearing from you!

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Associate Data Scientist
Location: Bangalore
Business & Team: Home Buying Decision Science

Impact & contribution: The Senior Associate Data Scientist will use technical knowledge and understanding of the business domain to deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems.

Roles & Responsibilities:
Analyse complex data sets to extract insights and identify trends.
Develop predictive models and algorithms to solve business problems.
Work on deployment of models in production.
Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
Clean, preprocess, and manipulate data for analysis through programming.
Communicate findings and recommendations to stakeholders through reports and presentations.
Stay updated with industry trends and best practices in data science.
Contribute to the development and improvement of data infrastructure and processes.
Design experiments and statistical analyses to validate hypotheses and improve models.
Continuously learn and enhance skills in data science techniques and tools.
Strongly support the adoption of data science across the organization.
Identify problems in the products, services and operations of the bank and solve them with innovative, research-driven solutions.
Essential Skills:
Strong hands-on programming experience in Python (mandatory), R, SQL, Hive and Spark.
More than 3 years of relevant experience.
Ability to write well-designed, modular and optimized code.
Knowledge of H2O.ai, GitHub, Big Data and ML Engineering.
Knowledge of commonly used data structures and algorithms.
Good to have: Knowledge of Time Series, NLP, Deep Learning and Generative AI.
Good to have: Knowledge and hands-on experience in developing solutions with Large Language Models.
Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
Strong problem-solving and critical-thinking skills.
Curiosity, fast learning and a team-player attitude are a must.
Ability to communicate clearly and effectively.
Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents and paper publications.
Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Preferred Skills:
Good to have: Knowledge of and hands-on experience in data engineering or model deployment.
Experience in Data Science in Credit Risk, Pricing Modelling and Monitoring, Sales and Marketing, Campaign Analytics, Ecommerce Retail or banking products for retail or business banking is preferred.
Solid foundation in Statistics and core ML algorithms at a mathematical (under-the-hood) level.

Education Qualifications: Bachelor’s degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users.
We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 25/06/2025

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru

On-site

At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Associate Data Scientist
Location: Bangalore
Business & Team: Home Buying Decision Science

Impact & contribution: The Senior Associate Data Scientist will use technical knowledge and understanding of the business domain to deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems.

Roles & Responsibilities:
Analyse complex data sets to extract insights and identify trends.
Develop predictive models and algorithms to solve business problems.
Work on deployment of models in production.
Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
Clean, preprocess, and manipulate data for analysis through programming.
Communicate findings and recommendations to stakeholders through reports and presentations.
Stay updated with industry trends and best practices in data science.
Contribute to the development and improvement of data infrastructure and processes.
Design experiments and statistical analyses to validate hypotheses and improve models.
Continuously learn and enhance skills in data science techniques and tools.
Strongly support the adoption of data science across the organization.
Identify problems in the products, services and operations of the bank and solve them with innovative, research-driven solutions.
Essential Skills:
Strong hands-on programming experience in Python (mandatory), R, SQL, Hive and Spark.
More than 3 years of relevant experience.
Ability to write well-designed, modular and optimized code.
Knowledge of H2O.ai, GitHub, Big Data and ML Engineering.
Knowledge of commonly used data structures and algorithms.
Good to have: Knowledge of Time Series, NLP, Deep Learning and Generative AI.
Good to have: Knowledge and hands-on experience in developing solutions with Large Language Models.
Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
Strong problem-solving and critical-thinking skills.
Curiosity, fast learning and a team-player attitude are a must.
Ability to communicate clearly and effectively.
Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents and paper publications.
Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Preferred Skills:
Good to have: Knowledge of and hands-on experience in data engineering or model deployment.
Experience in Data Science in Credit Risk, Pricing Modelling and Monitoring, Sales and Marketing, Campaign Analytics, Ecommerce Retail or banking products for retail or business banking is preferred.
Solid foundation in Statistics and core ML algorithms at a mathematical (under-the-hood) level.

Education Qualifications: Bachelor’s degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users.
We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 25/06/2025

Posted 1 month ago

Apply

5.0 years

2 - 2 Lacs

Bengaluru

On-site

Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: RBS Decision Science

Impact & contribution: The Data Scientist will use technical knowledge and understanding of the business domain to own and deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems.

Roles & Responsibilities:
Lead data-driven initiatives, from problem formulation to model deployment, leveraging advanced statistical techniques and machine learning algorithms.
Drive the development and implementation of scalable data solutions, ensuring accuracy and reliability of predictive models.
Collaborate with business stakeholders to define project goals, prioritize tasks, and deliver actionable insights.
Design and execute experiments to evaluate model performance and optimize algorithms for maximum efficiency.
Develop and deploy production-grade machine learning models on cloud-based and on-prem platforms.
Lead cross-functional teams in the design and execution of data science projects, ensuring alignment with business objectives.
Stay abreast of emerging technologies and industry trends, continuously enhancing expertise in data science methodologies and tools.
Drive innovation by exploring new approaches and techniques for solving complex business problems through data analysis and modelling.
Mentor junior team members, providing guidance on best practices and technical skills development.
Strongly support the adoption of data science across the organization.
Identify problems in the products, services and operations of the bank and solve them with innovative, research-driven solutions.

Essential Skills:
Strong hands-on programming experience in Python (mandatory), R, SQL, Hive and Spark.
5+ years of experience in the above skills.
Ability to write well-designed, modular and optimized code.
Knowledge of H2O.ai, GitHub, Big Data and ML Engineering.
Knowledge of Snowflake, AWS, Azure, etc.
Knowledge of commonly used data structures and algorithms.
Solid foundation in Statistics and core ML algorithms at a mathematical (under-the-hood) level.
Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
Experience in Data Science in Pricing, Credit Risk, Marketing, Campaign Analytics, Ecommerce Retail or banking products for retail or business banking is preferred.
Good to have: Knowledge of Time Series, NLP, Deep Learning and Generative AI.
Good to have: Knowledge and hands-on experience in developing solutions with Large Language Models.
Good to have: Familiarity with agentic coding tools such as Roo Code and Cline.
Built and deployed large-scale software applications.
Understanding of the principles of software engineering and cloud computing.
Strong problem-solving and critical-thinking skills.
Curiosity, fast learning and a team-player attitude are a must.
Ability to communicate clearly and effectively.
Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents and paper publications.
Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Education Qualifications: Bachelor’s degree in Engineering Or Master’s degree Or Ph.D.
in Data Science/ Machine Learning/ Computer Science/ Computational Linguistics/ Statistics/ Mathematics/Engineering. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 02/07/2025

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: RBS Decision Science

Impact & contribution: The Data Scientist will use technical knowledge and understanding of the business domain to own and deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems.

Roles & Responsibilities:
Lead data-driven initiatives, from problem formulation to model deployment, leveraging advanced statistical techniques and machine learning algorithms.
Drive the development and implementation of scalable data solutions, ensuring accuracy and reliability of predictive models.
Collaborate with business stakeholders to define project goals, prioritize tasks, and deliver actionable insights.
Design and execute experiments to evaluate model performance and optimize algorithms for maximum efficiency.
Develop and deploy production-grade machine learning models on cloud-based and on-prem platforms.
Lead cross-functional teams in the design and execution of data science projects, ensuring alignment with business objectives.
Stay abreast of emerging technologies and industry trends, continuously enhancing expertise in data science methodologies and tools.
Drive innovation by exploring new approaches and techniques for solving complex business problems through data analysis and modelling.
Mentor junior team members, providing guidance on best practices and technical skills development.
Strongly support the adoption of data science across the organization.
Identify problems in the products, services and operations of the bank and solve them with innovative, research-driven solutions.

Essential Skills:
Strong hands-on programming experience in Python (mandatory), R, SQL, Hive and Spark.
5+ years of experience in the above skills.
Ability to write well-designed, modular and optimized code.
Knowledge of H2O.ai, GitHub, Big Data and ML Engineering.
Knowledge of Snowflake, AWS, Azure, etc.
Knowledge of commonly used data structures and algorithms.
Solid foundation in Statistics and core ML algorithms at a mathematical (under-the-hood) level.
Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
Experience in Data Science in Pricing, Credit Risk, Marketing, Campaign Analytics, Ecommerce Retail or banking products for retail or business banking is preferred.
Good to have: Knowledge of Time Series, NLP, Deep Learning and Generative AI.
Good to have: Knowledge and hands-on experience in developing solutions with Large Language Models.
Good to have: Familiarity with agentic coding tools such as Roo Code and Cline.
Built and deployed large-scale software applications.
Understanding of the principles of software engineering and cloud computing.
Strong problem-solving and critical-thinking skills.
Curiosity, fast learning and a team-player attitude are a must.
Ability to communicate clearly and effectively.
Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents and paper publications.
Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Education Qualifications: Bachelor’s degree in Engineering Or Master’s degree Or Ph.D.
in Data Science/Machine Learning/Computer Science/Computational Linguistics/Statistics/Mathematics/Engineering. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 02/07/2025

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role – Deep Learning Engineer & Data Scientist
Location: PAN India

Job Description
Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
Improve model accuracy to deliver greater business impact
Estimate the business impact of deploying a model
Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge
Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development – Python / R / SQL / cloud data pipelines
Design, develop and deploy Deep Learning models using TensorFlow / PyTorch
Experience in using Deep Learning models with text, speech, image and video data
Design and develop NLP models for Text Classification, Custom Entity Recognition, Relationship Extraction, Text Summarization, Topic Modeling, Reasoning over Knowledge Graphs and Semantic Search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
Knowledge of state-of-the-art Deep Learning algorithms
Optimize and tune Deep Learning models for the best possible accuracy
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. using Power BI / Tableau
Work with application teams in deploying models on cloud as a service or on-prem
Deployment of models in a Test/Control framework for tracking
Build CI/CD pipelines for ML model deployment
Integrate AI & ML models with other applications using REST APIs and other connector technologies
Constantly upskill and keep up to date with the latest techniques and best practices.
Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology/Subject Matter Expertise
Sufficient expertise in machine learning and the mathematical and statistical sciences
Use of versioning and collaboration tools like Git/GitHub
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Develop prototype-level ideas into a solution that can scale to industrial-grade strength
Ability to quantify and estimate the impact of ML models

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs of the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development and application of Reinforcement Learning
Knowledge of Optimization/Genetic Algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy
Experience working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker and Google Cloud is a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 1 month ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

Role Description

Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview
We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques, such as Retrieval-Augmented Generation (RAG) and LLM-based applications, into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities

Model Development & Deployment
Design, implement, and manage scalable ML pipelines using CI/CD practices.
Operationalize ML and GenAI models, ensuring high availability, observability, and reliability.
Automate data and model validation, versioning, and monitoring processes.

Technical Leadership & Mentorship
Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices.
Define architecture standards and promote engineering excellence across ML-Ops workflows.

Innovation & Generative AI Strategy
Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications.
Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.

Governance & Compliance
Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability.
Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills
5+ years of experience in ML-Ops, Data Engineering, or Machine Learning.
Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
Strong understanding of responsible AI and ML-Ops governance best practices.
Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills
Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops

Posted 1 month ago

Apply

12.0 years

0 Lacs

Mysuru, Karnataka, India

On-site

About iSOCRATES
Since 2015, iSOCRATES has advised on, built, and managed mission-critical Marketing, Advertising and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with proven specialists who save partners money and time while achieving transparent, accountable performance and delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you’re a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth, so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
We are seeking a highly skilled, results-oriented Product Manager - AI & BI to lead the growth and development of iSOCRATES' MADTECH.AI™ platform. As a core member of the product team, you will play an instrumental role in shaping the future of our AI-powered Marketing, Advertising, and Data Decision Intelligence solutions.
Your focus will be on driving innovation in AI and BI capabilities, ensuring that our product meets the evolving needs of our B2B customers and enhances their marketing and data analytics capabilities.

Key Responsibilities
- Product Strategy & Roadmap Development: Lead the creation and execution of the MADTECH.AI™ product roadmap, with a focus on incorporating AI and BI technologies to deliver value for B2B customers. Collaborate with internal stakeholders to define product features, prioritize enhancements, and ensure alignment with iSOCRATES' long-term business objectives.
- AI & BI Product Development: Spearhead the design and development of innovative AI and BI features to enhance the MADTECH.AI™ platform's scalability, functionality, and user experience. Leverage cutting-edge technologies such as machine learning, predictive analytics, natural language processing (NLP), data visualization, reinforcement learning, and other advanced AI techniques to deliver powerful marketing, advertising, and data decision intelligence solutions.
- Cross-Functional Collaboration: Collaborate with cross-functional teams, including engineering, design, marketing, sales, and customer success, to ensure seamless product development and delivery. Facilitate communication between technical and business teams to ensure product features align with customer needs and market trends.
- Customer & Market Insights: Engage with customers and other stakeholders to gather feedback, identify pain points, and stay on top of market trends. Use this data to shape product development and enhance MADTECH.AI™ capabilities, ensuring they are well-positioned in the evolving market landscape.
- Product Lifecycle Management: Oversee the complete product lifecycle from ideation through launch and beyond. Manage ongoing iterations of the product based on customer feedback and performance metrics to ensure that MADTECH.AI™ remains competitive and meets user expectations.
- Data-Driven Decision Making: Use customer analytics, usage patterns, and performance data to inform key product decisions. Define success metrics, monitor product performance, and make adjustments as needed to drive product success.
- AI/BI Thought Leadership: Stay current on the latest trends in AI, BI, MarTech, and AdTech. Act as a thought leader both internally and externally to position iSOCRATES as an innovator in the MADTECH.AI space. Promote best practices and contribute to the company's overall strategy for AI and BI product development.

Qualifications & Skills
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Business, or a related field.
- At least 12 years of experience in product management, with a minimum of 7 years focused on B2B SaaS solutions and strong expertise in AI and BI technologies. Prior experience in marketing, advertising, or data analytics platforms is highly preferred.
- AI & BI Expertise: Deep understanding of Artificial Intelligence, Machine Learning, Natural Language Processing (NLP), Predictive Analytics, Data Visualization, and Business Intelligence tools (e.g., Tableau, Power BI, Qlik), and their application in SaaS products, especially within the context of MarTech, AdTech, or DataTech.
- AI Tools and Technologies: Hands-on experience with AI and BI tools such as:
  - Data Science Libraries: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost, LightGBM, Hugging Face Transformers, CatBoost, H2O.ai
  - BI Platforms: Tableau, Power BI, Qlik, Looker, Domo, Sisense, MicroStrategy
  - Machine Learning Tools: Azure ML, Google AI Platform, AWS SageMaker, Databricks, H2O.ai, Vertex AI
  - Data Analytics Tools: Apache Hadoop, Apache Spark, Apache Flink, SQL-based tools, dbt, Snowflake
  - Data Visualization Tools: D3.js, Plotly, Matplotlib, Seaborn, Chart.js, Superset
  - Cloud-Based AI Services: Google AI, AWS AI/ML services, IBM Watson, Microsoft Azure Cognitive Services, Oracle Cloud AI
  - Emerging Tools: AutoML platforms, MLOps tools, Explainable AI (XAI) tools
- Product Development: Proven experience leading AI/BI-driven product development within SaaS platforms, including managing the full product lifecycle from ideation to launch and post-launch iterations.
- Agile Methodology: Experience working in Agile product development environments, with the ability to prioritize and manage multiple initiatives and product features simultaneously.
- Analytical & Data-Driven: Strong analytical skills with a focus on leveraging data, performance metrics, and customer feedback to inform product decisions. Ability to translate complex data into actionable insights.
- Customer-Centric: Experience working directly with customers to understand their needs, pain points, and feedback. A customer-first mindset with a focus on building products that provide measurable value.
- Excellent Communication Skills: Exceptional communication, presentation, and interpersonal skills, with the ability to engage and influence both technical teams and business stakeholders across different geographies.
- Industry Knowledge: Familiarity with MADTECH.AI platforms and technologies. Understanding of customer journey analytics, predictive analytics, and decision intelligence platforms like MADTECH.AI™ is a plus.
- Cloud & SaaS Architecture: Familiarity with cloud-based solutions and large-scale SaaS architecture; understanding of how AI and BI features integrate with cloud infrastructure is beneficial.
- Experience with AI-powered decision intelligence platforms like MADTECH.AI™ or similar MarTech, AdTech, or DataTech tools.
- In-depth knowledge of cloud technologies, including AWS, Azure, or Google Cloud, and their integration with SaaS platforms.
- Exposure to customer journey analytics, predictive analytics, and other advanced AI/BI tools.
- Willingness to work from Mysore/Bangalore or travel to Mysore as per business requirements.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

JD: Full Stack Developer (Microservices) with IBM Watson, DataRobot, H2O.ai, TensorFlow, or PyTorch
- Design and implement microservices using Java, Node/Angular JS, the Spring Boot framework, and MyBatis.
- Work with IBM Watson, DataRobot, H2O.ai, TensorFlow, or PyTorch, plus HighChart and database integration; work on application performance tuning.
- Coordinate with the data modeling and visualization team as required.
- Work on changes triggered by business requirement updates.
- Work in compliance with the full Software Development Life Cycle (SDLC) and best practices.
- Write and execute test cases to verify design requirements are met.
- Support Agile development activities for approved stories in Jira.
- Work with the Git repository and Git actions for build and deployment.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Role: AIML Data Scientist
Location: Coimbatore
Mode of Interview: In Person

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground; must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus
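The model-validation step called out in this posting needs no BI tool to illustrate. Below is a minimal, framework-free sketch of a binary confusion matrix and the metrics derived from it; the label lists are made up for illustration.

```python
from collections import Counter

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from two parallel 0/1 label lists."""
    pairs = Counter(zip(y_true, y_pred))
    tp, tn = pairs[(1, 1)], pairs[(0, 0)]
    fp, fn = pairs[(0, 1)], pairs[(1, 0)]
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Illustrative labels: 2 true positives, 2 true negatives, 1 FP, 1 FN
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
assert m["accuracy"] == 4 / 6
assert m["precision"] == 2 / 3
assert m["recall"] == 2 / 3
```

In practice these numbers would be tracked per deployment in the test/control framework mentioned in point 8, so that model iterations can be compared on the same slices of data.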

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: AIML Data Scientist
Location: Coimbatore
Mode of Interview: In Person

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground; must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai

On-site

Role: AIML Data Scientist
Job Location: Hyderabad
Mode of Interview: Virtual

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground; must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: AIML Data Scientist
Job Location: Hyderabad
Mode of Interview: Virtual

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground; must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 2 months ago

Apply

6.0 - 10.0 years

11 - 21 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & Responsibilities: Full Stack Developer (Microservices)
- Design and implement microservices using Java, Node/Angular JS, the Spring Boot framework, and MyBatis.
- Coordinate with the data modeling and visualization team as required.
- Work on changes triggered by business requirement updates.
- Work in compliance with the full Software Development Life Cycle (SDLC) and best practices.
- Write and execute test cases to verify design requirements are met.
- Support Agile development activities for approved stories in Jira.
- Work with the Git repository and Git actions for build and deployment.

Preferred Candidate Profile
- Skill Set: Java, Spring Boot, Microservices, Angular; any AI tools: IBM Watson / DataRobot / H2O.ai / TensorFlow / PyTorch
- Location: Chennai, Bangalore, Hyderabad, Pune
- Notice Period: 0 - 90 days
- Experience Level: 6 - 10 years

Posted 2 months ago

Apply

7.0 - 12.0 years

18 - 20 Lacs

Hyderabad

Work from Office

We are hiring a Senior Python with Machine Learning Engineer (Level 3) for a US-based IT company in Hyderabad. Candidates with a minimum of 7 years of experience in Python and machine learning can apply.

Job Title: Senior Python with Machine Learning Engineer - Level 3
Location: Hyderabad
Experience: 7+ Years
CTC: 28 LPA - 30 LPA
Working Shift: Day shift

Job Description:
We are seeking a highly skilled and experienced Python Developer with a strong background in Machine Learning (ML) to join our advanced analytics team. In this Level 3 role, you will be responsible for designing, building, and deploying robust ML pipelines and solutions across real-time, batch, event-driven, and edge computing environments. The ideal candidate will have extensive hands-on experience in developing and deploying ML workflows using AWS SageMaker, building scalable APIs, and integrating ML models into production systems. This role also requires a strong grasp of the complete ML lifecycle and DevOps practices specific to ML projects.
Key Responsibilities:
- Develop and deploy end-to-end ML pipelines for real-time, batch, event-triggered, and edge environments using Python
- Utilize AWS SageMaker to build, train, deploy, and monitor ML models using SageMaker Pipelines, MLflow, and Feature Store
- Build and maintain RESTful APIs for ML model serving using FastAPI, Flask, or Django
- Work with popular ML frameworks and tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Ensure best practices across the ML lifecycle: data preprocessing, model training, validation, deployment, and monitoring
- Implement CI/CD pipelines tailored for ML workflows using tools like Bitbucket, Jenkins, Nexus, and AUTOSYS
- Design and maintain ETL workflows for ML pipelines using PySpark, Kafka, AWS EMR, and serverless architectures
- Collaborate with cross-functional teams to align ML solutions with business objectives and deliver impactful results

Required Skills & Experience:
- 5+ years of hands-on experience with Python for scripting and ML workflow development
- 4+ years of experience with AWS SageMaker for deploying ML models and pipelines
- 3+ years of API development experience using FastAPI, Flask, or Django
- 3+ years of experience with ML tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Strong understanding of the complete ML lifecycle, from model development to production monitoring
- Experience implementing CI/CD for ML using Bitbucket, Jenkins, Nexus, and AUTOSYS
- Proficiency in building ETL processes for ML workflows using PySpark, Kafka, and AWS EMR

Nice to Have:
- Experience with H2O.ai for advanced machine learning capabilities
- Familiarity with containerization using Docker and orchestration using Kubernetes

For further assistance, contact/WhatsApp: 9354909517 or write to hema@gist.org.in
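The "RESTful APIs for ML model serving" responsibility mentioned above is, at its core, a parse-validate-score-serialize contract. The sketch below illustrates that contract framework-free, with hypothetical feature names and a stand-in linear scorer rather than a real SageMaker artifact; in production this logic would sit inside a FastAPI or Flask route handler.

```python
import json
import math

# Stand-in for a trained model artifact (assumption: a simple logistic scorer
# with made-up feature names; a real service would load a serialized model)
WEIGHTS = {"tenure_months": 0.02, "monthly_spend": 0.001}
BIAS = -0.5

def predict(features):
    """Score one feature dict; unknown keys are ignored, missing keys default to 0."""
    z = BIAS + sum(WEIGHTS[k] * float(features.get(k, 0)) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link, output in (0, 1)

def handle_request(body: str) -> str:
    """What an endpoint does: parse JSON, validate, score, serialize the response."""
    try:
        payload = json.loads(body)
        score = predict(payload["features"])
    except (KeyError, TypeError, ValueError, AttributeError) as exc:
        return json.dumps({"error": str(exc)})
    return json.dumps({"churn_probability": round(score, 4)})

resp = json.loads(handle_request('{"features": {"tenure_months": 12, "monthly_spend": 300}}'))
assert 0.0 < resp["churn_probability"] < 1.0
assert "error" in json.loads(handle_request("not json"))
```

The same handler shape carries over whether the transport is FastAPI, Flask, a Lambda behind API Gateway, or a SageMaker inference container: only the framing around `handle_request` changes.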

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description - External
Role: AIML Data Scientist
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
   a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
   b. Improve model accuracy to deliver greater business impact
   c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
   a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
   b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
   c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises:
   a. Deploy models in a test/control framework for tracking
   b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground; must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 2 months ago

Apply

0 years

0 Lacs

India

On-site

Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement, and cloud teams to gain deep insights into cost optimization, compliance, and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier, and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint™ in hybrid environments, from on-premises to SaaS to containers to cloud.

We're transforming the software industry. We're Flexera. With more than 50,000 customers across the world, we're achieving that goal. But we know we can't do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we're consistently recognized by Gartner, Forrester, and IDC as a category leader in the marketplace. Learn more at flexera.com

Job Summary:
We are seeking a skilled and motivated Senior Data Engineer to join our Automation, AI/ML team. In this role, you will work on designing, building, and maintaining data pipelines and infrastructure to support AI/ML initiatives, while contributing to the automation of key processes. This position requires expertise in data engineering, cloud technologies, and database systems, with a strong emphasis on scalability, performance, and innovation.

Key Responsibilities:
- Identify and automate manual processes to improve efficiency and reduce operational overhead.
- Design, develop, and optimize scalable data pipelines to integrate data from multiple sources, including Oracle and SQL Server databases.
- Collaborate with data scientists and AI/ML engineers to ensure efficient access to high-quality data for training and inference models.
Implement automation solutions for data ingestion, processing, and integration using modern tools and frameworks. Monitor, troubleshoot, and enhance data workflows to ensure performance, reliability, and scalability. Apply advanced data transformation techniques, including ETL/ELT processes, to prepare data for AI/ML use cases. Develop solutions to optimize storage and compute costs while ensuring data security and compliance. Required Skills and Qualifications: Experience in identifying, streamlining, and automating repetitive or manual processes. Proven experience as a Data Engineer, working with large-scale database systems (e.g., Oracle, SQL Server) and cloud platforms (AWS, Azure, Google Cloud). Expertise in building and maintaining data pipelines using tools like Apache Airflow, Talend, or Azure Data Factory. Strong programming skills in Python, Scala, or Java for data processing and automation tasks. Experience with data warehousing technologies such as Snowflake, Redshift, or Azure Synapse. Proficiency in SQL for data extraction, transformation, and analysis. Familiarity with tools such as Databricks, MLflow, or H2O.ai for integrating data engineering with AI/ML workflows. Experience with DevOps practices and tools, such as Jenkins, GitLab CI/CD, Docker, and Kubernetes. Knowledge of AI/ML concepts and their integration into data workflows. Strong problem-solving skills and attention to detail. Preferred Qualifications: Knowledge of security best practices, including data encryption and access control. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Exposure to Databricks for data engineering and advanced analytics workflows. Flexera is proud to be an equal opportunity employer. 
Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing careers@flexera.com. Show more Show less
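The Flexera role above centers on ETL/ELT pipelines that pull from relational sources such as Oracle and SQL Server. As a rough sketch of the extract-transform-load pattern only — using an in-memory SQLite database and hypothetical table names (`raw_orders`, `orders_report`) rather than anything from the posting — a minimal pipeline step might look like:

```python
import sqlite3

def run_etl(conn):
    """Toy ETL step: extract raw orders, transform (cents -> decimal), load a reporting table."""
    cur = conn.cursor()
    # Extract: read raw rows from the (hypothetical) source table
    rows = cur.execute("SELECT id, amount_cents FROM raw_orders").fetchall()
    # Transform: convert integer cents into a decimal amount
    cleaned = [(order_id, cents / 100.0) for order_id, cents in rows]
    # Load: write into the reporting table idempotently, so reruns are safe
    cur.execute("CREATE TABLE IF NOT EXISTS orders_report (id INTEGER PRIMARY KEY, amount REAL)")
    cur.executemany("INSERT OR REPLACE INTO orders_report VALUES (?, ?)", cleaned)
    conn.commit()

# Set up a throwaway source table and run the step
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)", [(1, 1999), (2, 250)])
conn.commit()
run_etl(conn)
result = conn.execute("SELECT id, amount FROM orders_report ORDER BY id").fetchall()
```

In a production setting, each such step would typically be one task in an orchestrator like Apache Airflow, with the databases swapped for the real Oracle/SQL Server sources; the idempotent load is what makes scheduled reruns safe.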

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role – AIML Data Scientist
Location: Chennai
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Job Description
- Be a hands-on problem solver with a consultative approach, who can apply Machine Learning & Deep Learning algorithms to solve business challenges
- Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
- Improve model accuracy to deliver greater business impact
- Estimate the business impact of deploying a model
- Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant deep learning solution to the given business challenge
- Work with tools and scripts for pre-processing the data & feature engineering for model development – Python / R / SQL / cloud data pipelines
- Design, develop & deploy deep learning models using TensorFlow / PyTorch
- Experience in using deep learning models with text, speech, image and video data
- Design & develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
- Design and develop image recognition & video analysis models using deep learning algorithms and open-source tools like OpenCV
- Knowledge of state-of-the-art deep learning algorithms
- Optimize and tune deep learning models for the best possible accuracy
- Use visualization tools/modules to explore and analyze outcomes & for model validation, e.g., Power BI / Tableau
- Work with application teams to deploy models on the cloud as a service or on-prem
- Deploy models in a test/control framework for tracking
- Build CI/CD pipelines for ML model deployment
- Integrate AI & ML models with other applications using REST APIs and other connector technologies
- Constantly upskill and stay current with the latest techniques and best practices
- Write white papers and create demonstrable assets to summarize the AIML work and its impact

Technology/Subject Matter Expertise
- Sufficient expertise in machine learning, mathematical and statistical sciences
- Use of versioning & collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Develop prototype-level ideas into a solution that can scale to industrial-grade strength
- Ability to quantify & estimate the impact of ML models

Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain and "sell" their thoughts, processes, ideas and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs for solving the problem
- Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development & application of Reinforcement Learning
- Knowledge of Optimization/Genetic Algorithms
- Operationalizing deep learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Optimize and tune deep learning models for the best possible accuracy
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI & cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, or Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus
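The Chennai role above calls for pre-processing text and feature engineering ahead of model development. As a deliberately simplified sketch (the tokenizer and sample sentences below are illustrative assumptions; real work would use spaCy or scikit-learn as the listing suggests), a bag-of-words featurizer in plain Python looks like:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and keep only alphabetic runs -- a deliberately crude tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

def bag_of_words(docs):
    """Return a sorted vocabulary and one term-count vector per document."""
    vocab = sorted({tok for doc in docs for tok in tokenize(doc)})
    vectors = []
    for doc in docs:
        counts = Counter(tokenize(doc))
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors

# Two toy documents turned into fixed-length feature vectors
vocab, vectors = bag_of_words(["Ship the model", "Tune the model, then ship"])
```

These count vectors are the kind of features a classifier consumes; TF-IDF weighting and subword tokenization are the usual next refinements.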

Posted 2 months ago

Apply