
3040 Clustering Jobs - Page 6

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction

A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities

We are seeking a passionate and skilled Python AI Engineer to design, develop, and maintain a hybrid AI platform across multi-cloud and on-premises environments. You will build an AI platform that enables real-time machine learning and GenAI at scale, along with governance and security frameworks. You will collaborate with data engineers, product managers, and software engineers to bring AI-driven products and features to life.

Job Description

Work with frameworks like TensorFlow/PyTorch, Scikit-learn, or similar. Design and implement AI/ML models and algorithms using Python 3. Develop and maintain scalable, production-grade machine learning pipelines. Conduct data exploration, preprocessing, feature engineering, and model evaluation. Optimize models for performance and scalability in production environments. Collaborate with cross-functional teams to integrate AI components into real-world applications. Stay up to date with the latest research and industry trends in AI and machine learning. Document experiments, code, and processes for reproducibility and transparency.

Preferred Education: Master's Degree

Required Technical And Professional Expertise

Strong programming skills in Python 3. Solid understanding of machine learning fundamentals (classification, regression, clustering, etc.). Experience with ML libraries and frameworks (e.g., Scikit-learn, TensorFlow/PyTorch, XGBoost, etc.). Experience in NLP related to semantic models/search using BERT/transformer models. Experience with the GenAI ecosystem/tools is a plus. Experience in data wrangling using Pandas/Polars, NumPy, SQL, etc. Good grasp of software engineering principles (version control, testing, modular code).

Preferred Technical And Professional Experience

Familiarity with REST APIs and deployment practices (Dockerized containers, Flask/FastAPI, etc.). Understanding of cloud platforms (AWS, GCP, Azure) is a plus. Problem-Solving: Excellent analytical and problem-solving skills, with the ability to think critically and creatively. Communication: Strong interpersonal and communication skills, with the ability to work effectively in a collaborative team environment.
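The fundamentals this posting lists (classification, regression, clustering) can be illustrated with a toy example. Below is a minimal pure-Python sketch of k-means clustering (Lloyd's algorithm) on made-up 2-D points; real work would use scikit-learn's KMeans as the posting suggests, and all data here is hypothetical.

```python
import math

def kmeans(points, centroids, iters=10):
    """Lloyd's algorithm: assign points to the nearest centroid, then re-average."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign p to the centroid with the smallest Euclidean distance
            idx = min(range(len(centroids)),
                      key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its cluster (keep old one if empty)
        centroids = [
            tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5)]
centroids, clusters = kmeans(points, centroids=[(0.0, 0.0), (10.0, 10.0)])
```

With these seeds, the two tight groups separate after a single pass and the centroids converge to their group means.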

Posted 3 days ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Mumbai

Work from Office

Spinebiz Services is looking for an AWS SysOps Professional to join our dynamic team and embark on a rewarding career journey.

Designing and implementing AWS solutions, including infrastructure and application architecture, to meet business requirements. Deploying, configuring, and managing AWS services, including EC2, S3, RDS, and VPC. Automating the deployment and scaling of AWS resources using CloudFormation, scripts, and other tools. Monitoring and optimizing AWS resources to ensure high availability and cost efficiency. Troubleshooting and resolving technical issues related to AWS services. Collaborating with cross-functional teams, including developers, project managers, and security specialists, to ensure that AWS solutions are delivered on time and to the highest quality standards.

Excellent problem-solving and critical thinking skills. Excellent communication and collaboration skills, including the ability to work effectively with cross-functional teams. Ability to manage time and prioritize tasks effectively, and to deliver AWS solutions on time and to the highest quality standards.
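The CloudFormation automation mentioned above could look like the following minimal template sketch. The logical names, parameter, and AMI ID are placeholders for illustration, not values from the posting.

```yaml
# Hypothetical CloudFormation sketch: one EC2 instance plus an S3 bucket.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal web-tier stack for illustration

Parameters:
  InstanceTypeParam:
    Type: String
    Default: t3.micro

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
      ImageId: ami-0123456789abcdef0   # placeholder AMI; use a real regional AMI

  AssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

A real stack would parameterize networking (VPC, subnets, security groups) and be deployed via `aws cloudformation deploy` or a CI pipeline.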

Posted 3 days ago

Apply

6.0 - 11.0 years

9 - 13 Lacs

Bengaluru

Work from Office

We are looking for a Senior Cloud Operations DBA to manage, optimize, and ensure the reliability of cloud-based databases in a 24/7 production environment. The ideal candidate will have strong experience in AWS RDS, PostgreSQL, MySQL, and NoSQL databases, with a focus on performance tuning, high availability, backup strategies, and disaster recovery.

Key Responsibilities

Manage, monitor, and maintain cloud-based databases for high availability, security, and performance. Analyze and optimize database queries, indexing, and configuration for better efficiency. Implement and maintain robust backup and disaster recovery strategies. Automate repetitive database operations using scripting languages (Python, Shell, SQL). Ensure database compliance with ISO 27001, SOC 2, and other security/audit requirements. Troubleshoot and resolve issues, collaborating with CloudOps, DevOps, and Engineering teams. Plan and execute database upgrades, schema migrations, and replication strategies. Set up and manage proactive monitoring using Grafana, Prometheus, CloudWatch, and Splunk.

Requirements

6+ years of experience in database administration, with a strong cloud-based background. Hands-on experience with AWS RDS (PostgreSQL, MySQL, DynamoDB). Proficient in SQL performance tuning, indexing, and debugging. Experience with Infrastructure as Code (IaC) tools like Terraform for database management. Strong scripting skills in Python, Bash, or PowerShell. Willingness to work APAC and EMEA shifts with on-call rotation. Expertise in high availability, clustering, and replication technologies. Understanding of cloud networking, IAM roles, and security best practices. Excellent troubleshooting and problem-solving abilities in cloud environments.

Preferred Skills

Experience with NoSQL databases (MongoDB, DynamoDB, Cassandra). Exposure to Kubernetes, containerization, and serverless architecture. Experience integrating databases with CI/CD pipelines and DevOps workflows. Knowledge of observability tools for database monitoring and proactive alerting. Cloud cost optimization and advanced performance tuning skills. Familiarity with incident response and resolution best practices.
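The query and index tuning described here can be demonstrated with the stdlib sqlite3 module as a stand-in for RDS PostgreSQL/MySQL: EXPLAIN QUERY PLAN shifts from a full table scan to an index search once an index exists. The table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reveals whether SQLite scans the table or uses an index;
    # the human-readable detail is the fourth column of each plan row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # index search on customer_id
```

The same plan-first workflow applies on PostgreSQL/MySQL via `EXPLAIN (ANALYZE)` / `EXPLAIN`.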

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Remote ML/AI Developer
Location: Remote / Hybrid (Preferred regions: EMEA, USA, UK, Japan)
Experience: 2–7 years
Type: Full-time / Contract

Your Role

As an ML/AI Developer, you will design, develop, and deploy machine learning models that solve real-world problems across industries. You’ll collaborate with product teams, data engineers, and backend developers to build smart, scalable solutions, ranging from predictive analytics and recommendation systems to generative AI and natural language processing.

Responsibilities

Build, train, and optimize machine learning and deep learning models. Work with structured and unstructured data for classification, regression, clustering, or NLP tasks. Develop APIs and pipelines to deploy models into production environments. Collaborate with data engineers to ensure clean, scalable, and usable datasets. Conduct model evaluation, tuning, and experimentation. Translate business problems into technical ML/AI solutions. Stay updated with the latest advancements in AI/ML frameworks and tools. Document experiments, model results, and deployment processes.

Tech Skills We Value

Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, or XGBoost. Experience with NLP, computer vision, recommendation systems, or generative AI models. Familiarity with data preprocessing, feature engineering, and model evaluation techniques. Experience with SQL, Pandas, NumPy, and data manipulation tools. Exposure to MLOps, Docker, REST APIs, and CI/CD for model deployment. Familiarity with AWS SageMaker, Google Vertex AI, or Azure ML is a plus. Understanding of data privacy, fairness, and ethical AI practices.

What We’re Looking For

2–5 years of experience in AI/ML development or applied data science. A strong foundation in statistics, machine learning theory, and model development. Proven experience building and deploying end-to-end ML solutions. Excellent problem-solving and analytical skills. Ability to work independently and in distributed remote teams. Passion for learning and applying new AI technologies.

What You’ll Get

Work on cutting-edge AI projects with innovative startups and global clients. 100% remote flexibility with freelance or long-term contract options. Access to real-world problems, high-quality datasets, and modern tech stacks. Join a collaborative global network of AI/ML engineers and data professionals. Growth opportunities in MLOps, AI product development, and tech leadership.
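Model evaluation, one of the responsibilities above, can be sketched without any ML framework. The toy function below computes precision, recall, and F1 for a binary classifier; production code would use sklearn.metrics, and the label vectors are made up.

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from true/predicted 0-1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 1, 1])
```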

Posted 3 days ago

Apply

5.0 years

15 - 18 Lacs

Hyderābād

On-site

#Connections #Hiring #SeniorDataScientist #Hyderabad #Experience

Hi Connections, we are hiring a Senior Data Scientist for our client.

Role: Sr. Data Scientist (Predictive Analytics Focus & Databricks)
Experience: 5 Years
Location: Hyderabad

Responsibilities: Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML. Build end-to-end ML pipelines (data ingestion → feature engineering → model training → deployment) on Databricks Lakehouse. Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking. Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs. Implement Delta Lake for scalable, ACID-compliant data workflows. Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions. Troubleshoot issues in Spark jobs and the Databricks environment.

Requirements: 3 to 5 years of experience in predictive analytics, with expertise in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark. Familiarity with MLflow, Feature Store, and Unity Catalog for governance. Industry experience in Life Insurance or P&C. Databricks Certified ML Practitioner certification is good to have.

Technical Skills: Python, PySpark, MLflow, Databricks AutoML. Predictive modelling (classification, clustering, regression, time series, and NLP). Cloud platform (Azure/AWS), Delta Lake, Unity Catalog.

Interested candidates, kindly share your updated profile to pavani@sandvcapitals.com or reach us on 7995292089. Thank you.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹1,800,000.00 per year
Experience: Data Scientist: 4 years (Required)
Work Location: In person
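Hyperparameter tuning of the kind this role describes (AutoML/MLflow on Databricks) reduces, at its core, to searching a grid for the setting with the lowest validation error. A pure-Python sketch using a one-dimensional ridge regression with invented data:

```python
def fit_ridge_1d(xs, ys, lam):
    """Closed-form ridge slope through the origin: sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(xs, ys, w):
    """Mean squared error of the linear model y = w * x."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
val_x, val_y = [4.0, 5.0], [8.0, 10.1]

# Grid search: pick the regularization strength with the best validation MSE
best_lam = min([0.0, 0.1, 1.0, 10.0],
               key=lambda lam: mse(val_x, val_y, fit_ridge_1d(train_x, train_y, lam)))
```

On Databricks the same loop would be distributed (e.g., Hyperopt/AutoML) with each trial's parameters and metrics logged to MLflow.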

Posted 3 days ago

Apply

10.0 - 12.0 years

9 - 10 Lacs

Hyderābād

On-site

Overview:

PepsiCo Data BI & Integration Platforms is seeking an experienced, highly skilled professional for managing and optimizing Apache and Oracle WebLogic server environments (on-premises and AWS/Azure cloud), ensuring high availability, performance, and security of PepsiCo’s global enterprise applications. The ideal candidate will have extensive hands-on experience and deep expertise in Apache and Oracle WebLogic administration, troubleshooting, and advanced configuration; deep hands-on experience with cloud Infrastructure as Code (IaC), cloud network design, cloud security principles, and cloud modernization and automation.

Responsibilities:

Leadership and Guidance – Manage and mentor a team of cloud platform infrastructure SMEs, providing technical leadership and direction.

Modernization – Migration and modernization of Apache/WebLogic to Azure/AWS.

Patching and Upgrades.

Troubleshooting and Problem Resolution – Identifying and resolving system and application issues, including performance degradation, connectivity problems, and security breaches. Participating in project planning and change management, including root cause analysis for issues.

On-Call Support – Providing on-call support for production environments.

Documentation – Creating and maintaining documentation of configuration changes, system processes, and troubleshooting procedures.

Collaboration – Working closely with development, operations, and other teams to support application lifecycle management and ensure smooth operation.

High Availability, Business Continuity and Disaster Recovery – Configuring and maintaining high availability and disaster recovery solutions, including clustering, failover mechanisms, and testing.

Apache/WebLogic Installation and Configuration – WebLogic: installation, configuration, and maintenance of WebLogic Server instances, including domains, clusters, and authentication providers.
WebLogic – Integrating WebLogic with other systems, such as web servers (Apache, etc.), messaging systems, and databases.

Apache – Installation, configuration, and maintenance of Apache web servers and Tomcat infrastructure.

Application Deployment

WebLogic – Deploying and managing applications (including WAR, EAR, and JAR files) on the WebLogic Server, ensuring proper configuration and integration.

Apache – Deploying and configuring web applications for serving static content and routing requests.

Apache/WebLogic – Performing capacity planning and forecasting for the application and web infrastructure.

Performance Tuning and Optimization

WebLogic – Optimizing the performance of WebLogic Server and applications through techniques like heap size configuration, thread dump analysis, and other performance tuning methods.

Apache/WebLogic – Monitoring server performance, identifying bottlenecks, and implementing optimizations to improve efficiency and responsiveness.

Security Administration

WebLogic – Implementing and managing security configurations/realms, including SSL/TLS, user authentication, and access control (users, groups, roles, and policies).

Apache – Managing security and access controls for the Apache environment and implementing secure coding practices.

Automation and Scripting

Developing and implementing scripts (e.g., WLST) to automate routine tasks and manage the WebLogic/Apache environment, including integration with Elastic, Splunk, and ServiceNow. Developing and implementing automation strategies, including CI/CD pipelines, and analyzing processes for improvements. Leverage Oracle Web Management Pack for automation.

Monitoring and Alerting

WebLogic – Monitoring server health, performance metrics, and logs, and tuning WebLogic configurations for optimal performance. Utilizing monitoring tools (e.g., Nagios, Zabbix) to track server health and performance, and troubleshooting issues and outages.
Apache – Monitoring the Apache environment to resolve issues and tracking website performance through analytics.

Cloud Infrastructure & Automation

Implement cloud infrastructure policies, standards, and best practices, ensuring cloud environment adherence to security and regulatory requirements. Design, deploy, and optimize cloud-based infrastructure using Azure/AWS services that meet the performance, availability, scalability, and reliability needs of our applications and services. Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS, Apache & Oracle). Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors. Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources. Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI. Work with stakeholders to document architectures, configurations, and best practices. Knowledge of cloud security principles around data protection, identity and access management (IAM), compliance and regulatory requirements, threat detection and prevention, and disaster recovery and business continuity.

Qualifications:

A bachelor’s degree in computer science or a related field, or equivalent experience. 10 to 12 years of experience in Apache/WebLogic server environments, including architecture, operations, and security, with at least 6 to 8 years of experience leading cloud migration/modernization.
Extensive hands-on experience with WebLogic: server architecture; deployment (deployment plans/descriptors); administration; Java and J2EE technologies; JMS and messaging bridges; relational databases (e.g., Oracle, Exadata); WebLogic Diagnostics Framework (WLDF) and Oracle Web Management Packs; MBeans and JMX; WLST and shell scripting; integration with cloud platforms (AWS, Azure); containerization using Docker and Kubernetes.

Extensive hands-on experience with Apache: web server administration, including IIS and Tomcat; configuring Apache to serve static content using Alias, Directory directives, and caching; routing dynamic requests using URL Rewrite (simple redirects and complex URL manipulation) and Virtual Hosts; performance tuning of modules and operating system settings; CDN; integration with cloud platforms (AWS, Azure); containerization using Docker and Kubernetes.

Extensive hands-on experience with Windows and Linux administration. Extensive hands-on experience with web servers (e.g., Apache, Nginx) and security realm configuration, including LDAP and custom security providers.

Extensive hands-on experience leading cloud migration and modernization, with experience/understanding in: AWS Elastic Beanstalk, Amazon EC2, ECS/EKS, Docker, AWS Application Migration Service, and microservice refactoring; Azure WebLogic Server, Virtual Machines, and AKS. Oracle certification in WebLogic or Azure/AWS certification is preferred.

Extensive hands-on experience implementing high availability and disaster recovery for Apache/WebLogic or with other cloud platform technologies. Deep knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, F5 load balancers, and virtual networks and subnets.
Proficient in scripting and automation tools, such as Bash, Perl, PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Strong self-organization, time management, and prioritization skills. An elevated level of attention to detail, excellent follow-through, and reliability. Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization. Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams. Strategic thinker focused on business-value results that utilize technical solutions. Strong communication skills in writing, speaking, and presenting. Capable of working effectively in a multi-tasking environment. Fluent in English.
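As one concrete illustration of the Apache duties above (Alias for static content, URL Rewrite, routing dynamic requests to WebLogic), a hypothetical VirtualHost might look like the following; host names, paths, and ports are placeholders, and the relevant modules (mod_rewrite, mod_proxy) are assumed loaded.

```apache
<VirtualHost *:80>
    ServerName app.example.com

    # Static content served directly by Apache
    Alias /static/ "/var/www/app/static/"
    <Directory "/var/www/app/static/">
        Require all granted
    </Directory>

    # Simple redirect for legacy URLs
    RewriteEngine On
    RewriteRule ^/old-app/(.*)$ /app/$1 [R=301,L]

    # Dynamic requests proxied to the WebLogic managed servers
    ProxyPass        /app/ http://weblogic-cluster:7001/app/
    ProxyPassReverse /app/ http://weblogic-cluster:7001/app/
</VirtualHost>
```

In production the proxy target would typically be a load-balanced WebLogic cluster address, with SSL termination on port 443.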

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

Date Posted: 2025-07-28
Country: India
Location: Phase-II, 7th Floor, Block-III, DLF Commercial Developer Ltd., Plot No. 129 to 132, APHB Colony, Gachibowli, Hyderabad, Telangana, India
Position Role Type: Unspecified

Job Description: This is a hands-on functional role requiring knowledge of Electronic Data Interchange (EDI) and B2B trading partner management, business requirement mapping, and communication to technical and leadership teams. The Senior EDI / B2B Functional Analyst supports: architecting solutions for EDI and B2B integrations; developing EDI and B2B connections between SAP ECC and partner applications; consulting with Collins leaders, business relationship managers, customers, and suppliers regarding EDI and B2B best practices and standards; and maintaining, monitoring, and troubleshooting the EDI / B2B applications.

Primary Responsibilities: Collaborates with external partners, 3rd-party providers, and internal stakeholders to identify and understand the functional or business requirements for enhancements and/or new functionality. Writes user stories and business requirements and translates them into functional specification and EDI mapping documents for the EDI standards X12, XML, EDIFACT, and SPEC2000. Gathers user feedback on the accuracy, efficiency, and functionality of the system. Adjusts and customizes the system based on user feedback and transaction compliance. Deploys completed systems and provides maintenance support. Trains end users on the proper use of the system. Sets up and tests inbound and outbound EDI trading partners. Builds relationships with EDI partners/vendors. Proactively monitors inbound and outbound EDI activity in the SAP production system, including Purchase Orders, Advance Ship Notices, Invoicing, 997 Functional Acknowledgements, and associated software. Analyzes problems and works with technical lead(s) to determine solutions. Creates and maintains documentation for mapping and process flows. Good knowledge of supply chain processes, order-to-cash, procure-to-pay, and logistics workflows. Needs to bridge the gap between technical development teams and business stakeholders to ensure seamless data exchange across trading partners, enabling efficient supply chain and business processes.

Basic Qualifications: 7+ years of hands-on technical experience. Experience with EDI and B2B system integrations. Experience with EDI and B2B providers. Familiarity with ERP systems, i.e., SAP and S/4HANA. Requirement mapping. Working collaboratively with others. Excellent verbal and written communication skills needed to interface with leadership, infrastructure, security, development, external vendors, and customers. Troubleshooting and resolving issues promptly, even under pressure.

Preferred Qualifications: Work experience with large aerospace OEMs. Highly analytical and focused on continuous improvement. Agile project experience. Understanding of Azure DevOps, pipelines, and deployment. Communicates effectively, manages multiple tasks, and follows through on commitments. Proficiency with Sterling Integrator, OpenText/GXS TrustedLink Enterprise, Bizlink, BIZ Mapper, and Boomi. Knowledge of EDI messages: X12, EDIFACT, XML. Knowledge of communication, authentication, and authorization protocols used for AS2, SFTP, MFT, PGP, and SSH. Understanding of DNS, TCP/IP protocols, clustering, load balancing, and firewalls. Understanding of SAP ECC NetWeaver technologies, i.e., IDocs, partner profiles, tRFCs, and SAP ALE.

RTX adheres to the principles of equal employment. All qualified applications will be given careful consideration without regard to ethnicity, color, religion, gender, sexual orientation or identity, national origin, age, disability, protected veteran status, or any other characteristic protected by law.

Privacy Policy and Terms: Click on this link to read the Policy and Terms
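X12 interchanges such as the 997 Functional Acknowledgement mentioned above are segment-oriented text. Below is a toy parser using only the conventional element (`*`) and segment (`~`) separators, with an invented 997 fragment; real interchanges also carry ISA/GS envelopes and control numbers, which are omitted here.

```python
def parse_x12(raw):
    """Split an X12-style payload into segments, each a list of elements."""
    segments = [s for s in raw.strip().split("~") if s]
    return [seg.split("*") for seg in segments]

# Hypothetical fragment of a 997 Functional Acknowledgement
sample = "ST*997*0001~AK1*PO*1234~AK9*A*1*1*1~SE*4*0001~"
parsed = parse_x12(sample)
ack_status = parsed[2][1]   # AK9 element 1: "A" conventionally means accepted
```

Production tools (Sterling Integrator, Boomi) perform this tokenization plus envelope validation and map the elements to business documents.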

Posted 3 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics.

About the Role: You would be part of the FCSD analytics team. As a Data Scientist on the team, you will collaborate within the team and work with business partners to understand business problems, explore data from various sources in GCP-Data Factory, and wrangle it to develop solutions using AI/ML algorithms that provide actionable insights delivering key results to Ford. The ideal candidate should have hands-on experience in building statistical/machine learning models adhering to the best practices of development and deployment in a cloud environment. This role requires solid problem-solving skills, business acumen, and a passion for leveraging data science/AI skills to drive business results.

Responsibilities

Build an in-depth understanding of the business domain and data sources. Extract and analyse data from databases/data warehouses to gain insights and discover trends and patterns with clear objectives in mind. Design and implement scalable analytical solutions in the Google Cloud environment. Work closely with Product Owners, Product Managers, software engineers, and data engineers to build products in an agile environment. Operationalize AI/ML/LLM models by integrating with upstream and downstream business processes. Communicate results to business teams through effective presentations. Work with business partners through problem formulation, data management, solutions development, operationalization, and solutions management. Identify opportunities to build analytical solutions driving business value, leveraging various data sources.

Qualifications

At least 2 years of relevant work experience in solving business problems using data science. Bachelor's/master’s degree in a quantitative domain such as Statistics, Computer Science, Mathematics, or Engineering, with an MBA from a premier institute (BE, MS, MBA, BSc/MSc in Computer Science/Statistics) or equivalent. 2+ years of experience with SQL and Python delivering analytical solutions in a production environment. At least 1 year of experience working in a cloud environment (GCP, AWS, or Azure). 2+ years of experience in conducting statistical data analysis (EDA, forecasting, clustering, etc.) and machine learning techniques (classification/regression, NLP).

Technical Skills: Proficient in BigQuery/SQL and Python. Advanced SQL knowledge to handle large data and optimize queries. Working knowledge of the GCP environment (BigQuery, Vertex AI) to develop and deploy machine learning models. Nice to have: exposure to GenAI/LLMs.

Functional Skills: Understanding and formulating business problem statements and converting them into data science problems. Self-motivated with excellent verbal and written skills. Strong organizational, time management, and decision-making skills. Excellent problem-solving and interpersonal skills.
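The EDA work described here can be sketched with the stdlib statistics module; the warranty-cost figures below are synthetic, and real work would pull data from BigQuery into pandas as the posting implies.

```python
import statistics

# Synthetic per-claim warranty costs (illustrative only)
costs = [120.0, 135.5, 98.0, 410.0, 127.5, 131.0, 105.0]

summary = {
    "mean": statistics.mean(costs),
    "median": statistics.median(costs),
    "stdev": statistics.stdev(costs),   # sample standard deviation
}

# Flag potential outliers: points more than 2 standard deviations from the mean
outliers = [c for c in costs if abs(c - summary["mean"]) > 2 * summary["stdev"]]
```

The gap between mean and median already signals skew; the single flagged claim would then be investigated before any forecasting or clustering step.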

Posted 3 days ago

Apply

0 years

2 Lacs

Gurgaon

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Senior Data Scientist AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard providing a foundational, competitive advantage for the future. All internal processes, all products and services will be enabled by AI continuously advancing our value proposition, consumer experience, and efficiency. Opportunity Join Mastercard's AI Garage @ Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate. 
Role

Ensure all AI solution development is in line with industry standards for data management and privacy compliance, including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard. Adopt a pragmatic approach to AI, capable of articulating complex technical requirements in a manner that is simple and relevant to stakeholder use cases. Gather relevant information to define the business problem, interfacing with global stakeholders. Creative thinker capable of linking AI methodologies to identified business challenges. Identify commonalities amongst use cases enabling a microservice approach to scaling AI at Mastercard, building reusable, multi-purpose models. Develop AI/ML solutions/applications leveraging the latest industry and academic advancements. Leverage open and closed source technologies to solve business problems. Ability to work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda. Partner with technical teams to implement developed solutions/applications in production environments. Support a learning culture continuously advancing AI capabilities.

All About You

Experience in the Data Sciences field with a focus on AI strategy and execution, and developing solutions from scratch. Demonstrated passion for AI, e.g., competing in sponsored challenges such as Kaggle. Previous experience with or exposure to:

Deep learning algorithm techniques, open source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL.

Big Data platforms such as Hadoop, Hive, Spark, and GPU clusters for deep learning.

Classical machine learning algorithms like Logistic Regression, Decision Trees, Clustering (K-means, Hierarchical, and Self-Organizing Maps), t-SNE, PCA, Bayesian models, Time Series (ARIMA/ARMA), and Recommender Systems (Collaborative Filtering, FPMC, FISM, Fossil).

Machine learning techniques like Random Forest, GBM, KNN, SVM, Bayesian methods, and text mining; Multilayer Perceptrons and neural networks (feedforward, CNN, LSTMs, GRUs) are a plus. Optimization techniques: activity regularization (L1 and L2), Adam, Adagrad, and Adadelta; cost functions in neural nets: contrastive loss, hinge loss, binary cross-entropy, and categorical cross-entropy; developed applications in KRR, NLP, speech, and image processing.

Deep learning frameworks for production systems like TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch, XGBoost, Caffe, and Theano is a plus.

Exposure or experience using collaboration tools such as: Confluence (documentation), Bitbucket/Stash (code sharing), shared folders (file sharing), and ALM (project management).

Knowledge of the payments industry is a plus. Experience with the SAFe (Scaled Agile Framework) process is a plus.

Effectiveness

Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum. Capable of navigating a complex organization in a relentless pursuit of answers and clarity. Enthusiasm for Data Sciences, embracing the creative application of AI techniques to improve an organization's effectiveness. Ability to understand technical system architecture and overarching function along with interdependency elements, as well as anticipate challenges for immediate remediation. Ability to unpack complex problems into addressable segments and evaluate the AI methods most applicable to addressing each segment. Incredible attention to detail and focus, instilling confidence without qualification in developed solutions.

Core Capabilities

Strong written and oral communication skills. Strong project management skills. Concentration in Computer Science. Some international travel required.

Corporate Security Responsibility

All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 3 days ago

Apply

0 years

2 Lacs

Gurgaon

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Manager Data Scientist AI Garage is responsible for establishing Mastercard as an AI powerhouse. AI will be leveraged and implemented at scale within Mastercard providing a foundational, competitive advantage for the future. All internal processes, all products and services will be enabled by AI continuously advancing our value proposition, consumer experience, and efficiency. Opportunity Join Mastercard's AI Garage @ Gurgaon, a newly created strategic business unit executing on identified use cases for product optimization and operational efficiency securing Mastercard's competitive advantage through all things AI. The AI professional will be responsible for the creative application and execution of AI use cases, working collaboratively with other AI professionals and business stakeholders to effectively drive the AI mandate. 
Role Ensure all AI solution development is in line with industry standards for data management and privacy compliance including the collection, use, storage, access, retention, output, reporting, and quality of data at Mastercard Adopt a pragmatic approach to AI, capable of articulating complex technical requirements in a manner that is simple and relevant to stakeholder use cases Gather relevant information to define the business problem interfacing with global stakeholders Creative thinker capable of linking AI methodologies to identified business challenges Identify commonalities amongst use cases enabling a microservice approach to scaling AI at Mastercard, building reusable, multi-purpose models Develop AI/ML solutions/applications leveraging the latest industry and academic advancements Leverage open and closed source technologies to solve business problems Ability to work cross-functionally and across borders, drawing on a broader team of colleagues to effectively execute the AI agenda Partner with technical teams to implement developed solutions/applications in a production environment Support a learning culture continuously advancing AI capabilities All About You Experience Experience in the Data Sciences field with a focus on AI strategy and execution and developing solutions from scratch Demonstrated passion for AI, competing in sponsored challenges such as Kaggle Previous experience with or exposure to: o Deep Learning algorithm techniques, open source tools and technologies, statistical tools, and programming environments such as Python, R, and SQL o Big Data platforms such as Hadoop, Hive, Spark, GPU Clusters for deep learning o Classical Machine Learning Algorithms like Logistic Regression, Decision trees, Clustering (K-means, Hierarchical and Self-organizing Maps), TSNE, PCA, Bayesian models, Time Series ARIMA/ARMA, Recommender Systems - Collaborative Filtering, FPMC, FISM, Fossil o Deep Learning algorithm techniques like Random Forest, GBM, KNN, SVM,
Bayesian, Text Mining techniques, Multilayer Perceptron, Neural Networks – Feedforward, CNN, LSTMs, GRUs is a plus. Optimization techniques – Activity regularization (L1 and L2), Adam, Adagrad, Adadelta concepts; Cost Functions in Neural Nets – Contrastive Loss, Hinge Loss, Binary Cross-Entropy, Categorical Cross-Entropy; developed applications in KRR, NLP, Speech and Image processing o Deep Learning frameworks for Production Systems like TensorFlow, Keras (for RPD and neural net architecture evaluation), PyTorch and XGBoost, Caffe, and Theano is a plus Exposure or experience using collaboration tools such as: o Confluence (Documentation) o Bitbucket/Stash (Code Sharing) o Shared Folders (File Sharing) o ALM (Project Management) Knowledge of the payments industry a plus Experience with SAFe (Scaled Agile Framework) process is a plus Effectiveness Effective at managing and validating assumptions with key stakeholders in compressed timeframes, without hampering development momentum Capable of navigating a complex organization in a relentless pursuit of answers and clarity Enthusiasm for Data Sciences, embracing the creative application of AI techniques to improve an organization's effectiveness Ability to understand technical system architecture and overarching function along with interdependency elements, as well as anticipate challenges for immediate remediation Ability to unpack complex problems into addressable segments and evaluate the AI methods most applicable to addressing each segment Incredible attention to detail and focus, instilling confidence without qualification in developed solutions Core Capabilities Strong written and oral communication skills Strong project management skills Concentration in Computer Science Some international travel required #AI1 Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 3 days ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552 Services Gurgaon Posted On 28 Jul 2025 End Date 11 Sep 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code D013514 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 2500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Analytics - UK & Europe Organization Services LOB Analytics - UK & Europe SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill JAVA HTML Minimum Qualification B.COM Certification No data available Job Description Job Description: Senior Full Stack Developer Position: Senior Full Stack Developer Location: Gurugram Relevant Experience Required: 8+ years Employment Type: Full-time About the Role We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization. Key Responsibilities Front-End Development Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components. Back-End Development Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express.
Design and implement microservices & event-driven architectures. Optimize server performance and ensure secure API integrations. Database & Data Management Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently. Real-Time Processing & Visualization Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE). Visualize large-scale data in real time for dashboards and BI applications. DevOps & Deployment Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack. Good to have AI & Advanced Capabilities Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video). 
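As a quick illustration of the Server-Sent Events (SSE) channel this role works with: an SSE response is just a text stream of `event:`/`data:` fields terminated by a blank line. The sketch below (not from the listing; the `format_sse` helper and the sample payload are invented) shows the wire format a live-notification feature would emit.

```python
# Hedged sketch: serialize one Server-Sent Events (SSE) message.
# Field names ("event:", "data:") follow the SSE wire format; the
# helper name and payload here are hypothetical, for illustration.

def format_sse(data, event=None):
    """Serialize one SSE message; each line of data gets its own 'data:' field."""
    lines = [f"data: {chunk}" for chunk in (data.splitlines() or [""])]
    if event is not None:
        lines.insert(0, f"event: {event}")
    return "\n".join(lines) + "\n\n"  # a blank line terminates the event

msg = format_sse("price updated\nsymbol=ACME", event="notification")
print(msg)
```

In a real deployment this string would be written to a long-lived HTTP response (e.g. behind Django or Express), with the browser consuming it via `EventSource`.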
Preferred Skills & Qualifications Core Stack Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI Back-End: Python (Django/DRF), Node.js/Express Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma) APIs: REST, GraphQL, gRPC State-of-the-Art & Advanced Tools Streaming: Apache Kafka, Apache Pulsar, Redis Streams Visualization: D3.js, Highcharts, Plotly, Deck.gl Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD Cloud: AWS Lambda, Azure Functions, Google Cloud Run Monitoring: Prometheus, Grafana, OpenTelemetry Workflow Workflow Type Back Office

Posted 3 days ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Job Description: About AML RightSource We are AML RightSource, the leading technology-enabled managed services firm focused on fighting financial crime for our clients and the world. Headquartered in Cleveland, Ohio, and operating across the globe, we are a trusted partner to our financial institution, FinTech, money service business, and corporate clients. Using a blend of highly trained anti-financial crime professionals, cutting-edge technology tools, and industry-leading consultants, we help clients with their AML/BSA, transaction monitoring, client onboarding (KYC), enhanced due diligence (EDD), and risk management needs. We support clients in meeting day-to-day compliance tasks, urgent projects, and strategic changes. Globally, our staff of more than 4,000 highly trained analysts and subject matter experts is the industry's largest group of full-time compliance professionals. Together with our clients, we are Reimagining Compliance. Core Competencies & Expertise AML & KYC Compliance – Conducting CDD and EDD on customers, including high-risk entities, politically exposed persons (PEPs), and businesses operating in crypto-related activities. Crypto Transaction Monitoring – Investigating on-chain and off-chain transactions to identify potential risks related to mixers, tumblers, darknet markets, and high-risk jurisdictions. Blockchain Analytics Tools – Hands-on experience using: Chainalysis – Wallet clustering, transaction tracing, exposure risk scoring. TRM Labs – Address screening, smart contract analytics, fraud detection. Gemini – Exchange compliance monitoring, blockchain forensic investigations. Sanctions & Adverse Media Screening – Screening wallets, counterparties, and entities against OFAC, UN, EU, and FATF watchlists. Regulatory Compliance – Strong understanding of FinCEN, FATF, SEC, FCA, and MAS crypto compliance frameworks. SAR/STR Filing – Drafting and submitting Suspicious Activity Reports (SARs) for regulatory reporting. 
Risk Assessment & Escalation – Providing insights on crypto-specific typologies, including DeFi exploits, NFT wash trading, and stablecoin risks. Cross-functional Collaboration – Working with internal fraud teams, law enforcement, and regulators to investigate and mitigate crypto financial crimes. Key Responsibilities Customer Due Diligence (CDD) & Enhanced Due Diligence (EDD) – Crypto-Specific Conduct CDD/EDD for crypto customers, including individuals, exchanges, OTC desks, and institutional clients. Assess the source of wealth and source of funds (SOW/SOF) for crypto-related transactions. Verify wallet addresses, transaction histories, and counterparties for potential illicit activity. Utilize Chainalysis Reactor/TRM Labs to investigate high-risk wallet interactions. Crypto Transaction Monitoring & Risk Detection Monitor real-time crypto transactions for suspicious patterns using Gemini, Chainalysis KYT, and TRM Labs. Detect and analyze trends in illicit activities, such as mixing services, cross-chain swaps, and sanction evasion techniques. Investigations & Reporting Conduct blockchain forensics on cryptocurrency to track fund flows. File Suspicious Activity Reports (SARs) / Suspicious Transaction Reports (STRs) for money laundering, fraud, and terrorist financing cases. Sanctions & Adverse Media Screening Screen crypto wallet addresses and counterparties against OFAC SDN, EU, UN, and other sanctions lists. Conduct adverse media research on high-risk crypto businesses. Regulatory Compliance & Risk Management Ensure compliance with FATF Travel Rule, FinCEN requirements, and global AML/CFT regulations. Stay updated on crypto-related enforcement actions and emerging risks.
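At its simplest, the wallet-screening step described above reduces to an exact-match lookup of normalized addresses against a watchlist (real screening tools add fuzzy matching, clustering, and exposure scoring on top). The sketch below is illustrative only; the addresses and the list contents are made up, not actual SDN entries.

```python
# Hedged sketch of sanctions screening: normalize an address and
# check it against a watchlist set. Sample addresses are invented.

SANCTIONED_WALLETS = {
    "0xabc0000000000000000000000000000000000001",  # hypothetical listed wallet
    "0xabc0000000000000000000000000000000000002",
}

def screen_wallet(address, watchlist=SANCTIONED_WALLETS):
    """Return True if the case-normalized address appears on the watchlist."""
    return address.strip().lower() in watchlist

hits = [a for a in ["0xABC0000000000000000000000000000000000001",
                    "0xdef0000000000000000000000000000000000009"]
        if screen_wallet(a)]
print(hits)  # only the first address matches after normalization
```

Production systems would pull the watchlist from vendor feeds (OFAC, EU, UN) and escalate hits into the SAR/STR workflow rather than just returning a boolean.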
Preferred Certifications Crypto-Specific Certifications: Certified Cryptocurrency Investigator (CCI) Chainalysis Cryptocurrency Fundamentals Certification (CCFC) TRM Academy Certifications AML & Compliance Certifications: Certified Anti-Money Laundering Specialist (CAMS) ICA Advanced Certificate in AML & Crypto Compliance Certified Financial Crime Specialist (CFCS) AML RightSource is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. We provide equal employment opportunities to all qualified applicants without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities Design, develop, and maintain scalable data pipelines using AWS services and Snowflake. Build and manage data transformation workflows using dbt. Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, reliable, and well-documented datasets. Optimize Snowflake performance through clustering, partitioning, and query tuning. Implement data quality checks, testing, and documentation within dbt. Automate data workflows and integrate with CI/CD pipelines. Ensure data governance, security, and compliance across cloud platforms. Required Skills & Qualifications Strong experience with Snowflake (data modeling, performance tuning, security). Proficiency in dbt (models, macros, testing, documentation). Solid understanding of AWS services such as S3, Lambda, Glue, and IAM. Experience with SQL and scripting languages (e.g., Python). Familiarity with version control systems (e.g., Git) and CI/CD tools. Strong problem-solving skills and attention to detail.
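The data-quality checks mentioned above correspond to dbt's built-in `not_null` and `unique` schema tests, which in dbt are declared in YAML and compiled to SQL. As an illustration only (toy data, invented column names, plain Python standing in for the compiled SQL), the predicates look like this:

```python
# Hedged sketch: the logic behind dbt's not_null / unique tests,
# re-implemented over a toy in-memory table. Column names invented.

rows = [
    {"order_id": 1, "customer": "a"},
    {"order_id": 2, "customer": None},
    {"order_id": 2, "customer": "c"},
]

def check_not_null(rows, col):
    """Indices of rows failing a not_null test on col."""
    return [i for i, r in enumerate(rows) if r[col] is None]

def check_unique(rows, col):
    """Values of col failing a unique test (seen more than once)."""
    seen, dupes = set(), set()
    for r in rows:
        v = r[col]
        dupes.add(v) if v in seen else seen.add(v)
    return sorted(dupes)

print(check_not_null(rows, "customer"))  # row 1 has a NULL customer
print(check_unique(rows, "order_id"))    # order_id 2 is duplicated
```

In dbt proper these would be two lines under the model's `columns:` block in `schema.yml`, and failures would surface in `dbt test` output rather than as return values.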

Posted 3 days ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Pune

Work from Office

Introducing Thinkproject Platform Pioneering a new era and offering a cohesive alternative to the fragmented landscape of construction software, Thinkproject seamlessly integrates the most extensive portfolio of mature solutions with an innovative platform, providing unparalleled features, integrations, user experiences, and synergies. By combining information management expertise and in-depth knowledge of the building, infrastructure, and energy industries, Thinkproject empowers customers to efficiently deliver, operate, regenerate, and dispose of their built assets across their entire lifecycle through a Connected Data Ecosystem. We are seeking a hands-on Applied Machine Learning Engineer to join our team and lead the development of ML-driven insights from historical data in our contracts management, assets management and common data platform. This individual will work closely with our data engineering and product teams to design, develop, and deploy scalable machine learning models that can parse, learn from, and generate value from both structured and unstructured contract data. You will use BigQuery and its ML capabilities (including SQL and Python integrations) to prototype and productionize models across a variety of NLP and predictive analytics use cases. Your work will be critical in enhancing our platform's intelligence layer, including search, classification, recommendations, and risk detection. What your day will look like Key Responsibilities Model Development: Design and implement machine learning models using structured and unstructured historical contract data to support intelligent document search, clause classification, metadata extraction, and contract risk scoring. BigQuery ML Integration: Build, train, and deploy ML models directly within BigQuery using SQL and/or Python, leveraging native GCP tools (e.g., Vertex AI, Dataflow, Pub/Sub). Data Preprocessing & Feature Engineering: Clean, enrich, and transform raw data (e.g., legal clauses, metadata, audit trails) into model-ready features using scalable and efficient pipelines. Model Evaluation & Experimentation: Conduct experiments, model validation, A/B testing, and iterate based on precision, recall, F1-score, RMSE, etc. Deployment & Monitoring: Operationalize models in production environments with monitoring, retraining pipelines, and CI/CD best practices for ML (MLOps). Collaboration: Work cross-functionally with data engineers, product managers, legal domain experts, and frontend teams to align ML solutions with product needs. What you need to fulfill the role Skills And Experience Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. ML Expertise: Strong applied knowledge of supervised and unsupervised learning, classification, regression, clustering, feature engineering, and model evaluation. NLP Experience: Hands-on experience working with textual data, especially in NLP use cases like entity extraction, classification, and summarization. GCP & BigQuery: Proficiency with Google Cloud Platform, especially BigQuery and BigQuery ML; comfort querying large-scale datasets and integrating with external ML tooling. Programming: Proficient in Python and SQL; familiarity with libraries such as Scikit-learn, TensorFlow, PyTorch, Keras. MLOps Knowledge: Experience with model deployment, monitoring, versioning, and ML CI/CD best practices. Data Engineering Alignment: Comfortable working with data pipelines and tools like Apache Beam, Dataflow, Cloud Composer, and pub/sub systems. Version Control: Strong Git skills and experience collaborating in Agile teams. Preferred Qualifications Experience working with contractual or legal text datasets. Familiarity with document management systems, annotation tools, or enterprise collaboration platforms. Exposure to Vertex AI, LangChain, RAG-based retrieval, or embedding models for Gen AI use cases. Comfortable working in a fast-paced, iterative environment
with changing priorities. What we offer Lunch 'n' Learn Sessions I Women's Network I LGBTQIA+ Network I Coffee Chat Roulette I Free English Lessons I Thinkproject Academy I Social Events I Volunteering Activities I Open Forum with Leadership Team (Tp Caf) I Hybrid working I Unlimited learning We are a passionate bunch here. To join Thinkproject is to shape what our company becomes. We take feedback from our staff very seriously and give them the tools they need to help us create our fantastic culture of mutual respect. We believe that investing in our staff is crucial to the success of our business. Your contact: Mehal Mehta. Please submit your application, including salary expectations and potential date of entry, by submitting the form on the next page. Working at thinkproject: think career, think ahead.
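The evaluation metrics this role iterates on (precision, recall, F1) have standard definitions worth keeping at hand. A minimal sketch, using the textbook formulas rather than any Thinkproject code:

```python
# Hedged sketch: precision, recall, and F1 from binary predictions,
# using the standard TP/FP/FN definitions. Toy labels below.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 1, 0, 1])
print(round(p, 3), round(r, 3), round(f, 3))
```

Inside BigQuery ML the same numbers come back from `ML.EVALUATE` on a trained model (e.g. one created with `CREATE MODEL ... OPTIONS(model_type='logistic_reg')`), so this hand-rolled version is only for intuition.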

Posted 3 days ago

Apply

12.0 years

0 Lacs

Noida

On-site

Position Summary The AEM Technical Architect (TA) position is a client-facing role requiring both technical and business/marketing knowledge and skills. The TA works to gather & understand the Client's unique business requirements and provide expert guidance, sharing best practices & recommendations with our Customer/Implementation Partners in building customized solutions to meet their business reporting needs through the AEM platform. The TA also performs quality checks to ensure that the implementation cycle follows industry best practices, flags all technical issues, and highlights risks when they arise. The TA works with Clients to strategize and drive business value from the platform and enable them to adopt & scale up in their maturity roadmap. It is a technical advisory role with certain hands-on support; it requires solid technical acumen in digital platform implementation and involves constant customer interaction. What you'll do Be a recognized expert/SME for internal and regional stakeholders. Take leadership during project delivery and own Project Management responsibilities. Act as a Team Lead for small to large, multi-solution consulting engagements which may involve interactions with multiple teams from Client or partner organizations. Build trusted advisor relationships with our Clients & implementation Partners. Adapt to and work effectively with a variety of clients and in challenging situations, establishing credibility and trust quickly. Work on own initiative without a need for direction for most consulting activities. Gain understanding of client business requirements, key performance indicators and other functional and/or technical use cases. Review overall solution architecture and custom design solutions for AEM (Sites, Assets and Forms), technical approach and go-live readiness. Review assessments & recommendations documents and liaise with technical consultants.
Communicate effectively to Customer/Implementation Partner teams on AEM assessments & recommendations, gaps and risks. Provide advisory to key stakeholders with industry best practices & recommendations throughout the implementation process to drive Customer success and ROI. Interact frequently with Client/Implementation Partner teams - marketers, analysts, web developers, QA team, and C-level executives, mainly via conference calls or emails. Manage customer expectations of response time & issue resolution and keep projects on schedule and within scope. Troubleshoot and reproduce the technical problems reported by customers and define workarounds. Effectively analyze complex project issues, devise optimal solutions, and facilitate the recommendations to the Clients and Partners. Proactively maintain the highest level of technical expertise by staying current on DX technologies and solutions through internally and externally available learning opportunities as well as self-study. Provide thought leadership to the team and wider consulting community helping to set future strategic direction. Participate within the technical community to develop and share best practices and processes. Enable existing/new team members with new product features, delivery processes, project-based learnings and support with any issues or queries. Foster teamwork among consultants and cross functional teams. Technical writing and PowerPoint presentation creation. What you need to succeed Must Have – 12+ years of experience as a client-facing consultant with strong experience in AEM implementation & understanding in areas – o UI technologies like JQuery, Java Script, HTML 5, CSS. o Technologies like Java EE, Servlets, JSP, Tag libraries, and JSTL skills. o Dispatcher Configuration, Clustering, CRX repository, Workflows, Replication and Performance management. o Application development, distributed application development and Internet/Intranet based database applications. 
o AEM sites/assets/forms deployment and migration. o AEM Backend Development like Sling Servlets, OSGi Components and JCR Queries o Core frameworks such as Apache Sling and Apache Felix. o CI/CD tools like Maven, Jenkins. o Code Quality and Security tools like SONAR. o Touch UI, Sightly (HTL) and Sling Models. o Software design patterns Leading consulting teams in a Technical Architect capacity Problem analysis and resolution of technical problems. Experience working effectively on multiple Consulting engagements. Ability to handle clients professionally during all interfaces. Experience presenting in front of various Client-side audiences. Exceptional organizational, presentation, and communication skills - both verbal and written. Must be self-motivated, responsive, professional and dedicated to customer success. Possess an innovative, problem-solving, and solutions-oriented mindset. Demonstrated ability to learn quickly, be a team player, and manage change effectively. Preferably a degree in Computer Science or Engineering. Preference will be given for – Experience in a techno-managerial role in a large consulting organization with project/people management responsibilities. Knowledge of the latest AEM features and of the new cloud technology – AEMaaCS. Experience with the Cloud Manager deployment tool. Certified ScrumMaster and/or PMP Certification. Knowledge of Agile methodologies. Good understanding of integration of AEM with other DX solutions – Commerce, Analytics, Target, Audience Manager etc. would be a plus. Experience presenting in front of various technical and business audiences. Ability to work extended hours to overlap with North America timings Job Type: Full-time Application Question(s): How many years of experience as a client-facing consultant with strong experience in AEM implementation & understanding in areas – o UI technologies like JQuery, Java Script, HTML 5, CSS? Work Location: In person

Posted 3 days ago

Apply

4.0 - 7.0 years

8 - 9 Lacs

Noida

On-site

Roles and Responsibilities Assistant Managers must understand client objectives and collaborate with the Project Lead to design effective analytical frameworks. They should translate requirements into clear deliverables with defined priorities and constraints. Responsibilities include managing data preparation, performing quality checks, and ensuring analysis readiness. They should implement analytical techniques and machine learning methods such as regression, decision trees, segmentation, forecasting, and algorithms like Random Forest, SVM, and ANN. They are expected to perform sanity checks and quality control of their own work as well as that of junior analysts to ensure accuracy. The ability to interpret results in a business context and identify actionable insights is critical. Assistant Managers should handle client communications independently and interact with onsite leads, discussing deliverables and addressing queries over calls or video conferences. They are responsible for managing the entire project lifecycle from initiation to delivery, ensuring timelines and budgets are met. This includes translating business requirements into technical specifications, managing data teams, ensuring data integrity, and facilitating clear communication between business and technical stakeholders. They should lead process improvements in analytics and act as project leads for cross-functional coordination. Client Management They serve as client leads, maintaining strong relationships and making key decisions. They participate in deliverable discussions and guide project teams on next steps and execution strategy. Technical Requirements Assistant Managers must know how to connect databases with Knime (e.g., Snowflake, SQL) and understand SQL concepts such as joins and unions. They should be able to read/write data to and from databases and use macros and schedulers to automate workflows. They must design and manage Knime ETL workflows to support BI tools and ensure end-to-end data validation and documentation. Proficiency in PowerBI is required for building dashboards and supporting data-driven decision-making. They must be capable of leading analytics projects using PowerBI, Python, and SQL to generate insights. Visualizing key findings using PowerPoint or BI tools like Tableau or Qlikview is essential. Ideal Candidate Candidates should have 4–7 years of experience in advanced analytics across Marketing, CRM, or Pricing in Retail or CPG. Experience in other B2C domains is acceptable. They must be skilled in handling large datasets using Python, R, or SAS and have worked with multiple analytics or machine learning techniques. Comfort with client interactions and working independently is expected, along with a good understanding of consumer sectors such as Retail, CPG, or Telecom. They should have experience with various data formats and platforms including flat files, RDBMS, Knime workflows and server, SQL Server, Teradata, Hadoop, and Spark (on-prem or in the cloud). Basic knowledge of statistical and machine learning techniques like regression, clustering, decision trees, forecasting (e.g., ARIMA), and other ML models is required. Other Skills Strong written and verbal communication is essential. They should be capable of creating client-ready deliverables using Excel and PowerPoint. Knowledge of optimization methods, supply chain concepts, VBA, Excel Macros, Tableau, and Qlikview will be an added advantage. Qualifications Engineers from top tier institutes (IITs, DCE/NSIT, NITs) or Post Graduates in Maths/Statistics/OR from top Tier Colleges/Universities. MBA from top tier B-schools. Job Location
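The SQL concepts named in the requirements (joins and unions) can be demonstrated in a few lines. A minimal sketch using Python's built-in sqlite3 so it runs anywhere; the toy tables and values are invented, and in this role the sources would be Snowflake, SQL Server, or Teradata behind Knime:

```python
# Hedged sketch: JOIN widens rows by matching keys across tables;
# UNION ALL stacks rows from same-shaped tables. Toy data below.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales_2023 (cust TEXT, amt INT);
    CREATE TABLE sales_2024 (cust TEXT, amt INT);
    CREATE TABLE region (cust TEXT, name TEXT);
    INSERT INTO sales_2023 VALUES ('a', 10), ('b', 20);
    INSERT INTO sales_2024 VALUES ('a', 30);
    INSERT INTO region VALUES ('a', 'north'), ('b', 'south');
""")

joined = con.execute(
    "SELECT s.cust, s.amt, r.name FROM sales_2024 s "
    "JOIN region r ON s.cust = r.cust"
).fetchall()

stacked = con.execute(
    "SELECT cust, amt FROM sales_2023 "
    "UNION ALL SELECT cust, amt FROM sales_2024"
).fetchall()

print(joined)   # one 2024 row, enriched with its region
print(stacked)  # all three sales rows across both years
```

Plain `UNION` (without `ALL`) would additionally deduplicate the stacked rows, which is the distinction interviewers usually probe.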

Posted 3 days ago

Apply

3.0 years

0 Lacs

Noida

On-site

Position Overview: Here at ShyftLabs, we are looking for an experienced Data Scientist who can derive performance improvement and cost efficiency in our product through a deep understanding of the ML and infra system, and provide data-driven insights and scientific solutions. ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation. Job Responsibilities: Data Analysis and Research: Analyzing large datasets with queries and scripts, extracting valuable signals out of noise, and producing actionable insights into how we could complete and improve a complex ML and bidding system. Simulation and Modelling: Validating and quantifying the efficiency and performance gain from hypotheses through rigorous simulation and modelling. Experimentation and Causal Inference: Developing a robust experiment design and metric framework, and providing reliable and unbiased insights for product and business decision making. Basic Qualifications: Master's degree in a quantitative discipline or equivalent. 3+ years minimum professional experience. Distinctive problem-solving skills, good at articulating product questions, pulling data from large datasets and using statistics to arrive at a recommendation. Excellent verbal and written communication skills, with the ability to present information and analysis results effectively. Ability to build positive relationships within ShyftLabs and with our stakeholders, and work effectively with cross-functional partners in a global company. Statistics: Must have strong knowledge and experience in experimental design, hypothesis testing, and various statistical analysis techniques such as regression or linear models.
Machine Learning: Must have a deep understanding of ML algorithms (e.g., deep learning, random forests, gradient-boosted trees, k-means clustering) and their development, validation, and evaluation. Programming: Experience with Python, R, or another scripting language, and with a database language (e.g., SQL) or data-manipulation library (e.g., Pandas). We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
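The experimentation and hypothesis-testing duties above typically start with a two-sample comparison. Below is a minimal plain-Python sketch of Welch's t statistic; the control/treated numbers are made up for illustration, and a production analysis would normally reach for `scipy.stats.ttest_ind(equal_var=False)` instead.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom.
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

# Hypothetical A/B metric readings per unit (e.g., revenue per session).
control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [10.9, 11.2, 10.8, 11.0, 11.1]
t, dof = welch_t(treated, control)
```

The t statistic is then compared against the t distribution with `dof` degrees of freedom to obtain a p-value.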

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Bay6.ai is an innovative company at the forefront of AI technology, delivering cutting-edge solutions that help businesses leverage the power of artificial intelligence. Bay6.ai is seeking an experienced and dynamic Senior/Lead Data Scientist to join our growing team. This individual will be responsible for the design and development of AI/ML predictive models that augment our decision support systems using customers' historical data sets. As a Senior Data Scientist, you will work closely with cross-functional teams to ensure that our AI-driven solutions provide actionable insights and value to our clients.

Job Responsibilities

1. Lead and architect the design and development of AI/ML predictive models that significantly enhance decision-making processes for clients, leveraging their own historical data sets and industry insights.
2. Engage with senior executives and key stakeholders to fully understand their business needs, strategic objectives, and data requirements, ensuring the AI/ML models are precisely tailored to deliver maximum business value.
3. Oversee the implementation, testing, and validation of machine learning algorithms, ensuring that the models are not only accurate but also scalable, reliable, and robust for enterprise-level production environments.
4. Collaborate cross-functionally with product managers, engineers, and data scientists to integrate AI/ML models into the product ecosystem, optimizing performance and efficiency across multiple teams.
5. Provide expert analysis and interpretation of complex data sets from various sources, delivering actionable insights that inform business decisions and improve the accuracy and predictive power of models.
6. Stay ahead of emerging AI/ML trends and innovations, actively incorporating cutting-edge research, techniques, and best practices into the modeling and development processes.
7. Drive the technical vision and strategic direction for AI/ML initiatives, mentoring and guiding less experienced team members while also establishing best practices and a culture of continuous improvement.
8. Partner with business analysts and frontend, mid-tier, and backend developers to ensure the development, deployment, and performance of production applications that effectively utilize AI/ML models to deliver real-world impact.
9. Take ownership of solving complex, ambiguous problems with minimal supervision, applying advanced theoretical knowledge to conceptualize, simulate, and implement AI/ML solutions.
10. Define and manage data requirements and data quality assessments, orchestrating the extraction, transformation, and integration of data for analytical and modeling projects.

Required Experience

✓ 10+ years of experience in developing and deploying AI/ML predictive models in production environments, with a strong portfolio of successful enterprise-scale projects and solutions.
✓ Expertise in machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and deep learning.
✓ Advanced proficiency in programming languages such as Python, R, or similar, and mastery of machine learning libraries (e.g., TensorFlow, scikit-learn, PyTorch).
✓ Extensive experience with data wrangling, feature engineering, and working with large, complex, high-volume datasets.
✓ Proven track record in model validation and testing, ensuring models are robust, reliable, and scalable across different use cases and environments.
✓ Deep understanding of the P&C Insurance domain, with specific experience in areas like claims prediction, risk modeling, pricing optimization, and customer segmentation.
✓ Strong ability to communicate complex technical concepts to senior, non-technical stakeholders, influencing decision-making at the highest levels.
✓ Significant hands-on experience working with both structured and unstructured data at scale, with expertise in cloud platforms, distributed computing, and big data technologies.
✓ 10+ years of experience in predictive model development and data mining.
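As a toy illustration of the classification work this posting describes (e.g., claims prediction), here is a from-scratch logistic regression trained with stochastic gradient descent. The feature names and data are hypothetical, and a real model would use scikit-learn or a comparable library rather than hand-written training loops.

```python
from math import exp

def sigmoid(z):
    # Clamp to avoid math.exp overflow on extreme logits.
    if z < -35:
        return 0.0
    if z > 35:
        return 1.0
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain-Python logistic regression via SGD; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical toy data: [prior_claims, vehicle_age] -> claim filed (1) or not (0).
X = [[0, 1], [0, 2], [1, 8], [2, 9], [0, 3], [3, 10]]
y = [0, 0, 1, 1, 0, 1]
w = fit_logistic(X, y)
```

On this linearly separable toy set the fitted model scores low-risk profiles well below 0.5 and high-risk profiles well above it.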

Posted 3 days ago

Apply

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Experience: 10+ years. Role Type: Full-time.

Role Overview

We are seeking a Senior Oracle and SQL Server SME to serve as the Track Lead within a global database managed services engagement. This is a hands-on leadership role involving performance tuning, optimization, HA/DR planning, and managing operations across a heterogeneous enterprise DB environment, including Oracle (EBS/Exadata), SQL Server, and PostgreSQL (on-prem and AWS). The ideal candidate will have a deep understanding of enterprise performance diagnostics, observability tooling, compliance standards, and team coordination in a 24x7 support model.

Key Responsibilities

Track Leadership & Oversight: Lead and mentor the DBA team delivering 24x7 on-desk support across all platforms. Ensure SLA compliance for all database support tickets. Serve as the escalation point for P1/P2 incidents and drive root cause analysis. Coordinate daily operations, performance health checks, and scheduled activities including DR drills, code deployments, and database cloning.

Performance Tuning & Optimization: Perform deep-dive performance analysis using tools like AWR, ASH, ADDM, and SQL Trace/Monitor. Review and interpret AWR/ASH reports to identify inefficient SQL, wait events, I/O bottlenecks, and system load issues. Tune problematic queries, optimize indexes, analyze execution plans, and make schema design recommendations. Partner with development and application teams to implement long-term performance improvements.

Monitoring, Automation & Observability: Monitor database health using OEM, Quest Spotlight, and custom scripts. Establish and continuously refine baselines, KPIs, and automated alerts for availability and performance anomalies. Drive automation of routine DBA tasks including backups, patching, and reporting.

Administration & Lifecycle Management: Oversee patching, cloning, upgrades, and regular maintenance across ~220 databases. Manage backup/recovery strategies, database provisioning, and access control in accordance with SOX compliance. Maintain DR and HA setups, including Oracle Data Guard, SQL Server clustering, and storage replication for EBS.

EBS & Middleware Stack: Administer Oracle E-Business Suite (EBS) database environments. Ensure database support for applications like ODI and STAT, and replication tools like Oracle GoldenGate and Qlik.

Required Skills & Experience

10+ years of DBA experience, including Oracle 19c/Exadata/EBS and SQL Server administration. Hands-on tuning expertise using AWR, ADDM, ASH, Statspack, and advanced troubleshooting techniques. Strong knowledge of PostgreSQL (AWS-hosted administration is preferred). Expertise in GoldenGate, ODI, Qlik Replicate, and replication troubleshooting. Experience in HA/DR architecture, capacity planning, and SOX-compliant auditing. Scripting/automation using Shell, Python, Ansible, or similar tools.

Soft Skills

Proven leadership in managing global delivery models and multi-vendor teams. Strong communication skills and ability to interface with business, security, and application owners. Structured thinker with a focus on continuous improvement and an automation-first mindset.

Work Conditions & Expectations

Responsible for 24x7 support, including failover drills, backups, and code deployment cycles. Coordination with OEM, Rimini, and Oracle Support for escalations and patching.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Experience: 6 plus years Role Overview We are looking for a dynamic and detail-oriented SQL Server-focused DBA with experience in Oracle as a secondary skill. The ideal candidate will handle the administration, performance tuning, monitoring, and support of SQL Server environments primarily, while also assisting in Oracle environments when needed. This role is part of a global team managing business-critical databases across on-premises and cloud platforms. Key Responsibilities Provide end-to-end support for SQL Server environments including installation, configuration, security, patching, and upgrades. Administer and troubleshoot SQL Server Agent jobs, backup/restore operations, replication, clustering, and AlwaysOn availability groups. Lead or assist in performance tuning using: SQL Profiler, Execution Plans, Index Optimization, Wait Stats, and Dynamic Management Views (DMVs). Generate and interpret Performance Baselines, Monitor Dashboards, and Health Reports. Support database-related requests such as user management, data refreshes, schema changes, and security audits. Participate in AWR/ASH/Statspack analysis for Oracle environments as required. Monitor systems using Quest Spotlight, OEM, and custom alerting tools. Support high availability and disaster recovery (DR) readiness and drills. Collaborate with development, BI, and infrastructure teams to analyze and resolve performance or integration issues. Ensure compliance with SOX controls, database access standards, and audit logs. Required Technical Skills 3–5 years of solid hands-on experience in SQL Server administration, including HA/DR, tuning, and scripting. 1–2 years of working knowledge of Oracle DBs, including DataGuard concepts and RMAN. Proven experience in performance optimization in SQL Server (queries, indexes, plans). Familiarity with monitoring and observability tools such as Quest Spotlight, OEM. Knowledge of T-SQL, PowerShell scripting, and automation of routine maintenance tasks. 
Understanding of cloud-hosted SQL Server (e.g., in AWS EC2/RDS or Azure).

Preferred/Good To Have: Exposure to Oracle EBS environments or Oracle patching basics. Knowledge of Qlik Replicate, ODI, or GoldenGate. Hands-on experience with DataGuard, OEM performance packs, or clustered Oracle environments.

Soft Skills: Comfortable with 24x7 rotational shifts, collaborating across APAC, EMEA, and the Americas. Strong analytical and troubleshooting skills with the ability to communicate clearly. Ownership mindset, with willingness to automate and document frequently executed tasks.

Posted 3 days ago

Apply

5.0 - 10.0 years

1 - 4 Lacs

Pune

Work from Office

JOB DESCRIPTION
Role: BSP Engineer
Working area: IVI/ADAS BSP
Experience: ~6+ years
Technical and domain skills:
• Strong embedded development experience with good knowledge and hands-on experience in design/development/debugging aspects of board support packages (BSP) on one or more operating systems such as QNX, Linux, or Android, and hypervisor embedded systems.
• Must have hands-on development experience in C and C++.
• Hands-on experience with drivers in QNX / Linux / Android.
• Hands-on development experience in multi-threaded and multi-core environments.
• Hands-on experience with board bring-up and schematics understanding.
• Good communication and debugging skills.
• Skillset: C, C++, Linux, QNX/RTOS, UART, SPI, I2C, PCIe, Ethernet, memory/storage drivers, Hypervisor, camera/display/audio.
• Experience in using debugging tools such as JTAG, Trace32, CRO, and logic analyzers.
High-level roles and responsibilities:
• Driver customization and board bring-up.
• Collaborate with cross-functional teams and engineering for smooth execution.
• Demonstrate strong analytical and problem-solving abilities, and work closely with external customers to customize and launch their new products.
Qualification and experience:
• Bachelor's or Master's degree preferred.
• Must have excellent communication skills, both written and verbal, and debugging skills.
• The ability to collaborate and integrate with the existing team.
Role: The main responsibility is to provide direct support to OEM customers with the design, development, and debugging of reference design SW-related issues, and to help customize/optimize software to meet product requirements. The candidate must quickly ramp up onto an existing project, understand Automotive platform software driver architecture, and read/write technical specifications/requirements.

Posted 3 days ago

Apply

4.0 - 9.0 years

1 - 4 Lacs

Bengaluru

Work from Office

Job Description
Role: Performance Engineer
Working area:
Experience: ~4+ years
Technical and domain skills:
• Must have hands-on development experience in C and C++.
• Must have work experience in the Automotive domain.
• Good to have exposure to QNX/RTOS/Android.
• Analyzing architecture and metrics using performance analysis tools to determine CPU utilization, CPU frequencies, CPU process statistics, DDR profiling, memory profiling, and IO profiling.
• Exposure to reducing both CPU load and GPU load to minimize overall time consumption.
• Exposure to CPU and GPU libraries.
• Exposure to trace analysis and CPU and GPU optimization.
• Tools: Snapdragon Profiler, QProfiler, Sysprofiler, Sysmon.
• Experience in using debugging tools such as JTAG and Trace32.
High-level roles and responsibilities:
• CPU/GPU profiling and optimization.
• Collaborate with cross-functional teams and engineering for smooth execution.
Qualification and experience:
• Bachelor's or Master's degree preferred.
• Must have excellent communication skills, both written and verbal, and debugging skills.
• The ability to collaborate and integrate with the existing team.

Posted 3 days ago

Apply

4.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us - Attentive.ai is a leading provider of landscape and property management software powered by cutting-edge Artificial Intelligence (AI). Our software is designed to optimize workflows and help businesses scale up effortlessly in the outdoor services industry. Our Automeasure software caters to landscaping, snow removal, paving maintenance, and facilities maintenance businesses. We are also building Beam AI, an advanced AI engine focused on automating construction take-off and estimation workflows through deep AI. Beam AI is designed to extract intelligence from complex construction drawings, helping teams save time, reduce errors, and increase bid efficiency. Trusted by top US and Canadian sales teams, we are backed by renowned investors such as Sequoia Surge and InfoEdge Ventures. Position Description: As a Senior AI Research Engineer, you will be an integral part of our AI research team focused on transforming the construction industry through cutting-edge deep learning, computer vision, and NLP technologies. You will contribute to the development of intelligent systems for automated construction take-off and estimation by working with unstructured data such as blueprints, drawings (including SVGs), and PDF documents. In this role, you will support the end-to-end lifecycle of AI-based solutions, from prototyping and experimentation to deployment in production. Your contributions will directly impact the scalability, accuracy, and efficiency of our products. Roles & Responsibilities: Contribute to research and development initiatives focused on Computer Vision, Image Processing, and Deep Learning applied to construction-related data. Build and optimize models for extracting insights from documents such as blueprints, scanned PDFs, and SVG files. Contribute to the development of multi-modal models that integrate vision with language-based features (NLP/LLMs).
Follow best data science and machine learning practices, including data-centric development, experiment tracking, model validation, and reproducibility. Collaborate with cross-functional teams including software engineers, ML researchers, and product teams to convert research ideas into real-world applications. Write clean, scalable, and production-ready code using Python and frameworks like PyTorch, TensorFlow, or HuggingFace. Stay updated with the latest research in computer vision and machine learning and evaluate its applicability to construction industry challenges. Skills & Requirements: 4-7 years of experience in applied AI/ML and research with a strong focus on Computer Vision and Deep Learning. Solid understanding of image processing, visual document understanding, and feature extraction from visual data. Familiarity with SVG graphics, NLP, or LLM-based architectures is a plus. Deep understanding of unsupervised learning techniques like clustering, dimensionality reduction, and representation learning. Proficiency in Python and ML frameworks such as PyTorch, OpenCV, TensorFlow, and HuggingFace Transformers. Hands-on experience with model optimization techniques (e.g., quantization, pruning, knowledge distillation). Good to have: experience with version control systems (e.g., Git), project tracking tools (e.g., JIRA), and cloud environments (GCP, AWS, or Azure); familiarity with Docker, Kubernetes, and containerized ML deployment pipelines. Strong analytical and problem-solving skills with a passion for building innovative solutions; ability to rapidly prototype and iterate. Comfortable working in a fast-paced, agile, startup-like environment with excellent communication and collaboration skills. Why Work With Us? Be part of a visionary team building a first-of-its-kind AI solution for the construction industry. Exposure to real-world AI deployment and cutting-edge research in vision and multimodal learning.
Culture that encourages ownership, innovation, and growth. Opportunities for fast learning, mentorship, and career progression.
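One of the model optimization techniques this posting names, quantization, can be sketched in a few lines. This is a deliberately simplified symmetric per-tensor int8 scheme with made-up weights; frameworks like PyTorch and TensorFlow provide far more sophisticated (per-channel, calibrated, quantization-aware) variants.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

# Hypothetical float32 weights from some layer.
weights = [0.52, -1.27, 0.003, 0.9, -0.44]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each restored weight differs from the original by at most half a quantization step (scale / 2), which is the trade-off against a 4x memory reduction versus float32.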

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Key Responsibilities:
• Develop and implement machine learning models and algorithms.
• Work closely with project stakeholders to understand requirements and translate them into deliverables.
• Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
• Stay updated with the latest advancements in AI/ML technologies and methodologies.
• Collaborate with cross-functional teams to support various AI/ML initiatives.
Qualifications:
• Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
• Strong understanding of machine learning, deep learning, and Generative AI concepts.
Preferred Skills:
• Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP using Python.
• Strong knowledge and experience in Generative AI / LLM-based development.
• Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
• Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure.
• Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
• Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
• Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
• Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.), and expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
• Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
• Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalents) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL.
• Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
• Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
• Experience with version control systems (e.g., Git, CodeCommit).
Good-to-have Skills:
• Knowledge of and experience in building knowledge graphs in production.
• Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
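The chunking, embedding, and semantic-search concepts this posting lists can be illustrated with a toy sketch. Note that the "embedding" here is just a bag-of-words Counter standing in for a real embedding model, the character-based chunker is a simplification of token- or sentence-aware splitters, and all strings are hypothetical.

```python
from collections import Counter
from math import sqrt

def chunk_text(text, size=40, overlap=10):
    """Fixed-size sliding-window chunking with overlap (character-based
    for brevity; production systems usually split on tokens or sentences)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words vector; a real RAG system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini knowledge base; retrieval = highest cosine similarity.
docs = [
    "invoices are processed monthly",
    "the vector index stores embeddings",
    "semantic search ranks by similarity",
]
query = embed("how does semantic search work")
best = max(docs, key=lambda d: cosine(query, embed(d)))
```

In a real RAG pipeline the retrieved chunk(s) would then be stuffed into the LLM prompt as grounding context.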

Posted 3 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation: Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science. Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity. Translate analytical findings into insights through compelling visualizations and dashboards that inform policy, program design, and strategic decision-making. Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences. Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization: Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data). Ensure data quality, consistency, and compliance with privacy and ethical standards. Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations: Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals. Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports. Support team budgeting, financial tracking, and process optimization.
Support grantees and grants management as needed. Develop briefs, talking points, and presentation materials for leadership and external engagements. Translate strategic objectives into actionable data initiatives and track progress against milestones. Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing. Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center. Provide administrative support as needed. Manage ad-hoc projects and events organization.

Qualifications

Bachelor's degree in Data Science, Statistics, Computer Science, Public Policy, or a related field. 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting. Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js). Familiarity with unstructured data processing and robust machine learning concepts. Excellent communication skills and ability to work across technical and non-technical teams.
Technical Skills & Tools

Data Wrangling & Processing: Data cleaning, transformation, and normalization techniques. Pandas, NumPy, Dask, Polars. Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy).

Machine Learning & Modeling: Scikit-learn, XGBoost, LightGBM. Proficiency in supervised/unsupervised learning, clustering, classification, and regression. Familiarity with LLM workflows and tools like Hugging Face Transformers and LangChain (a plus).

Visualization & Reporting: Power BI, Tableau, Looker. Python libraries: Matplotlib, Seaborn, Plotly, Altair. Dashboarding tools: Streamlit, Dash. Storytelling with data and stakeholder-ready reporting.

Cloud & Collaboration Tools: Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure. Git/GitHub, Jupyter Notebooks, VS Code. Experience with APIs and data integration tools (e.g., Airflow, dbt).

Ideal Candidate: You are a curious and collaborative analyst who believes in the power of data to drive social change. You're excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
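The data cleaning and regular-expression skills listed under Data Wrangling & Processing can be illustrated with a small sketch; the record fields and values below are hypothetical.

```python
import re

def normalize_record(raw):
    """Hypothetical cleaning pass for messy survey rows: collapse runs of
    whitespace to single spaces, trim, and coerce blank values to None."""
    cleaned = {}
    for key, value in raw.items():
        if value is None:
            cleaned[key] = None
            continue
        v = re.sub(r"\s+", " ", str(value)).strip()
        cleaned[key] = v or None  # empty string after cleaning -> missing
    return cleaned

# A messy row as it might arrive from a survey export.
row = {"name": "  Ada   Lovelace ", "income": "  ", "city": "London\n"}
clean = normalize_record(row)
```

Passes like this are typically the first stage of a pipeline, before type coercion and validation against a schema.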

Posted 3 days ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies